CN106991431B - Post-verification method for local feature point matching pairs - Google Patents

Post-verification method for local feature point matching pairs

Info

Publication number
CN106991431B
Authority
CN
China
Prior art keywords
local feature, diff, matching, image, matching pairs
Legal status
Active
Application number
CN201710123132.7A
Other languages
Chinese (zh)
Other versions
CN106991431A (en)
Inventor
姚金良
Current Assignee
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Application filed by Hangzhou Electronic Science and Technology University
Priority to CN201710123132.7A
Publication of CN106991431A
Application granted
Publication of CN106991431B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 - Retrieval characterised by using metadata automatically derived from the content


Abstract

The invention discloses a post-verification method for matching pairs of local feature points. The method first extracts the local feature points in an image and obtains candidate matching pairs of local feature points through visual words; it then extracts attribute change values, namely the main direction change value and the orientation change value, for the candidate matching pairs; next, it verifies whether two matching pairs are consistent according to the attribute change values of the matching pairs and thresholds; finally, a voting scheme is used to confirm, according to the number of positive votes, whether a candidate matching pair of local feature points is a correct matching pair. This post-verification method can cope with the effects of transformations such as image cropping, rotation, and scaling, and can be used in applications such as visual-vocabulary-based image retrieval and classification to improve the accuracy of retrieval and recognition. The method verifies feature point matching pairs in non-perspective-transformed images very effectively and can greatly improve both the precision and the recall of copy retrieval in visual-vocabulary-based image copy retrieval applications.

Description

A Post-Verification Method for Local Feature Point Matching Pairs

Technical Field

The invention belongs to the fields of computer image processing and image retrieval, and relates to a post-verification method for matching pairs of local feature points in two images.

Background Art

With the extensive research and application of local feature points in images, image analysis, recognition, and retrieval based on local feature points have become important approaches in image processing. Borrowing the bag-of-words model from document processing, an image can be represented as a collection of local features, which eliminates some of the redundant information in the image. In recent years, researchers have quantized the descriptors of local feature points into visual words, giving rise to the bag-of-visual-words model, which has become an important class of methods for image recognition and retrieval. The bag-of-visual-words model combined with an inverted index is currently the most effective approach to content-based image retrieval and is highly robust: in image retrieval applications it copes with various image editing operations and transformations, while the inverted index structure improves retrieval efficiency and enables real-time queries over large-scale image databases. However, unlike words in natural language, the visual words obtained by quantizing the feature vectors of local feature points carry no explicit meaning. Their discriminative power is weak, and they cannot fully represent the content of a local image region. Guaranteeing the discriminative power of visual words argues for as many visual words in the dictionary as possible, but a larger vocabulary weakens noise resistance and increases the computation needed to quantize feature vectors into visual words. Conversely, reducing the number of visual words in the dictionary to suppress noise lowers their discriminative power and leads to a high false-match rate. These false matches of visual words make the subsequent image similarity computation difficult.

Researchers have proposed many constructive methods for the false-match problem of visual words. They fall into two main categories: one adds an additional descriptor to each visual word to improve its discriminative power; the other verifies the spatial consistency of the candidate matching pairs of local feature points between the two images and thereby filters out false matching pairs. Existing additional-descriptor methods include Liang Zheng's embedding of color information into local feature points (Liang Zheng, Shengjin Wang, Qi Tian, Coupled Binary Embedding for Large-Scale Image Retrieval, IEEE Transactions on Image Processing, Vol. 23, No. 8, 2014), Yao's use of the context of a visual word as an additional descriptor, and the Hamming embedding method proposed by H. Jégou. These methods must add the additional descriptors to the index, which increases the storage consumption of the system; the robustness of the additional descriptors is also a concern.

Spatial verification based on matching pairs of local feature points is a post-verification method: once candidate images or targets have been found during retrieval or classification, it computes the spatial consistency of the local feature points matched between the query image and a candidate image. Because image editing and slight perspective transformations do not change the relative spatial relationships of the local feature points on an object in the image, the spatial consistency between matching pairs of local feature points is widely used in post-verification methods that filter out false matching pairs. The earliest approach applies RANSAC to the set of local feature point matching pairs to estimate the transformation parameters between the images and treats matching pairs that do not fit the transformation model as false matches. Because RANSAC is inefficient, researchers proposed weak geometric consistency methods, which determine the transformation parameters of the image from the differences in scale and main direction of the local feature points and use those parameters to filter false matches. Zhou proposed a spatial coding method that identifies correct matches through the consistency of the spatial positions (the relative relationships of the x and y coordinates) of matching pairs (Zhou WG, Li HQ, Lu Y, et al., Encoding spatial context for large-scale partial-duplicate Web image retrieval, Journal of Computer Science and Technology, 29(5): 837-848, Sept. 2014). Lingyang Chu built a graph model of main direction and position consistency in which strongly connected matching pairs are regarded as correct matching pairs (Lingyang Chu, Shuqiang Jiang, et al., Robust Spatial Consistency Graph Model for Partial Duplicate Image Retrieval, IEEE Transactions on Multimedia, Vol. 15, No. 8, pp. 1982-1986, December 2013). Wu grouped visual words into bundles by maximally stable extremal regions, indexed images by bundle, and measured similarity by matching the visual words within each bundle.

To address the drop in matching accuracy caused by the weakened discriminative power of local features once they are quantized into visual words, the method of the present invention uses the consistency of attribute changes between matching pairs of local feature points together with a voting scheme to confirm correct matching pairs. The method extends existing post-verification methods based on the spatial consistency of local feature points; it is fast, copes with various image editing operations, and can be applied to image recognition, retrieval, and related applications.

Summary of the Invention

The purpose of the present invention is to provide, for current visual-vocabulary-based image content retrieval applications, a post-verification method for matching pairs of local feature points. It can be used to confirm correct matching pairs of local feature points and to filter out wrong matching pairs, thereby improving retrieval accuracy.

The specific steps of the method of the present invention are as follows:

Step (1): Obtain the matching pairs of local feature points in the two images according to the visual words corresponding to the local feature points.

A visual word is the word ID obtained by quantizing the feature vector of a local feature point in the image.

The local feature points are obtained by a local feature point detection method (e.g., SIFT) and carry the following attributes in the image: spatial position, scale, main direction, and feature vector.

A matching pair of local feature points is a pair of local feature points from the two images whose visual words agree. Given two images Img and Img′ whose local feature points are denoted V_i and V_m′ respectively, if V_i and V_m′ quantize to the same visual word, then (V_i, V_m′) is a matching pair of local feature points.
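As an illustration, this matching step can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions: the LocalFeature container and the candidate_matches helper are hypothetical names, not taken from the patent, which only requires that each feature carry a visual word ID and its image attributes.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LocalFeature:          # hypothetical container, not named in the patent
    word_id: int             # visual word ID from quantizing the feature vector
    theta: float             # main direction, in radians
    x: float                 # spatial position Px
    y: float                 # spatial position Py

def candidate_matches(feats_img, feats_img2):
    """Collect every pair (V_i, V_m') whose visual word IDs agree."""
    by_word = defaultdict(list)
    for f in feats_img2:
        by_word[f.word_id].append(f)
    return [(fi, fm)
            for fi in feats_img
            for fm in by_word.get(fi.word_id, [])]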

Step (2): Compute the attribute change values of each matching pair of local feature points.

An attribute change value is the difference of an attribute between the two matched local feature points; it reflects how a local feature point in the original image is transformed into the corresponding feature point in the result image. The method of the present invention proposes two attribute change values: the main direction change value and the orientation change value. Suppose two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) exist between images Img and Img′. The main direction of V_i is denoted θ_i and its position attribute (Px_i, Py_i); the main direction of V_m′ is denoted θ_m′ and its position attribute (Px_m′, Py_m′). The main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as in formula (1). Computing the orientation change value requires two matching pairs: first obtain the orientations of the two local feature points that each matching pair contributes to the same image. In image Img the orientation of the local feature points V_i and V_j is Direction(i,j), as in formula (2), where the arctan2 function returns the arc tangent, namely the counterclockwise angle between the positive x-axis and the vector from the origin to the point (Px_j − Px_i, Py_j − Py_i). The orientations of V_m′ and V_n′ are obtained analogously by formula (3).

Diff_Ori(i,m) = θ_m′ − θ_i    (1)

Direction(i,j) = arctan2(Py_j − Py_i, Px_j − Px_i)    (2)

Direction′(m,n) = arctan2(Py_n′ − Py_m′, Px_n′ − Px_m′)    (3)

The change value of the orientation attribute, Diff_Dir(i,m), is then:

Diff_Dir(i,m) = Direction(i,j) − Direction′(m,n)    (4)
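Transcribed directly into Python, formulas (1)-(4) might read as follows; this is a sketch continuing the listing above, with LocalFeature the hypothetical container introduced there.

import math

def diff_ori(vi, vm):
    """Formula (1): main direction change value of one matching pair."""
    return vm.theta - vi.theta

def direction(va, vb):
    """Formulas (2)/(3): counterclockwise angle between the positive x-axis
    and the vector from feature va to feature vb."""
    return math.atan2(vb.y - va.y, vb.x - va.x)

def diff_dir(m1, m2):
    """Formula (4): orientation change value, defined over two matching pairs
    M1 = (V_i, V_m') and M2 = (V_j, V_n')."""
    (vi, vm), (vj, vn) = m1, m2
    return direction(vi, vj) - direction(vm, vn)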

Step (3): Confirm whether matching pairs of local feature points are consistent according to the consistency of the attribute change values between them.

Consistency of the attribute change values is judged against thresholds. If all attribute change values satisfy the threshold requirements, the two matching pairs of local feature points are considered consistent. The main direction change consistency of two matching pairs M1 and M2 is obtained by formulas (5) and (6), where TH_ORI is the threshold for main direction change consistency.

Diff_Ori_M(M1,M2) = |Diff_Ori(i,m) − Diff_Ori(j,n)|    (5)

Ori_Cons(M1,M2) = 1 if Diff_Ori_M(M1,M2) ≤ TH_ORI, and 0 otherwise    (6)

Here, Diff_Ori_M(M1,M2) is the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i,m) is the main direction change value of matching pair M1; Diff_Ori(j,n) is the main direction change value of matching pair M2; and Ori_Cons(M1,M2) is the resulting consistency value of the two matching pairs M1 and M2.

The method of the present invention eliminates the influence of image rotation on the orientation change value by subtracting the main direction change value. Specifically, the consistency of the orientation change values of the two matching pairs M1 and M2 is judged by formulas (7), (8), and (9); formula (8) eliminates the influence of image rotation, and TH_Dir is the orientation change consistency threshold.

Diff_Dir_M(M1,M2) = |Diff_Dir(i,m) − Diff_Dir(j,n)|    (7)

Diff_Dir(M1,M2) = |Diff_Dir_M(M1,M2) − Diff_Ori_M(M1,M2)|    (8)

Dir_Cons(M1,M2) = 1 if Diff_Dir(M1,M2) ≤ TH_Dir, and 0 otherwise    (9)

Here, Diff_Dir_M(M1,M2) is the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i,m) is the orientation change value of matching pair M1; Diff_Dir(j,n) is the orientation change value of matching pair M2; Diff_Dir(M1,M2) is the difference between the difference of the main direction change values and the difference of the orientation change values of the two matching pairs; and Dir_Cons(M1,M2) is the resulting orientation consistency value of the two matching pairs M1 and M2.
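The two consistency tests, formulas (5)-(9), can be sketched as below, continuing the previous listing; True/False stand in for the 1/0 of Ori_Cons and Dir_Cons. One detail the patent leaves implicit is how angle differences near ±π are handled; the wrap helper, which folds a difference into (−π, π], is our own assumption.

import math

TH_ORI = 0.087   # 5 degrees in radians; the value chosen in the embodiment below
TH_DIR = 0.087   # 5 degrees in radians; the value chosen in the embodiment below

def wrap(angle):
    """Fold an angle difference into (-pi, pi]; this wrapping is our own
    assumption, since the patent does not spell out how angles are compared."""
    return math.atan2(math.sin(angle), math.cos(angle))

def diff_ori_m(m1, m2):
    """Formula (5): difference of the main direction change values of M1, M2."""
    return abs(wrap(diff_ori(*m1) - diff_ori(*m2)))

def ori_cons(m1, m2):
    """Formula (6): main direction change consistency of M1 and M2."""
    return diff_ori_m(m1, m2) <= TH_ORI

def dir_cons(m1, m2):
    """Formulas (7)-(9): formula (8) subtracts the main-direction term so that
    a global rotation of one image does not break the orientation test."""
    (vi, vm), (vj, vn) = m1, m2
    d_im = direction(vi, vj) - direction(vm, vn)       # Diff_Dir(i,m), formula (4)
    d_jn = direction(vj, vi) - direction(vn, vm)       # Diff_Dir(j,n)
    diff_dir_m = abs(wrap(d_im - d_jn))                # formula (7)
    diff_dir_v = abs(diff_dir_m - diff_ori_m(m1, m2))  # formula (8)
    return diff_dir_v <= TH_DIR                        # formula (9)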

Step (4): Use a voting scheme to decide whether a matching pair is correct. That is, for a matching pair M_i, verify the consistency of its attribute change values against every other matching pair and cast one positive vote for each consistent pair. If the ratio of positive votes to the number of candidate matching pairs between the two images is greater than a given threshold (Th_Votes), the matching pair M_i is considered a correct match.

Compared with the prior art, the present invention has the following beneficial effects:

Unlike weak geometric constraint methods, which handle each consistency feature separately and independently, the method verifies the consistency between matching pairs through the combination of the main direction change value and the orientation change value of the pairs of local feature points, which improves the accuracy of consistency verification between matching pairs.

The main direction change value and the orientation change value proposed by the method discriminate matching-pair consistency at a very high rate. Assuming the orientations between feature points are randomly distributed, the probability that the orientation change value judges an inconsistent matching pair as consistent is TH_Dir/π; when TH_Dir is 0.087 (5 degrees), this probability is 2.8%. Likewise, when TH_ORI is 0.087 (5 degrees), the probability of misjudgment by the main direction change value is also 2.8%. With the two features judged jointly, the probability of misjudging the consistency between matching pairs is 0.028 × 0.028 ≈ 0.00077. The matching-pair consistency verification proposed by the method therefore has very high accuracy.

The method treats the feature point matching problem as one of matching parts of the image content of the two images: the changes in orientation and main direction of the feature point matching pairs within the same image content should be consistent. Moreover, because the consistency verification of feature point matching pairs is highly accurate, the voting scheme can effectively handle matches of partial image content. The method is therefore very robust to complex editing operations such as cropping and the insertion of objects.

The method eliminates the influence of image rotation on the difference of the orientation change values by subtracting the difference of the main direction change values, and the main direction of a local feature point is itself rotation-invariant. The method is therefore robust to rotation.

The method verifies the feature point matching pairs of non-perspective-transformed images very effectively, and in visual-vocabulary-based image copy retrieval applications it can greatly improve both the precision and the recall of copy retrieval.

Brief Description of the Drawings

Figure 1 is a flow chart of the method of the present invention;

Figure 2 shows the local feature point matching results based on visual words;

Figure 3 is a schematic diagram of computing the orientation attribute change value;

Figure 4 shows the numbers of votes received by matching pairs under the voting scheme;

Figure 5 shows the effect of the method in confirming correct matching pairs;

Figure 6 compares the effectiveness of the post-verification method of the present invention with other methods.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings. It should be noted that the described embodiments are intended only to aid the understanding of the invention and do not limit it in any way.

The specific steps of the method are as follows:

Step (1): Obtain the matching pairs of local feature points in the two images according to the visual words corresponding to the local feature points. Many methods exist for extracting local feature points; this embodiment adopts the widely used scale-invariant feature transform (SIFT) descriptor, which is robust to rotation, scaling, and other transformations. After SIFT extraction, an image is represented as a set of local feature points {S_i}, where S_i is a local feature point descriptor with the following attributes in the image: feature vector (F_i), main direction (θ_i), scale (σ_i), and spatial position (Px_i, Py_i). To match local feature points, the feature vectors are quantized into visual word IDs, and matching pairs of local feature points are then obtained from the agreement of the visual word IDs. To quantize the feature vectors of the local feature points into visual word IDs, this embodiment adopts the product quantization method and builds the visual vocabulary by K-means clustering; this quantization method is highly efficient. The embodiment uses a 32-dimensional SIFT local feature descriptor, and product quantization divides the 32-dimensional feature vector into 4 groups of 8 dimensions each. Each 8-dimensional sub-vector is quantized into one of 32 sub-words learned from a sample library, and combining the sub-words yields a dictionary of 2^20 visual words. Following the steps above, given two images Img and Img′ whose local feature points are denoted V_i and V_m′, if the feature vectors of V_i and V_m′ quantize to the same visual word ID, then (V_i, V_m′) is a matching pair of local feature points. Figure 2 shows an example of the matching results of this embodiment. A line segment spanning the two images indicates a matching pair of local feature points; its two endpoints are the spatial positions of the local feature points in the images, and the arrowed lines indicate the scale and main direction of the feature points. In Figure 2 the white line segments are correct matching pairs, whose corresponding image content agrees, while the black line segments are wrong matching pairs. The figure also shows that the local content of the two feature points in a wrong matching pair can be somewhat similar (both are edge points with slight curvature), but from the standpoint of the whole image their content is inconsistent. The goal of this method is to identify and filter out these wrong matching pairs.
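A sketch of this quantization step follows. The use of scikit-learn's KMeans is our choice for illustration; the patent specifies only product quantization with K-means-learned codebooks (4 groups of 8 dimensions, 32 sub-words each, giving 32^4 = 2^20 words).

from sklearn.cluster import KMeans

N_GROUPS, GROUP_DIM, N_SUBWORDS = 4, 8, 32   # 32-D SIFT split into 4 groups of 8

def train_codebooks(sample_descriptors):
    """Offline: learn one 32-entry codebook per 8-D sub-vector by K-means.
    sample_descriptors is an (n, 32) NumPy array of SIFT descriptors."""
    return [
        KMeans(n_clusters=N_SUBWORDS, n_init=10).fit(
            sample_descriptors[:, g * GROUP_DIM:(g + 1) * GROUP_DIM]
        )
        for g in range(N_GROUPS)
    ]

def quantize(descriptor, codebooks):
    """Map one 32-D descriptor to a visual word ID in [0, 32**4) = [0, 2**20)."""
    word_id = 0
    for g, book in enumerate(codebooks):
        sub = descriptor[g * GROUP_DIM:(g + 1) * GROUP_DIM].reshape(1, -1)
        word_id = word_id * N_SUBWORDS + int(book.predict(sub)[0])
    return word_id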

Step (2): Compute the attribute change values of each matching pair of local feature points. An attribute change value is the difference of an attribute between the two matched local feature points; it reflects how a local feature point in the original image is transformed into the corresponding feature point in the result image. The method proposes two attribute change values: the main direction change value and the orientation change value. Suppose two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) exist between images Img and Img′. The main direction and position attributes of V_i and V_m′ are denoted (θ_i, Px_i, Py_i) and (θ_m′, Px_m′, Py_m′), respectively. The main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as in formula (10). Computing the orientation change value requires two matching pairs: first obtain the orientations of the two local feature points that each matching pair contributes to the same image. In image Img the orientation of the local feature points V_i and V_j is Direction(i,j), as in formula (11), where the arctan2 function returns the arc tangent, namely the counterclockwise angle between the positive x-axis and the vector from the origin to the point (Px_j − Px_i, Py_j − Py_i). The orientations of V_m′ and V_n′ are obtained analogously by formula (12).

Diff_Ori(i,m) = θ_m′ − θ_i    (10)

Direction(i,j) = arctan2(Py_j − Py_i, Px_j − Px_i)    (11)

Direction′(m,n) = arctan2(Py_n′ − Py_m′, Px_n′ − Px_m′)    (12)

The change value of the orientation attribute, Diff_Dir(i,m), is then:

Diff_Dir(i,m) = Direction(i,j) − Direction′(m,n)    (13)

The orientation attribute change value is illustrated in Figure 3: the orientation attribute in an image is represented by the angle between the line connecting two feature points and the x-axis. If the image has not suffered a rotation attack, the orientation attributes of two consistent matching pairs should be identical. The orientation attribute change value expresses the stability of the orientation between two feature points in the image. Likewise, the main direction attribute change value is robust to most operations other than rotation.

Step (3): Confirm whether matching pairs of local feature points are consistent according to the consistency of the attribute change values between them. Consistency is judged against thresholds: if all attribute change values satisfy the threshold requirements, the two matching pairs are considered consistent. The main direction change consistency of two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) is obtained by formulas (14) and (15), where TH_ORI is the main direction change consistency threshold. The method confirms consistency by comparing two matching pairs: if the image is rotated, the corresponding matched feature points rotate with it, so the difference in main direction change values between the two matching pairs is unaffected by image rotation. In this embodiment, experiments on the relevant test sets led to setting TH_ORI to 0.087 (5 degrees).

Diff_Ori_M(M1,M2) = |Diff_Ori(i,m) − Diff_Ori(j,n)|    (14)

Ori_Cons(M1,M2) = 1 if Diff_Ori_M(M1,M2) ≤ TH_ORI, and 0 otherwise    (15)

Here, Diff_Ori_M(M1,M2) is the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i,m) is the main direction change value of matching pair M1; Diff_Ori(j,n) is the main direction change value of matching pair M2; and Ori_Cons(M1,M2) is the resulting consistency value of the two matching pairs M1 and M2.

The method eliminates the influence of image rotation on the orientation change value by subtracting the main direction change value. Specifically, the consistency of the orientation change values of the two matching pairs M1 and M2 is judged by formulas (16), (17), and (18); formula (17) eliminates the influence of image rotation, and TH_Dir is the orientation change consistency threshold.

Diff_Dir_M(M1,M2) = |Diff_Dir(i,m) − Diff_Dir(j,n)|    (16)

Diff_Dir(M1,M2) = |Diff_Dir_M(M1,M2) − Diff_Ori_M(M1,M2)|    (17)

Dir_Cons(M1,M2) = 1 if Diff_Dir(M1,M2) ≤ TH_Dir, and 0 otherwise    (18)

Here, Diff_Dir_M(M1,M2) is the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i,m) is the orientation change value of matching pair M1; Diff_Dir(j,n) is the orientation change value of matching pair M2; Diff_Dir(M1,M2) is the difference between the difference of the main direction change values and the difference of the orientation change values of the two matching pairs; and Dir_Cons(M1,M2) is the resulting orientation consistency value of the two matching pairs M1 and M2.

In this embodiment, experiments on the relevant test sets led to setting TH_Dir likewise to 0.087 (5 degrees).

Computing the pairwise orientation consistency of all candidate matching pairs between two images requires considerable computation. To improve verification efficiency, this embodiment adopts two strategies. Strategy 1: judge main direction consistency first; if the two matching pairs are already judged inconsistent, the subsequent orientation consistency judgment is skipped, which reduces computation. Strategy 2: once a matching pair has been verified by other matching pairs, that is, has received enough positive votes, no further verification is needed and the pair is confirmed directly as correct, which improves efficiency. In addition, candidate matching pairs can be filtered with a list of high-frequency words, improving verification efficiency further.

Step (4): Use a voting scheme to decide whether a matching pair is correct. That is, for a matching pair M_i, verify the consistency of its attribute change values against the other matching pairs and cast one positive vote for each consistent pair. If the number of positive votes divided by the number of candidate matching pairs between the two images is greater than a given threshold (Th_Votes), the matching pair M_i is considered a correct match. In this embodiment, Th_Votes is set to 0.2 based on experiments. The algorithm flow for computing the number of correct matching pairs between two images, built from the steps above, is sketched below.

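The original publication presents this algorithm as a figure that is not reproduced here; the sketch below reconstructs it from steps (1)-(4) and folds in the two efficiency strategies of the embodiment. The function name and loop structure are ours, continuing the earlier listings.

TH_VOTES = 0.2   # vote-ratio threshold, set experimentally in the embodiment

def verify_matches(pairs):
    """Return the candidate matching pairs confirmed as correct by voting."""
    n = len(pairs)
    needed = TH_VOTES * n        # positive votes required to confirm one pair
    confirmed = []
    for i, m1 in enumerate(pairs):
        votes = 0
        for j, m2 in enumerate(pairs):
            if i == j:
                continue
            # Strategy 1: the cheap main direction test runs first and skips
            # the orientation test whenever it already fails.
            if not ori_cons(m1, m2):
                continue
            if dir_cons(m1, m2):
                votes += 1
                # Strategy 2: stop verifying once this pair has enough votes.
                if votes > needed:
                    break
        if votes > needed:
            confirmed.append(m1)
    return confirmed

The number of correct matching pairs between the two images is then simply len(verify_matches(candidate_matches(feats_img, feats_img2))).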

Figure 4 shows an example of the verification results for candidate matching pairs, where the number beside each matching pair is the number of positive votes it received. Wrong matching pairs receive very few positive votes, mostly zero, so the method can effectively identify them.

To demonstrate the effectiveness of the method, relevant tests were carried out in this embodiment, with copy image retrieval based on local feature point matching as the application scenario. The standard copy-detection benchmark Holidays and a Web image library served as test sets. The Holidays test set contains 157 original images together with JPEG versions at various compression ratios and sets of cropped copies obtained from them. The Web test set consists of copy images collected from the Internet, with 32 copy images per group on average. The mAP value from the image retrieval field is used as the evaluation metric. The test results are shown in Figure 6, where Spatial Coding is the current best post-verification method for visual words and Baseline is copy image retrieval based directly on the number of candidate matching pairs, without post-verification. Figure 6 confirms that the method holds an advantage on both test sets.

Figure 5 shows the method applied to copy image retrieval: the gray-black lines indicate correct matching pairs, while the white lines indicate the false matching pairs identified by the method.

Claims (3)

1. A post-verification method for matching pairs of local feature points, characterized by comprising the following steps:

Step (1): obtaining the matching pairs of local feature points in two images according to the visual words corresponding to the local feature points;

the visual word being the word ID obtained by quantizing the feature vector of a local feature point in the image;

the local feature points being obtained by a local feature point detection method and carrying the following attributes in the image: spatial position, scale, main direction, and feature vector;

a matching pair of local feature points being a pair of local feature points from the two images whose visual words agree; given two images Img and Img′ whose local feature points are denoted V_i and V_m′ respectively, if the visual words obtained by quantizing V_i and V_m′ are the same, then (V_i, V_m′) is a matching pair of local feature points;

Step (2): computing the attribute change values of the matching pairs of local feature points;

an attribute change value being the difference of an attribute between the two matched local feature points, reflecting how a local feature point in the original image is transformed into the corresponding feature point in the result image; two attribute change values are defined: the main direction change value and the orientation change value; supposing two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) exist between images Img and Img′, the main direction of V_i being denoted θ_i and its position attribute (Px_i, Py_i), and the main direction of V_m′ being denoted θ_m′ and its position attribute (Px_m′, Py_m′); the main direction change value (Diff_Ori) is defined as the difference between the main directions of the local feature points in a matching pair, as in formula (1); computing the orientation change value requires two matching pairs: first obtain the orientations of the two local feature points of the two matching pairs in the same image; in image Img the orientation of the local feature points V_i and V_j is Direction(i,j), as in formula (2), where the arctan2 function returns the arc tangent, namely the counterclockwise angle between the positive x-axis and the vector from the origin to the point (Px_j − Px_i, Py_j − Py_i); the orientations of V_m′ and V_n′ are obtained analogously by formula (3);

Diff_Ori(i,m) = θ_m′ − θ_i    (1)

Direction(i,j) = arctan2(Py_j − Py_i, Px_j − Px_i)    (2)

Direction′(m,n) = arctan2(Py_n′ − Py_m′, Px_n′ − Px_m′)    (3)

the change value of the orientation attribute, Diff_Dir(i,m), then being:

Diff_Dir(i,m) = Direction(i,j) − Direction′(m,n)    (4)

Step (3): confirming whether the matching pairs of local feature points are consistent according to the consistency of the attribute change values between them;

Step (4): using a voting scheme to decide whether a matching pair is correct; that is, for a matching pair M_i, verifying the consistency of its attribute change values against the other matching pairs and casting one positive vote for each consistent pair; if the ratio of positive votes to the number of candidate matching pairs between the two images is greater than a given threshold Th_Votes, the matching pair M_i is considered a correct match.

2. The post-verification method for matching pairs of local feature points according to claim 1, characterized in that step (3) is implemented as follows:

the consistency of the attribute change values is judged against thresholds; when all attribute change values satisfy the threshold requirements, the two matching pairs of local feature points are considered consistent;

the main direction change consistency of two matching pairs M1: (V_i, V_m′) and M2: (V_j, V_n′) is obtained by formulas (5) and (6), where TH_ORI is the main direction change consistency threshold;

Diff_Ori_M(M1,M2) = |Diff_Ori(i,m) − Diff_Ori(j,n)|    (5)
Ori_Cons(M1,M2) = 1 if Diff_Ori_M(M1,M2) ≤ TH_ORI, and 0 otherwise    (6)
wherein Diff_Ori_M(M1,M2) is the difference between the main direction change values of the two matching pairs M1 and M2; Diff_Ori(i,m) is the main direction change value of matching pair M1; Diff_Ori(j,n) is the main direction change value of matching pair M2; and Ori_Cons(M1,M2) is the resulting consistency value of the two matching pairs M1 and M2.
3. The post-verification method for matching pairs of local feature points according to claim 2, characterized in that:

the influence of image rotation on the orientation change value is eliminated by subtracting the main direction change value; specifically:

the consistency of the orientation change values of the two matching pairs M1 and M2 is judged by formulas (7), (8), and (9); formula (8) eliminates the influence of image rotation, where TH_Dir is the orientation change consistency threshold;

Diff_Dir_M(M1,M2) = |Diff_Dir(i,m) − Diff_Dir(j,n)|    (7)

Diff_Dir(M1,M2) = |Diff_Dir_M(M1,M2) − Diff_Ori_M(M1,M2)|    (8)
Dir_Cons(M1,M2) = 1 if Diff_Dir(M1,M2) ≤ TH_Dir, and 0 otherwise    (9)
wherein Diff_Dir_M(M1,M2) is the difference between the orientation change values of the two matching pairs M1 and M2; Diff_Dir(i,m) is the orientation change value of matching pair M1; Diff_Dir(j,n) is the orientation change value of matching pair M2; Diff_Dir(M1,M2) is the difference between the difference of the main direction change values and the difference of the orientation change values of the two matching pairs M1 and M2; and Dir_Cons(M1,M2) is the resulting orientation consistency value of the two matching pairs M1 and M2.
CN201710123132.7A 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs Active CN106991431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710123132.7A CN106991431B (en) 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs


Publications (2)

Publication Number Publication Date
CN106991431A (en) 2017-07-28
CN106991431B (en) 2020-02-07

Family

ID=59412669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710123132.7A Active CN106991431B (en) 2017-03-03 2017-03-03 Post-verification method for local feature point matching pairs

Country Status (1)

Country Link
CN (1) CN106991431B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116910296B (en) * 2023-09-08 2023-12-08 上海任意门科技有限公司 Method, system, electronic device and medium for identifying transport content

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103699905A (en) * 2013-12-27 2014-04-02 深圳市捷顺科技实业股份有限公司 A license plate location method and device
CN104484671A (en) * 2014-11-06 2015-04-01 吉林大学 Target retrieval system applied to moving platform
CN104615642A (en) * 2014-12-17 2015-05-13 吉林大学 Space verification wrong matching detection method based on local neighborhood constrains

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20150169993A1 (en) * 2012-10-01 2015-06-18 Google Inc. Geometry-preserving visual phrases for image classification using local-descriptor-level weights




Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170728

Assignee: Hangzhou Zihong Technology Co., Ltd

Assignor: Hangzhou University of Electronic Science and Technology

Contract record no.: X2021330000654

Denomination of invention: A post-verification method for local feature point matching

Granted publication date: 20200207

License type: Common License

Record date: 20211104