CN111444948A - Image feature extraction and matching method - Google Patents
Image feature extraction and matching method
- Publication number
- CN111444948A (application CN202010204462.0A)
- Authority
- CN
- China
- Prior art keywords
- point
- corner
- pixel
- gradient
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Abstract
The invention discloses an image feature extraction and matching method comprising: step 1, preliminary screening of feature points; step 2, secondary screening of the candidate corners obtained in S1 using their gradients in the X and Y directions; step 3, pixel-level corner detection; step 4, sub-pixel corner detection, in which the sub-pixel coordinates of the pixel-level corners obtained in S3 are found by iteratively refining the Harris positions; step 5, rotation-invariant fast transform descriptor computation; step 6, feature extraction and feature matching. Building on Harris corner detection, the invention screens the candidate corners twice to increase the detection speed and improves the positional accuracy of corner detection through iterative refinement; finally, features are represented with a rotation-invariant fast transform descriptor.
Description
Technical Field
The invention relates to an image feature extraction and matching method, in particular to a Harris-based feature extraction and matching method, and belongs to the field of image processing.
Background Art
Image matching is a method of finding similar image regions in different images. It is widely used in image fusion, object recognition, computer vision, and other fields. Current image matching methods can be divided into intensity-based methods and feature-based methods. Features, which are abstract descriptions of local image information, carry some of the most important information in an image: they greatly reduce the amount of data while preserving the image's key content. Moreover, features adapt well to image noise, grayscale variation, deformation, and occlusion, so feature-based matching is used more and more widely in practice. The most common method is Harris corner detection, but its accuracy is limited to the pixel level, it lacks a suitable descriptor, and its computational complexity is high.
Summary of the Invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide an image feature extraction and matching method that effectively improves both the speed and the accuracy of image feature extraction and matching.
To solve this problem, the image feature extraction and matching method of the present invention comprises the following steps:
S1: preliminary screening of feature points, specifically:
Convert the captured color image to a grayscale image using the following formula:
Gray = (306*R + 601*G + 1147*B) >> 10
where Gray is the grayscale value of the image and R, G, and B are the values of the red, green, and blue channels. Candidate corners are selected according to the similarity between each pixel and the 8 other pixels in its neighborhood, and the similarity of two pixels is determined by their grayscale difference: for the pixel P at point (i, j), a neighboring pixel is considered similar to P if the absolute value of the grayscale difference between the two is less than a preset gray threshold T1. The similarity between P and the 8 pixels in its neighborhood is tested, and the number of points similar to P is recorded as N(i, j);
whether P, a pixel in some local region, is a possible corner is judged from its N(i, j) value: if N(i, j) lies in the interval (3, 6), P is regarded as a possible corner. All pixels in the image are traversed, and every pixel meeting this condition is selected as a candidate corner;
S2: secondary screening of the candidate corners obtained in S1 using their gradients in the X and Y directions;
S3: pixel-level corner detection, specifically: compute the autocorrelation matrix of each candidate corner obtained in step 2 from the products of its gradients, giving the autocorrelation matrix M1:
M1 = | Ix^2   Ix·Iy |
     | Ix·Iy  Iy^2  |
where Ix and Iy denote the gradients of the candidate corner in the x and y directions; M1 is then convolved with a Gaussian kernel G(x, y, σ) to obtain a new autocorrelation matrix M2;
compute the corner response function value of each candidate corner and use it to decide whether the candidate is a true corner; the corner response function value R is calculated as:
Det(M2) = λ1·λ2
Tr(M2) = λ1 + λ2
R = Det(M2) - k·Tr^2(M2)
where λ1 and λ2 are the eigenvalues of the autocorrelation matrix M2 and k is a constant; if the CRF value R of a point is greater than a preset threshold T3, the point is selected as a pixel-level corner;
S4: sub-pixel corner detection: obtain the sub-pixel coordinates of the pixel-level corners found in S3 by iteratively refining the Harris positions;
S5: rotation-invariant fast transform descriptor computation, specifically:
the local region used by the descriptor is a circular area of radius 12 pixels centered on the feature point. Three circles of radii 4, 8, and 12 pixels centered on the feature point divide the region into three layers: the central disc is one sub-region, the middle ring is divided evenly into 4 sub-regions, and the outermost ring is divided evenly into 8 sub-regions, giving 13 sub-regions in total. An 8-direction gradient vector is extracted from each sub-region, finally yielding a 104-dimensional feature vector as the descriptor;
first, with the feature point at the center, all pixels in the selected region are rotated in the same direction according to the dominant direction of the neighborhood gradient of the feature point, where the dominant direction θ(i, j) satisfies:
θ(i,j) = arctan[(I(i,j+1) - I(i,j-1)) / (I(i+1,j) - I(i-1,j))]
second, compute the gradient direction and magnitude of each pixel in the local region: the gradient direction is obtained from θ(i, j), the range 0° to 360° is divided into eight directions of 45° each, and each pixel is assigned to one gradient direction; the gradient magnitude m(i, j) satisfies:
m(i,j) = sqrt[(I(i+1,j) - I(i-1,j))^2 + (I(i,j+1) - I(i,j-1))^2]
the gradient of each pixel is weighted by a Gaussian, where the Gaussian weight w(i, j) of the point satisfies:
w(i,j) = exp(-r^2 / (2σ^2))
finally, according to the position and gradient direction of each pixel, the statistical bin to which it contributes is determined, and its contribution to that bin is the gradient interpolation coefficient multiplied by the gradient magnitude; each sub-region has 8 gradient directions, so there are 104 statistical bins in total. Accumulating the contributions of all pixels to a given bin yields the gradient-distribution feature value of the corresponding sub-region in the corresponding gradient direction, giving a 104-dimensional gradient-distribution feature vector, where the interpolation coefficient d(i, j) is a function of the distance from the pixel to the feature point;
the contribution k(n) of a pixel to the n-th statistical bin is:
k(n) = c·d(i,j)·w(i,j)·m(i,j)
accumulating the contribution values of all pixels that contribute to the n-th bin gives K(n):
K(n) = Σ k(n)
S6: perform feature extraction and feature matching.
The present invention also includes:
1. In S2, the secondary screening of the feature points using the gradients of the candidate corners in the X and Y directions is specifically: let N be the number of candidate corners retained after the initial screening; set 70% of the mean gradient in the X and Y directions as the threshold, eliminate pixels whose gradient values are below the threshold, and retain pixels whose gradient values exceed it as new candidate corners.
2. In S4, the sub-pixel coordinates of the pixel-level corners found in S3 are obtained by iteratively refining the Harris positions, specifically: the points within a given pixel range of any pixel-level corner O found in S3 fall into two classes: class A points, whose grayscale gradient is 0, and class B points, whose gradient direction is perpendicular to the vector from O to the point. Let p be the vector from the image origin to O and pi the vector from the image origin to the i-th point within the given pixel range of O; then the vector p(k) of the k-th iteration satisfies:
p(k) = (Σi gi(k)·gi(k)^T)^(-1) · (Σi gi(k)·gi(k)^T·pi)
where gi is the grayscale gradient vector and gi(k) is the grayscale gradient vector at the k-th iteration; a new set of points is selected near the coordinates corresponding to p(k) and the iteration is repeated until ||p(k+1) - p(k)|| < ε, where ε is a preset error; the coordinates corresponding to the final p(k) are the sub-pixel corner coordinates of the corner O.
Beneficial effects of the invention: the invention fully considers both the accuracy and the efficiency of feature extraction. Addressing the accuracy and efficiency problems of Harris corner detection, the invention builds on Harris corner detection, screens the candidate corners twice to increase detection speed, improves the positional accuracy of corner detection through iterative refinement, and finally represents features with a rotation-invariant fast transform descriptor. The invention can be used in the field of image processing. Its main advantages are:
1. By screening the candidate corners twice, the invention greatly increases the speed of corner detection.
2. By using an iterative refinement method, the invention effectively improves the positional accuracy of corner detection.
Brief Description of the Drawings
Figure 1(a) shows the result of the Harris method;
Figure 1(b) shows the result of the present invention;
Figure 2 is the flowchart of the algorithm of the present invention.
Detailed Description
Specific embodiments of the present invention are further described below with reference to the accompanying drawings.
With reference to Figure 2, a specific embodiment of the present invention comprises the following steps:
Step 1: preliminary screening of feature points.
Convert the captured color image to a grayscale image using the following formula:
Gray = (306*R + 601*G + 1147*B) >> 10    (1)
In formula (1), Gray is the grayscale value of the image, and R, G, and B are the values of the red, green, and blue channels. Candidate corners are selected according to the similarity between each pixel and the 8 other pixels in its neighborhood, and the similarity of two pixels is determined by their grayscale difference: for the pixel P at point (i, j), a neighboring pixel is considered similar to P if the absolute value of the grayscale difference between the two is less than a preset gray threshold T1. The similarity between P and the 8 pixels in its neighborhood is tested, and the number of similar points is recorded as N(i, j).
From the value N(i, j) it can be judged whether P is a possible corner. If N(i, j) is very large, the pixels near P are similar to it and P lies inside a locally uniform region; if N(i, j) is very small, no nearby point is similar to P and P is an isolated pixel or noise. Here, P is regarded as a possible corner if N(i, j) lies in the interval (3, 6). All pixels in the image are traversed, and every pixel meeting this condition is selected as a candidate corner.
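As an illustration, this first screening pass can be sketched in Python as follows. This is a minimal, unoptimized sketch assuming an H x W x 3 RGB input; the function name, the gray threshold value of 15, and the skipped one-pixel border are illustrative assumptions, while the open interval (3, 6) follows the text above:

```python
import numpy as np

def preliminary_screen(rgb, t1=15):
    """First screening pass: count the 8-neighbourhood similarity N(i, j)
    and keep pixels with N(i, j) in the open interval (3, 6)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    # Formula (1); coefficients copied from the patent text as printed
    # (note the common fixed-point weights are 306, 601, 117).
    gray = (306 * r + 601 * g + 1147 * b) >> 10

    h, w = gray.shape
    candidates = []
    for i in range(1, h - 1):            # borders skipped for simplicity
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            # Count similar neighbours; subtract 1 to exclude the centre pixel.
            n = int(np.sum(np.abs(patch - gray[i, j]) < t1)) - 1
            if 3 < n < 6:                # N(i, j) in the open interval (3, 6)
                candidates.append((i, j))
    return gray, candidates
```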
Step 2: secondary screening of feature points.
After the initial screening, the number of pixels processed in the subsequent steps is greatly reduced. The grayscale value around a corner usually varies strongly, so the gradient there is relatively large. The gradients of the candidate corners in the X and Y directions can therefore be used for a second screening pass, further reducing the computational complexity of the feature extraction algorithm.
Let N be the number of candidate corners retained after the initial screening. Here, 70% of the mean gradient in the X and Y directions is set as the threshold: pixels with gradient values below the threshold are eliminated, and pixels with larger gradients are retained as new candidate corners. The secondary screening thresholds are:
Tx = 0.7 · (1/N) · Σ(i=1..N) Ix(i)    (2)
Ty = 0.7 · (1/N) · Σ(i=1..N) Iy(i)    (3)
In formula (2), Ix(i) is the gradient value of the i-th candidate corner in the x direction and N is the number of candidate corners; in formula (3), Iy(i) is the gradient value of the i-th candidate corner in the y direction.
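A sketch of this pass, continuing the code above. Central differences are an assumption, as the patent does not fix the gradient operator at this step, and requiring a candidate to pass in both directions is also an assumption:

```python
def secondary_screen(gray, candidates, ratio=0.7):
    """Second screening pass: keep candidates whose gradient magnitudes
    exceed 70% of the mean gradient over all candidates, formulas (2)-(3)."""
    gray = gray.astype(np.int32)
    gx = np.array([abs(gray[i + 1, j] - gray[i - 1, j]) for i, j in candidates])
    gy = np.array([abs(gray[i, j + 1] - gray[i, j - 1]) for i, j in candidates])
    tx, ty = ratio * gx.mean(), ratio * gy.mean()   # thresholds Tx, Ty
    return [p for p, dx, dy in zip(candidates, gx, gy) if dx > tx and dy > ty]
```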
Step 3: pixel-level corner detection.
Compute the autocorrelation matrix of each candidate corner: the gradient products corresponding to each candidate corner give the autocorrelation matrix M1 shown below.
M1 = | Ix^2   Ix·Iy |
     | Ix·Iy  Iy^2  |    (4)
In formula (4), Ix and Iy are the gradients of the candidate corner in the x and y directions, respectively.
M1 is then convolved with a Gaussian kernel G(x, y, σ) to obtain a new autocorrelation matrix M2.
Next, compute the corner response function (CRF) value of each candidate corner and use it to decide whether the candidate is a true corner. The eigenvalues of the autocorrelation matrix M2 are λ1 and λ2. When both eigenvalues are small, the point lies in a flat region; when one eigenvalue is small and the other large, the point lies on an edge; when both eigenvalues are large, the point is a corner. To avoid solving for the eigenvalues, the corner response function is commonly used. The CRF value R is calculated as follows:
Det(M2) = λ1·λ2    (5)
Tr(M2) = λ1 + λ2    (6)
R = Det(M2) - k·Tr^2(M2)    (7)
In formula (5), λ1 and λ2 are the eigenvalues of the autocorrelation matrix M2 and Det(M2) is the determinant of M2. In formula (6), Tr(M2) is the trace of M2. In formula (7), k is a constant, typically between 0.04 and 0.06. The CRF value of a corner is positive and usually not very small; if the CRF value R of a point is greater than a preset threshold T3, the point is selected as a corner.
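A sketch of the response computation with SciPy. The Sobel gradient operator, the Gaussian scale σ = 1, and the threshold value T3 = 1e6 are illustrative assumptions; only the response formula itself comes from the text above:

```python
from scipy.ndimage import gaussian_filter, sobel

def harris_response(gray, candidates, k=0.04, sigma=1.0, t3=1e6):
    """Pixel-level corner detection: R = Det(M2) - k*Tr^2(M2), formulas (4)-(7)."""
    img = gray.astype(np.float64)
    ix = sobel(img, axis=1)                  # gradient in x
    iy = sobel(img, axis=0)                  # gradient in y
    ixx = gaussian_filter(ix * ix, sigma)    # Gaussian-smoothed entries of M2
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)

    corners = []
    for i, j in candidates:
        det = ixx[i, j] * iyy[i, j] - ixy[i, j] ** 2
        tr = ixx[i, j] + iyy[i, j]
        if det - k * tr * tr > t3:           # threshold T3 is application-dependent
            corners.append((i, j))
    return corners
```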
Step 4: sub-pixel corner detection.
More accurate sub-pixel corner coordinates are obtained by iteratively refining the Harris positions. The points near a corner O fall into two classes: those on an edge and those not on an edge. The grayscale gradient at a point A not on an edge is 0, and the gradient direction at a point B on an edge is perpendicular to the vector OB; hence the grayscale gradient at any point near the corner O can be regarded as perpendicular to the line connecting that point to O.
Expressed mathematically:
gi^T · (p - pi) = 0    (8)
In formula (8), gi is the grayscale gradient vector at the i-th point, p is the vector from the image origin to the point O, and pi is the vector from the image origin to the i-th point.
In practice, images are usually affected by noise, so the left side of formula (8) is not exactly 0. Denoting the error by εi, we have:
εi = gi^T · (p - pi)    (9)
The cumulative error over all points near the corner O is E:
E = Σi εi^2    (10)
In this way, the problem of finding the exact corner position becomes the problem of minimizing the error sum E. This can be solved iteratively; multiplying both sides of equation (8) by gi gives:
gi·gi^T·p = gi·gi^T·pi    (11)
Substituting all the points in the region around O into equation (11) and summing gives:
(Σi gi·gi^T)·p = Σi gi·gi^T·pi    (12)
From formula (12), the expression for p is:
p = (Σi gi·gi^T)^(-1) · (Σi gi·gi^T·pi)    (13)
A new set of points is selected around the coordinates corresponding to p, and formula (13) is applied again to obtain the next estimate; this is iterated until ||p(k+1) - p(k)|| < ε, typically with ε = 1.0e-6, finally giving accurate sub-pixel corner coordinates.
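A sketch of this refinement loop. The window half-width of 5 pixels, the small ridge term guarding against a singular matrix, and the assumption that corners lie well away from the image border are all illustrative choices; the linear system itself follows formulas (8)-(13):

```python
def refine_subpixel(gray, corner, win=5, eps=1e-6, max_iter=50):
    """Iteratively solve (sum gi gi^T) p = sum gi gi^T pi over a window
    around the current estimate, formula (13), until the shift is < eps."""
    img = gray.astype(np.float64)
    q = np.array(corner, dtype=np.float64)
    for _ in range(max_iter):
        i0, j0 = int(round(q[0])), int(round(q[1]))
        a = np.zeros((2, 2))
        b = np.zeros(2)
        for di in range(-win, win + 1):
            for dj in range(-win, win + 1):
                i, j = i0 + di, j0 + dj
                gi = 0.5 * np.array([img[i + 1, j] - img[i - 1, j],
                                     img[i, j + 1] - img[i, j - 1]])
                gg = np.outer(gi, gi)
                a += gg
                b += gg @ np.array([i, j], dtype=np.float64)
        a += 1e-9 * np.eye(2)        # ridge term keeps a invertible in flat windows
        q_new = np.linalg.solve(a, b)
        if np.linalg.norm(q_new - q) < eps:
            return q_new
        q = q_new
    return q
```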
Step 5: rotation-invariant fast transform descriptor computation.
The local region used by the descriptor is a circular area of radius 12 pixels centered on the feature point. Three circles of radii 4, 8, and 12 pixels centered on the feature point divide the region into three layers: the central disc is one sub-region, the middle ring is divided evenly into 4 sub-regions, and the outermost ring is divided evenly into 8 sub-regions, giving 13 sub-regions in total. An 8-direction gradient vector is extracted from each sub-region, finally yielding a 104-dimensional feature vector as the descriptor.
First, with the feature point at the center, all pixels in the selected region are rotated in the same direction according to the dominant direction of the neighborhood gradient of the feature point. The purpose of the rotation is to align the dominant directions so that consistent feature vectors can be extracted from similar local regions, ensuring rotation invariance. The dominant direction θ(i, j) is calculated as follows:
θ(i,j) = arctan[(I(i,j+1) - I(i,j-1)) / (I(i+1,j) - I(i-1,j))]    (14)
In formula (14), I(i, j) is the grayscale value at point (i, j).
Second, compute the gradient direction and magnitude of each pixel in the local region. The gradient direction is computed according to equation (14); the range 0° to 360° is divided into eight directions of 45° each, and each pixel is assigned to one of them. The gradient magnitude m(i, j) is calculated as follows:
m(i,j) = sqrt[(I(i+1,j) - I(i-1,j))^2 + (I(i,j+1) - I(i,j-1))^2]    (15)
In formula (15), I(i, j) is the grayscale value at point (i, j).
The gradient of each pixel is weighted by a Gaussian. The Gaussian weight w(i, j) of a point is calculated as follows:
w(i,j) = exp(-r^2 / (2σ^2))    (16)
In formula (16), r is the distance from point (i, j) to the corner and σ^2 is the variance.
Finally, according to the position and gradient direction of each pixel, the statistical bin to which it contributes is determined, and its contribution is computed by linear interpolation. Each sub-region has 8 gradient directions, so there are 104 statistical bins in total. A pixel's contribution is obtained by multiplying the gradient interpolation coefficient by the gradient magnitude; the accumulated contribution of all pixels to a given bin is the gradient-distribution feature value of the corresponding sub-region in the corresponding gradient direction. After all bins have been computed, a 104-dimensional gradient-distribution feature vector is obtained. The interpolation coefficient d(i, j) is given by formula (17), in which r is the distance from point (i, j) to the corner.
The contribution k(n) of a pixel to the n-th statistical bin is then:
k(n) = c·d(i,j)·w(i,j)·m(i,j)    (18)
In formula (18), c is the contribution coefficient, d(i, j) is the interpolation coefficient, w(i, j) is the Gaussian weight, and m(i, j) is the gradient magnitude.
Accumulating the contribution values of all pixels that contribute to the n-th statistical bin gives K(n):
K(n) = Σ k(n)    (19)
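A sketch of the descriptor layout (13 sub-regions x 8 orientation bins = 104 dimensions). The rotation of the patch to the dominant orientation and the interpolation coefficient d(i, j) of formula (17), which the text above does not reproduce, are omitted here; the Gaussian scale σ is an assumed value, and the corner is assumed to lie at least 13 pixels from the image border:

```python
def build_descriptor(gray, corner, radii=(4, 8, 12), sigma=6.0):
    """104-dimensional descriptor: centre disc + 4 middle sectors + 8 outer
    sectors, each with 8 gradient-orientation bins, formulas (14)-(19)."""
    img = gray.astype(np.float64)
    ci, cj = corner
    hist = np.zeros((13, 8))
    r_max = radii[-1]
    for di in range(-r_max, r_max + 1):
        for dj in range(-r_max, r_max + 1):
            r = np.hypot(di, dj)
            if r > r_max:
                continue
            i, j = ci + di, cj + dj
            dx = img[i + 1, j] - img[i - 1, j]
            dy = img[i, j + 1] - img[i, j - 1]
            m = np.hypot(dx, dy)                      # magnitude, formula (15)
            theta = np.arctan2(dy, dx) % (2 * np.pi)  # direction, cf. formula (14)
            ob = int(theta // (np.pi / 4)) % 8        # one of eight 45-degree bins
            w = np.exp(-r * r / (2 * sigma * sigma))  # Gaussian weight, formula (16)
            phi = np.arctan2(dj, di) % (2 * np.pi)    # angular position in the patch
            if r <= radii[0]:                         # centre disc
                sub = 0
            elif r <= radii[1]:                       # middle ring: 4 sectors
                sub = 1 + int(phi // (np.pi / 2))
            else:                                     # outer ring: 8 sectors
                sub = 5 + int(phi // (np.pi / 4))
            hist[sub, ob] += w * m
    return hist.ravel()                               # 104-dimensional vector
```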
Step 6: perform feature extraction and feature matching.
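The patent does not spell out the matching rule; a nearest-neighbour sketch with a Lowe-style ratio test, where the Euclidean metric and the 0.8 ratio are assumptions, would be:

```python
def match_descriptors(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 (N1 x 104) to its nearest neighbour
    in desc2 (N2 x 104), accepting the match only if it clearly beats
    the second-best candidate."""
    matches = []
    for a, da in enumerate(desc1):
        d = np.linalg.norm(desc2 - da, axis=1)   # distances to all descriptors
        nn = np.argsort(d)[:2]                   # assumes desc2 has at least 2 rows
        if d[nn[0]] < ratio * d[nn[1]]:
            matches.append((a, int(nn[0])))
    return matches
```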
Experimental verification was carried out on two photographs. In terms of running time, the Harris algorithm took 5.04 s and the improved Harris algorithm 1.276 s, only 25.3% of the Harris time, so the algorithm effectively speeds up feature extraction and matching. In terms of feature matching, with 400 feature points in the image, the Harris algorithm produced 87 correct matches and the improved Harris algorithm 121; the algorithm thus effectively increases the number of correctly matched feature points.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010204462.0A CN111444948B (en) | 2020-03-21 | 2020-03-21 | Image feature extraction and matching method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010204462.0A CN111444948B (en) | 2020-03-21 | 2020-03-21 | Image feature extraction and matching method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111444948A true CN111444948A (en) | 2020-07-24 |
CN111444948B CN111444948B (en) | 2022-11-18 |
Family
ID=71629600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010204462.0A Active CN111444948B (en) | 2020-03-21 | 2020-03-21 | Image feature extraction and matching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111444948B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819844A (en) * | 2021-01-29 | 2021-05-18 | 山东建筑大学 | Image edge detection method and device |
CN113111212A (en) * | 2021-04-01 | 2021-07-13 | 广东拓斯达科技股份有限公司 | Image matching method, device, equipment and storage medium |
CN114694162A (en) * | 2022-05-31 | 2022-07-01 | 深圳航天信息有限公司 | Invoice image recognition method and system based on image processing |
CN115861603A (en) * | 2022-12-29 | 2023-03-28 | 宁波星巡智能科技有限公司 | Interest region locking method, device, equipment and storage medium |
CN117114971A (en) * | 2023-08-01 | 2023-11-24 | 北京城建设计发展集团股份有限公司 | Pixel map-to-vector map conversion method and system |
CN118823751A (en) * | 2024-07-09 | 2024-10-22 | 深圳市睿灏科技服务有限公司 | A drug identification method and device for intelligent hardware |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104221031A (en) * | 2011-11-18 | 2014-12-17 | 美塔欧有限公司 | Method of matching image features with reference features and integrated circuit therefor |
CN105787912A (en) * | 2014-12-18 | 2016-07-20 | 南京大目信息科技有限公司 | Classification-based step type edge sub pixel localization method |
CN106096621A (en) * | 2016-06-02 | 2016-11-09 | 西安科技大学 | Based on vector constraint fall position detection random character point choosing method |
CN106127755A (en) * | 2016-06-21 | 2016-11-16 | 奇瑞汽车股份有限公司 | The image matching method of feature based and device |
EP3113077A1 (en) * | 2015-06-30 | 2017-01-04 | Lingaro Sp. z o.o. | A method and a system for image feature point description |
CN107909085A (en) * | 2017-12-01 | 2018-04-13 | 中国科学院长春光学精密机械与物理研究所 | A kind of characteristics of image Angular Point Extracting Method based on Harris operators |
CN107992791A (en) * | 2017-10-13 | 2018-05-04 | 西安天和防务技术股份有限公司 | Target following failure weight detecting method and device, storage medium, electronic equipment |
- 2020-03-21: CN application CN202010204462.0A filed; patent CN111444948B active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104221031A (en) * | 2011-11-18 | 2014-12-17 | 美塔欧有限公司 | Method of matching image features with reference features and integrated circuit therefor |
CN105787912A (en) * | 2014-12-18 | 2016-07-20 | 南京大目信息科技有限公司 | Classification-based step type edge sub pixel localization method |
EP3113077A1 (en) * | 2015-06-30 | 2017-01-04 | Lingaro Sp. z o.o. | A method and a system for image feature point description |
CN106096621A (en) * | 2016-06-02 | 2016-11-09 | 西安科技大学 | Based on vector constraint fall position detection random character point choosing method |
CN106127755A (en) * | 2016-06-21 | 2016-11-16 | 奇瑞汽车股份有限公司 | The image matching method of feature based and device |
CN107992791A (en) * | 2017-10-13 | 2018-05-04 | 西安天和防务技术股份有限公司 | Target following failure weight detecting method and device, storage medium, electronic equipment |
CN107909085A (en) * | 2017-12-01 | 2018-04-13 | 中国科学院长春光学精密机械与物理研究所 | A kind of characteristics of image Angular Point Extracting Method based on Harris operators |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112819844A (en) * | 2021-01-29 | 2021-05-18 | 山东建筑大学 | Image edge detection method and device |
CN112819844B (en) * | 2021-01-29 | 2023-03-14 | 山东建筑大学 | Image edge detection method and device |
CN113111212A (en) * | 2021-04-01 | 2021-07-13 | 广东拓斯达科技股份有限公司 | Image matching method, device, equipment and storage medium |
CN113111212B (en) * | 2021-04-01 | 2024-05-17 | 广东拓斯达科技股份有限公司 | Image matching method, device, equipment and storage medium |
CN114694162A (en) * | 2022-05-31 | 2022-07-01 | 深圳航天信息有限公司 | Invoice image recognition method and system based on image processing |
CN115861603A (en) * | 2022-12-29 | 2023-03-28 | 宁波星巡智能科技有限公司 | Interest region locking method, device, equipment and storage medium |
CN115861603B (en) * | 2022-12-29 | 2023-09-26 | 宁波星巡智能科技有限公司 | Method, device, equipment and medium for locking region of interest in infant care scene |
CN117114971A (en) * | 2023-08-01 | 2023-11-24 | 北京城建设计发展集团股份有限公司 | Pixel map-to-vector map conversion method and system |
CN117114971B (en) * | 2023-08-01 | 2024-03-08 | 北京城建设计发展集团股份有限公司 | Pixel map-to-vector map conversion method and system |
CN118823751A (en) * | 2024-07-09 | 2024-10-22 | 深圳市睿灏科技服务有限公司 | A drug identification method and device for intelligent hardware |
Also Published As
Publication number | Publication date |
---|---|
CN111444948B (en) | 2022-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111444948B (en) | Image feature extraction and matching method | |
US9141871B2 (en) | Systems, methods, and software implementing affine-invariant feature detection implementing iterative searching of an affine space | |
CN107067415B (en) | A kind of object localization method based on images match | |
US7577297B2 (en) | Pattern identification method, device thereof, and program thereof | |
US8233716B2 (en) | System and method for finding stable keypoints in a picture image using localized scale space properties | |
CN109118473B (en) | Angular point detection method based on neural network, storage medium and image processing system | |
CN109886878B (en) | Infrared image splicing method based on coarse-to-fine registration | |
CN109712071B (en) | UAV image stitching and positioning method based on track constraints | |
CN108805904A (en) | A kind of moving ship detection and tracking based on satellite sequence image | |
CN102800099B (en) | Multi-feature multi-level visible light and high-spectrum image high-precision registering method | |
CN101493891A (en) | Characteristic extracting and describing method with mirror plate overturning invariability based on SIFT | |
CN106981077A (en) | Infrared image and visible light image registration method based on DCE and LSS | |
CN101650784A (en) | Method for matching images by utilizing structural context characteristics | |
CN103761768A (en) | Stereo matching method of three-dimensional reconstruction | |
US20190279022A1 (en) | Object recognition method and device thereof | |
CN115471682A (en) | An Image Matching Method Based on SIFT Fusion ResNet50 | |
CN110516528A (en) | A moving target detection and tracking method based on moving background | |
CN110222661B (en) | Feature extraction method for moving target identification and tracking | |
CN114358166A (en) | Multi-target positioning method based on self-adaptive k-means clustering | |
CN110210511A (en) | A kind of improvement PCA-SIFT method for registering images based on cosine measure | |
CN108319961B (en) | A fast detection method of image ROI based on local feature points | |
WO2024130857A1 (en) | Article display inspection method and apparatus, and device and readable storage medium | |
CN106529548A (en) | Sub-pixel level multi-scale Harris corner detection algorithm | |
CN112132054A (en) | Document positioning and segmenting method based on deep learning | |
CN109508674B (en) | Airborne Down-View Heterogeneous Image Matching Method Based on Region Division |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |