CN111444948A - Image feature extraction and matching method
- Publication number: CN111444948A
- Application number: CN202010204462.0A
- Authority: CN (China)
- Prior art keywords: pixel, point, points, corner, gradient
- Prior art date: 2020-03-21
- Legal status: Granted
Classifications
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures (G: Physics; G06: Computing, calculating or counting; G06F: Electric digital data processing)
- G06V10/40 — Extraction of image or video features (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding)
Abstract
The invention discloses an image feature extraction and matching method comprising the following steps. Step one: preliminarily screen the feature points. Step two: perform secondary screening of the candidate corner points obtained in step one using their gradients in the X and Y directions. Step three: detect pixel-level corner points. Step four: perform sub-pixel-level corner detection, obtaining sub-pixel-level coordinates of the pixel-level corner points from step three by iteratively optimizing the Harris positions. Step five: compute a rotation-invariant fast transform descriptor. Step six: carry out feature extraction and feature matching. Building on Harris corner detection, the method speeds up corner detection through two rounds of candidate-corner screening, improves the positional accuracy of corner detection through iterative optimization, and finally represents the features with the rotation-invariant fast transform descriptor.
Description
Technical Field
The invention relates to an image feature extraction and matching method, in particular to a Harris-based feature extraction and matching method, and belongs to the field of image processing.
Background
Image matching is the task of finding similar image regions in different images. It is widely applied in image fusion, target recognition, computer vision, and related fields. Currently, image matching methods can be divided into grayscale-based and feature-based methods. Features are among the most important information in an image: for an image, a feature is an abstract description of its local information. Features greatly reduce the amount of data while retaining the key information of the image. In addition, they adapt well to image noise, grayscale changes, image deformation, and occlusion, so matching based on image features is increasingly used in practice. Harris corner detection is a commonly used method, but its accuracy is only at the pixel level, it lacks a suitable descriptor, and its computational cost is high.
Disclosure of Invention
In view of the prior art, the technical problem to be solved by the present invention is to provide an image feature extraction and matching method that effectively improves the speed and accuracy of image feature extraction and matching.
In order to solve the technical problem, the image feature extraction and matching method of the invention comprises the following steps:
S1: carrying out preliminary screening of the feature points:
converting the collected color image into a gray image, wherein the conversion formula is as follows:
Gray=(306*R+601*G+117*B)>>10
where Gray represents the gray value of the image and R, G, B represent the values of the red, green and blue channels, respectively; candidate corner points are selected according to the similarity between each pixel point in the image and the 8 other pixel points in its neighborhood, the similarity between two pixel points being determined by their gray difference; for the pixel point P at (i, j), if the absolute value of the gray difference between a pixel point in the neighborhood and the point P is less than the set gray threshold T1, that pixel point is considered similar to P; the similarity between P and the 8 pixel points in its neighborhood is detected, and the number of points similar to P is recorded and denoted N(i, j);
whether P is a possible corner point is judged from its N(i, j) value: if N(i, j) of P lies in the interval (3, 6), P is regarded as a possible corner point; all pixel points in the image are traversed, and all pixel points meeting this condition are selected as candidate corner points;
S2: performing secondary screening on the candidate corner points obtained in S1 by using the gradients of the candidate corner points in the X and Y directions;
S3: pixel-level corner detection, specifically: calculating an autocorrelation matrix for each candidate corner point obtained in S2, i.e. calculating the gradient products corresponding to each candidate corner point to obtain the autocorrelation matrix M1:
where Ix and Iy represent the gradient values of the candidate corner point in the x and y directions, respectively; the Gaussian kernel function G(x, y, σ) is then convolved with M1 to obtain a new autocorrelation matrix M2;
The corner response function value of the candidate corner is calculated and used to determine whether it is the correct corner, and the corner response function value R is calculated as follows:
Det(M2) = λ1·λ2
Tr(M2) = λ1 + λ2
R = Det(M2) - k*Tr²(M2)
where λ1 and λ2 are the eigenvalues of the autocorrelation matrix M2 and k is a constant; if the CRF value R of a point is greater than the set threshold T3, the point is selected as a pixel-level corner point;
S4: performing sub-pixel-level corner detection, obtaining the sub-pixel-level corner coordinates of the pixel-level corner points obtained in S3 by iteratively optimizing the Harris positions;
S5: calculating a rotation-invariant fast transform descriptor, specifically:
the local area selected for the descriptor is a circular area centered on the feature point with a radius of 12 pixels; the selected local area is divided into three layers by three concentric circles of radius 4, 8 and 12 pixels centered on the feature point; the central circle is one sub-area, the ring of the middle layer is evenly divided into 4 sub-areas, and the ring of the outermost layer is evenly divided into 8 sub-areas, giving 13 sub-areas in total; an 8-direction gradient vector is extracted from each sub-area, and finally a 104-dimensional feature vector is obtained as the descriptor;
firstly, taking the feature point as the center, all pixel points in the selected area are rotated in the same direction according to the main direction of the neighborhood gradient of the feature point, the main direction θ(i, j) satisfying:
secondly, the gradient direction and magnitude of each pixel in the local area are calculated: the gradient direction is calculated from θ(i, j), the range of 0° to 360° is divided into eight directions of 45° each, and the gradient direction to which each pixel belongs is determined, the gradient magnitude m(i, j) satisfying:
m(i,j) = sqrt[(I(i+1,j)-I(i-1,j))² + (I(i,j+1)-I(i,j-1))²]
the gradient weight of each pixel is determined by a Gaussian function, the Gaussian weight w(i, j) of the point satisfying:
finally, the statistical block to which each pixel contributes is determined from the position and gradient direction of the pixel, and the contribution of each pixel point to the statistical block is obtained by multiplying the gradient interpolation coefficient by the gradient magnitude; each sub-area has 8 gradient directions, so there are 104 statistical blocks in total; the gradient distribution characteristic value of the corresponding sub-area in the corresponding gradient direction is obtained by accumulating the contributions of all pixel points to a given statistical block, yielding a 104-dimensional gradient distribution feature vector in total, where the calculation formula of the difference coefficient is as follows:
the contribution k (n) of a pixel to the nth statistical block is:
k(n)=c·d(i,j)·w(i,j)·m(i,j)
the contribution values of all pixels contributing to the nth statistical block are accumulated to obtain K(n):
K(n)=∑k(n)
S6: carrying out feature extraction and feature matching.
The invention also includes:
the secondary screening of the feature points by using the gradients in the X and Y directions in the candidate corner points in S2 specifically comprises the following steps: assuming that the number of candidate angular points remained after the initial screening is N, setting 70% of the average value of the gradients in the X and Y directions as a threshold, eliminating pixel points with gradient values smaller than the threshold, and remaining the pixel points with gradient values larger than the threshold as new candidate angular points.
The sub-pixel-level corner detection of S4, obtaining the sub-pixel-level corner coordinates of the pixel-level corner points obtained in S3 by iteratively optimizing the Harris positions, is specifically: the points within a given pixel range of any pixel-level corner point O found in S3 comprise two types of points: the gray gradient value of an A-type point is 0, and the gradient direction of a B-type point is perpendicular to the vector OB; the vector from the image origin to the point O, the vector from the image origin to the i-th point within the given pixel range of the corner point O, and the vector of the k-th iteration satisfy the following condition:
wherein the formula involves the gray gradient vector and the gray gradient vector of the k-th iteration; near the coordinate point corresponding to the current iterate a new vector is selected to obtain the next iterate, and the iteration continues until the difference between successive iterates is smaller than a set error; the coordinates corresponding to the final iterate are then the sub-pixel-level corner coordinates of the corner point O.
The beneficial effects of the invention are as follows: the method fully considers both the accuracy and the efficiency of feature extraction. Addressing the accuracy and efficiency problems of Harris corner detection, it speeds up corner detection through two rounds of candidate-corner screening on the basis of Harris corner detection, improves the positional accuracy of corner detection through iterative optimization, and finally represents the features with a rotation-invariant fast transform descriptor. The invention can be used in the field of image processing. Its main advantages are as follows:
1. The invention greatly improves the speed of corner detection through two rounds of screening of the candidate corners.
2. The invention effectively improves the positional accuracy of corner detection by using an iterative optimization method.
Drawings
FIG. 1(a) shows the detection result of the Harris method;
FIG. 1(b) shows the detection result of the present invention;
FIG. 2 is a flow chart of the algorithm of the present invention.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
With reference to fig. 2, the embodiment of the present invention includes the following steps:
Step one: preliminary screening of the feature points;
converting the collected color image into a gray image, wherein the conversion formula is as follows:
Gray=(306*R+601*G+117*B)>>10 (1)
In formula (1), Gray represents the gray level of the image, and R, G, B represent the values of the red, green and blue channels, respectively. Candidate corner points are selected according to the similarity between each pixel point in the image and the 8 other pixel points in its neighborhood. The similarity between two pixel points is determined by their gray difference. For the pixel point P at (i, j), if the absolute value of the gray difference between a pixel point in the neighborhood and P is less than the set gray threshold T1, that pixel point is considered similar to P. The similarity between P and the 8 pixel points in its neighborhood is detected, and the number of points similar to P is recorded and denoted N(i, j).
From the value N(i, j) it can be determined whether the point P is a possible corner point. If N(i, j) of P is very large, the nearby pixel points are all similar to P, and P belongs to the interior of some local area. If N(i, j) of P is very small, no nearby point is similar to P, and P is an isolated pixel or a noise point. Here, P is considered a possible corner point if its N(i, j) lies in the interval (3, 6). All pixel points in the image are traversed, and all pixel points meeting this condition are selected as candidate corner points.
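The following minimal Python sketch illustrates step one under stated assumptions: the input is an OpenCV-style BGR image array, the gray threshold T1 is set to an illustrative value of 20 (the text does not fix it), and the function names are placeholders rather than terms from the patent.

```python
import numpy as np

def to_gray(bgr):
    # Fixed-point luminance approximation: Gray = (306*R + 601*G + 117*B) >> 10
    b = bgr[..., 0].astype(np.int32)
    g = bgr[..., 1].astype(np.int32)
    r = bgr[..., 2].astype(np.int32)
    return ((306 * r + 601 * g + 117 * b) >> 10).astype(np.uint8)

def preliminary_screen(gray, t1=20):
    # Count, for every pixel, the 8-neighbours whose gray difference is below t1,
    # then keep pixels whose count N(i, j) lies in the open interval (3, 6).
    g = gray.astype(np.int32)
    n = np.zeros_like(g)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            shifted = np.roll(np.roll(g, di, axis=0), dj, axis=1)
            n += (np.abs(g - shifted) < t1).astype(np.int32)
    mask = (n > 3) & (n < 6)
    mask[0, :] = mask[-1, :] = False      # ignore the image border
    mask[:, 0] = mask[:, -1] = False
    return np.argwhere(mask)              # (row, col) candidate corner positions
```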
Step two: carrying out secondary screening on the characteristic points;
After the preliminary screening, the number of pixel points involved in the subsequent computation is greatly reduced. In general, the gray value near a corner point varies strongly, so its gradient values are relatively large. The gradients of the candidate corner points in the X and Y directions can therefore be used for a secondary screening that further reduces the computational complexity of the feature extraction algorithm.
Assume that the number of candidate corner points remaining after the preliminary screening is N. Here, 70% of the average value of the gradients in the X and Y directions is set as the threshold: pixel points with gradient values smaller than the threshold are eliminated, and pixel points with larger gradients are retained as the new candidate corner points. The secondary screening thresholds are given by formulas (2) and (3):
In formula (2), Ix(i) denotes the gradient value of the i-th candidate corner point in the x direction and N the number of candidate corner points, so the x-direction threshold equals 0.7·(1/N)·ΣIx(i). Likewise, in formula (3), Iy(i) denotes the gradient value of the i-th candidate corner point in the y direction, and the y-direction threshold equals 0.7·(1/N)·ΣIy(i).
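A minimal sketch of this step is given below; it assumes the gradients are taken with a 3×3 Sobel operator (the text does not name the operator) and that a candidate must exceed the 70% threshold in both directions, since the text does not state whether one or both directions are required.

```python
import cv2
import numpy as np

def secondary_screen(gray, candidates, ratio=0.7):
    # Gradient images in the x and y directions (Sobel is an assumption).
    ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    gx = np.abs(ix[candidates[:, 0], candidates[:, 1]])
    gy = np.abs(iy[candidates[:, 0], candidates[:, 1]])
    tx = ratio * gx.mean()                # 70 % of the mean x-direction gradient
    ty = ratio * gy.mean()                # 70 % of the mean y-direction gradient
    keep = (gx >= tx) & (gy >= ty)
    return candidates[keep]
```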
Step three: detecting a pixel-level corner point;
An autocorrelation matrix is computed for each candidate corner point. The gradient products corresponding to each candidate corner point are calculated to obtain the autocorrelation matrix M1 of formula (4):
M1 = [Ix·Ix, Ix·Iy; Ix·Iy, Iy·Iy] (4)
In the formula (4), Ix、IyRepresenting the gradient values of the candidate corner points in the x and y directions, respectively.
Then using the Gaussian kernel functions G (x, y, σ) andM1performing convolution to obtain a new autocorrelation matrix M2。
Next, a Corner Response Function (CRF) value is calculated for each candidate corner point and used to decide whether it is a true corner point. Let λ1 and λ2 be the eigenvalues of the autocorrelation matrix M2. When both eigenvalues are small, the point lies in a flat area. When one eigenvalue is small and the other is large, the point lies on an edge. When both eigenvalues are large, the point is a corner point. To avoid solving for the eigenvalues explicitly, a corner response function is typically used. The CRF value R is calculated as follows:
Det(M2) = λ1·λ2 (5)
Tr(M2) = λ1 + λ2 (6)
R = Det(M2) - k*Tr²(M2) (7)
In formula (5), λ1 and λ2 denote the eigenvalues of the autocorrelation matrix M2, and Det(M2) denotes the determinant of M2. In formula (6), Tr(M2) denotes the trace of M2. In formula (7), k is a constant, usually in the range 0.04 to 0.06. The CRF value of a corner point is positive and usually not very small. If the CRF value R of a point is greater than the set threshold T3, the point is selected as a corner point.
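The sketch below evaluates the Harris response only at the surviving candidates; the Gaussian σ and the threshold T3 are illustrative values (the text gives only the usual range for k).

```python
import cv2
import numpy as np

def harris_screen(gray, candidates, k=0.04, sigma=1.5, t3=1e6):
    g = gray.astype(np.float64)
    ix = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)
    iy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)
    # Gaussian-weighted gradient products form the smoothed autocorrelation M2.
    ixx = cv2.GaussianBlur(ix * ix, (0, 0), sigma)
    iyy = cv2.GaussianBlur(iy * iy, (0, 0), sigma)
    ixy = cv2.GaussianBlur(ix * iy, (0, 0), sigma)
    det = ixx * iyy - ixy * ixy           # Det(M2) = lambda1 * lambda2
    tr = ixx + iyy                        # Tr(M2)  = lambda1 + lambda2
    r = det - k * tr * tr                 # R = Det(M2) - k * Tr^2(M2)
    keep = r[candidates[:, 0], candidates[:, 1]] > t3
    return candidates[keep]
```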
Step four: detecting angular points at a subpixel level;
More accurate sub-pixel-level corner coordinates are obtained by iteratively optimizing the Harris position. For a corner point O, points close to O can be classified into two types, one lying on an edge and the other not. The gray gradient value of an A-type point is 0, and the gradient direction of a B-type point is perpendicular to the vector OB, so the gray gradient near the corner point O can be regarded as perpendicular to the line connecting the point to the corner point O.
The mathematical expression is formula (8), which states that the dot product of the gray gradient vector at the i-th point with the vector pointing from the i-th point to the corner point O is zero. The quantities appearing in formula (8) are the gray gradient vector, the vector from the image origin to the point O, and the vector from the image origin to the i-th point.
In practice, the image is usually affected by noise, so the left-hand side of equation (10) is not exactly equal to 0. Denoting this residual as the error, we have:
the sum of the accumulated errors of all points near the corner O is E:
In this way, the problem of solving for the exact position of the corner point is transformed into the problem of minimizing the error sum E. This problem can be solved by an iterative method: both sides of equation (10) are multiplied by the corresponding gray gradient vector, and substituting all points in the region around the point O and summing gives equation (11):
A new vector is selected near the coordinate point corresponding to the current estimate, formula (13) is executed again to obtain the next estimate, and the iteration continues until the stopping condition is satisfied; the error bound is generally chosen as 1.0e-6. Finally, accurate sub-pixel-level corner coordinates are obtained.
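The iteration in equations (8)-(13) is the classical least-squares refinement built on the orthogonality of the gray gradient to the offset from the true corner; OpenCV's cornerSubPix implements the same idea, so the sketch below uses it as an illustrative stand-in rather than re-deriving the loop. The 1.0e-6 error follows the text; the window size and iteration limit are assumptions.

```python
import cv2
import numpy as np

def refine_subpixel(gray, corners, eps=1e-6, max_iter=40):
    # corners: integer (row, col) pixel-level corners; OpenCV expects float32 (x, y).
    pts = corners[:, ::-1].astype(np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, max_iter, eps)
    refined = cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria)
    return refined.reshape(-1, 2)         # sub-pixel (x, y) coordinates
```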
Step five: calculating a rotation invariant fast change descriptor;
The local region selected for the descriptor is a circular region with a radius of 12 pixels centered on the feature point. The selected local region is divided into three layers by three concentric circles of radius 4, 8 and 12 pixels centered on the feature point. The central circle is one sub-region, the ring of the middle layer is evenly divided into 4 sub-regions, and the ring of the outermost layer is evenly divided into 8 sub-regions, for a total of 13 sub-regions. An 8-direction gradient vector is extracted in each sub-region, and finally a 104-dimensional feature vector is obtained as the descriptor.
Firstly, taking the feature point as a center, and rotating all pixel points in a selected area along the same direction according to the main direction of the neighborhood gradient of the feature point. The purpose of the rotation is to align the main directions so that consistent feature vectors can be extracted in similar local regions, thereby ensuring that the rotation is invariant. The principal direction θ (i, j) is calculated as follows:
In formula (14), I(i, j) represents the gray value at the point (i, j).
Next, the gradient direction and magnitude of each pixel in the local region are calculated. The gradient direction is calculated according to equation (14), the range of 0 ° to 360 ° is divided into eight directions, each direction contains 45 °, and it is determined to which gradient direction each pixel belongs. The gradient magnitude m (i, j) is calculated as follows:
m(i,j) = sqrt[(I(i+1,j)-I(i-1,j))² + (I(i,j+1)-I(i,j-1))²] (15)
In formula (15), I(i, j) represents the gray value at the point (i, j).
The gradient weight of each pixel is determined by a Gaussian function. The Gaussian weight w(i, j) of the point is calculated as follows:
w(i,j) = exp(-r²/(2σ²)) (16)
In formula (16), r represents the distance from the point (i, j) to the corner point, and σ² denotes the variance.
Finally, the statistical block to which each pixel contributes is determined from the position and gradient direction of the pixel, and the contribution is determined by linear interpolation. Each sub-region has 8 gradient directions, so there are 104 statistical blocks in total. The contribution is obtained by multiplying the gradient interpolation coefficient by the gradient magnitude. The accumulated value of the contributions of all pixel points to a given statistical block is the gradient distribution characteristic value of the corresponding sub-region in the corresponding gradient direction. After the computation of all statistical blocks is completed, a 104-dimensional gradient distribution feature vector is obtained. The calculation formula of the difference coefficient d(i, j) is as follows:
In formula (17), r represents the distance from the point (i, j) to the corner point.
The contribution k (n) of the pixel to the nth statistical block is:
k(n)=c·d(i,j)·w(i,j)·m(i,j) (18)
In formula (18), c represents the contribution coefficient, d(i, j) the difference coefficient, w(i, j) the Gaussian weight coefficient, and m(i, j) the gradient magnitude.
The contribution values of all pixels contributing to the nth statistical block are accumulated to obtain K(n):
K(n)=∑k(n) (19)
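The sketch below assembles the 104-dimensional descriptor described above, with simplifications that are ours rather than the patent's: the alignment to the main direction is applied to pixel and gradient angles instead of resampling the image, the coefficients c and d(i, j) are folded into a single weight, the main direction is taken from the central differences at the feature point itself, σ = 6 is an illustrative value, and the final normalization is an added convenience. The name rift_descriptor is a placeholder.

```python
import numpy as np

def central_gradient(g, y, x):
    # Central differences used for the gradient direction and magnitude m(i, j).
    return g[y, x + 1] - g[y, x - 1], g[y + 1, x] - g[y - 1, x]

def rift_descriptor(gray, corner, sigma=6.0):
    g = gray.astype(np.float64)
    cx, cy = int(round(corner[0])), int(round(corner[1]))
    dx0, dy0 = central_gradient(g, cy, cx)
    theta0 = np.arctan2(dy0, dx0)                  # main (principal) direction
    hist = np.zeros((13, 8))                       # 13 sub-regions x 8 orientations
    for y in range(cy - 12, cy + 13):
        for x in range(cx - 12, cx + 13):
            if not (0 < y < g.shape[0] - 1 and 0 < x < g.shape[1] - 1):
                continue
            r = np.hypot(x - cx, y - cy)
            if r > 12:
                continue                           # outside the 12-pixel disk
            dx, dy = central_gradient(g, y, x)
            m = np.hypot(dx, dy)                   # gradient magnitude
            w = np.exp(-r * r / (2 * sigma * sigma))   # Gaussian weight
            ang = (np.arctan2(dy, dx) - theta0) % (2 * np.pi)
            o = int(ang // (np.pi / 4)) % 8        # one of eight 45-degree bins
            phi = (np.arctan2(y - cy, x - cx) - theta0) % (2 * np.pi)
            if r < 4:
                region = 0                         # central circle
            elif r < 8:
                region = 1 + int(phi // (np.pi / 2)) % 4   # middle ring, 4 parts
            else:
                region = 5 + int(phi // (np.pi / 4)) % 8   # outer ring, 8 parts
            hist[region, o] += w * m
    vec = hist.ravel()                             # 104-dimensional descriptor
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec
```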
Step six: carrying out feature extraction and feature matching.
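The text does not spell out the matching criterion used in this step; a common choice, shown here purely as an assumption, is nearest-neighbour matching of the 104-dimensional descriptors with a distance-ratio test. Chaining the sketches above (preliminary_screen, secondary_screen, harris_screen, refine_subpixel, rift_descriptor, match_descriptors) outlines the pipeline of FIG. 2, but these are illustrative helpers, not the patent's reference implementation.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    # Nearest-neighbour matching with a ratio test (assumed, not from the patent).
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```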
Experimental verification was carried out on two images. In terms of running time, the Harris algorithm takes 5.04 s while the improved algorithm takes 1.276 s, only 25.3% of the Harris time, so the proposed algorithm effectively improves the speed of feature extraction and matching. In terms of feature matching, with 400 feature points in the image, the Harris algorithm yields 87 correct matches while the improved algorithm yields 121, so the proposed algorithm effectively increases the number of correctly matched feature points.
Claims (3)
1. An image feature extraction and matching method is characterized by comprising the following steps:
S1: carrying out preliminary screening of the feature points:
converting the collected color image into a gray image, wherein the conversion formula is as follows:
Gray=(306*R+601*G+117*B)>>10
wherein Gray represents the gray value of the image and R, G, B represent the values of the red, green and blue channels, respectively; the candidate corner points are selected according to the similarity between each pixel point in the image and the 8 other pixel points in its neighborhood, the similarity between two pixel points being determined by their gray difference; for the pixel point P at (i, j), if the absolute value of the gray difference between a pixel point in the neighborhood and the point P is less than the set gray threshold T1, that pixel point is considered similar to P; the similarity between P and the 8 pixel points in its neighborhood is detected, and the number of points similar to P is recorded and denoted N(i, j);
whether P is a possible corner point is judged from its N(i, j) value: if N(i, j) of P lies in the interval (3, 6), P is regarded as a possible corner point; all pixel points in the image are traversed, and all pixel points meeting this condition are selected as candidate corner points;
S2: performing secondary screening on the candidate corner points obtained in S1 by using the gradients of the candidate corner points in the X and Y directions;
S3: pixel-level corner detection, specifically: calculating an autocorrelation matrix for each candidate corner point obtained in S2, i.e. calculating the gradient products corresponding to each candidate corner point to obtain the autocorrelation matrix M1:
where Ix and Iy represent the gradient values of the candidate corner point in the x and y directions, respectively; the Gaussian kernel function G(x, y, σ) is then convolved with M1 to obtain a new autocorrelation matrix M2;
The corner response function value of the candidate corner is calculated and used to determine whether it is the correct corner, and the corner response function value R is calculated as follows:
Det(M2) = λ1·λ2
Tr(M2) = λ1 + λ2
R = Det(M2) - k*Tr²(M2)
where λ1 and λ2 are the eigenvalues of the autocorrelation matrix M2 and k is a constant; if the CRF value R of a point is greater than the set threshold T3, the point is selected as a pixel-level corner point;
S4: performing sub-pixel-level corner detection, obtaining the sub-pixel-level corner coordinates of the pixel-level corner points obtained in S3 by iteratively optimizing the Harris positions;
S5: calculating a rotation-invariant fast transform descriptor, specifically:
the local area selected for the descriptor is a circular area centered on the feature point with a radius of 12 pixels; the selected local area is divided into three layers by three concentric circles of radius 4, 8 and 12 pixels centered on the feature point; the central circle is one sub-area, the ring of the middle layer is evenly divided into 4 sub-areas, and the ring of the outermost layer is evenly divided into 8 sub-areas, giving 13 sub-areas in total; an 8-direction gradient vector is extracted from each sub-area, and finally a 104-dimensional feature vector is obtained as the descriptor;
firstly, taking the feature point as the center, all pixel points in the selected area are rotated in the same direction according to the main direction of the neighborhood gradient of the feature point, the main direction θ(i, j) satisfying:
secondly, the gradient direction and magnitude of each pixel in the local area are calculated: the gradient direction is calculated from θ(i, j), the range of 0° to 360° is divided into eight directions of 45° each, and the gradient direction to which each pixel belongs is determined, the gradient magnitude m(i, j) satisfying:
m(i,j) = sqrt[(I(i+1,j)-I(i-1,j))² + (I(i,j+1)-I(i,j-1))²]
the gradient weight of each pixel is determined by a Gaussian function, the Gaussian weight w(i, j) of the point satisfying:
finally, the statistical block to which each pixel contributes is determined from the position and gradient direction of the pixel, and the contribution of each pixel point to the statistical block is obtained by multiplying the gradient interpolation coefficient by the gradient magnitude; each sub-area has 8 gradient directions, so there are 104 statistical blocks in total; the gradient distribution characteristic value of the corresponding sub-area in the corresponding gradient direction is obtained by accumulating the contributions of all pixel points to a given statistical block, yielding a 104-dimensional gradient distribution feature vector in total, where the calculation formula of the difference coefficient is as follows:
the contribution k (n) of a pixel to the nth statistical block is:
k(n)=c·d(i,j)·w(i,j)·m(i,j)
the contribution values of all pixels contributing to the nth statistical block are accumulated to obtain K(n):
K(n)=∑k(n)
S6: carrying out feature extraction and feature matching.
2. The image feature extraction and matching method according to claim 1, wherein the secondary screening of the feature points in S2 using the gradients of the candidate corner points in the X and Y directions is specifically: assuming that the number of candidate corner points remaining after the preliminary screening is N, 70% of the average value of the gradients in the X and Y directions is set as the threshold, pixel points with gradient values smaller than the threshold are eliminated, and pixel points with gradient values larger than the threshold are retained as the new candidate corner points.
3. The image feature extraction and matching method according to claim 1 or 2, wherein the sub-pixel-level corner detection of S4, obtaining the sub-pixel-level corner coordinates of the pixel-level corner points obtained in S3 by iteratively optimizing the Harris positions, is specifically: the points within a given pixel range of any pixel-level corner point O found in S3 comprise two types of points: the gray gradient value of an A-type point is 0, and the gradient direction of a B-type point is perpendicular to the vector OB; the vector from the image origin to the point O, the vector from the image origin to the i-th point within the given pixel range of the corner point O, and the vector of the k-th iteration satisfy the following condition:
wherein the formula involves the gray gradient vector and the gray gradient vector of the k-th iteration; near the coordinate point corresponding to the current iterate a new vector is selected to obtain the next iterate, and the iteration continues until the difference between successive iterates is smaller than a set error; the coordinates corresponding to the final iterate are then the sub-pixel-level corner coordinates of the corner point O.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010204462.0A | 2020-03-21 | 2020-03-21 | Image feature extraction and matching method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111444948A | 2020-07-24 |
| CN111444948B | 2022-11-18 |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |