CN107808165B - Infrared image matching method based on SUSAN corner detection - Google Patents
- Publication number
- CN107808165B (application CN201710980664.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- point
- corner
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides an infrared image matching method based on SUSAN corner detection, comprising the following steps: reading the images I1 and I2 to be matched and extracting an adaptive threshold T; traversing images I1 and I2 with a circular template, computing the USAN area and corner response function value at each pixel (x0, y0) of both images, suppressing false corners with the center-of-gravity principle, and recording corner position information; selecting the similarity measure NCC to perform coarse corner matching; and screening and optimizing the coarse matching point pairs with the random sample consensus algorithm RANSAC to obtain accurate matching points and complete the image matching.
Description
Technical Field
The invention relates to an image matching technology, in particular to an infrared image matching method based on SUSAN corner detection.
Background
Infrared imaging guidance is an important branch of precision guidance technology and has become a significant factor in modern high-tech warfare. Infrared image matching is one of its key technologies: it aligns two images of the same scene that have overlapping regions but were captured at different times, by different sensors, or from different viewing angles, yielding a more complete and accurate description of the scene for observation or further processing.
Corners are important local features of an image. They concentrate essential shape information while greatly reducing the data volume, possess a degree of invariance to translation, scaling, and rotation, and are largely insensitive to illumination conditions, which makes corner-based matching the most widely studied and applied matching approach today.
Corner detection algorithms fall into two main categories: those based on edge features and those based on gray-level changes. The second category operates directly on pixel gray values, avoiding the edge-detection errors and curvature computations of the first, and is the focus of current research. The most representative methods of this kind are Moravec, Harris, and SUSAN.
The Moravec algorithm is an early corner detector; its core idea is that within a corner's neighborhood the gray level changes strongly in every direction. The algorithm is simple and fast. However, because autocorrelation is evaluated in only four directions and the minimum is taken, it is highly sensitive to image edges, isolated points, and noise, giving a high false-detection rate.
The Harris algorithm improves on Moravec by detecting corners through differential operators and the autocorrelation matrix. It is simple, and the extracted corners are uniform, reasonable, and stable; however, the variance of the Gaussian function, the response-function threshold, and the constant k must be chosen manually, and the Gaussian filtering blurs edges, so corner localization accuracy in the detection result is low and some corners cannot be detected effectively.
SUSAN, proposed by Smith S M and Brady J M, is based on the Univalue Segment Assimilating Nucleus (USAN) concept and determines corners by computing the USAN area. The SUSAN algorithm is strongly noise-resistant, has a degree of rotation invariance, and localizes corners accurately. However, its gray-difference threshold is fixed, which causes both false and missed detections.
Disclosure of Invention
The invention aims to provide an infrared image matching method based on SUSAN corner detection that achieves high matching accuracy in a short matching time.
The technical scheme for realizing the purpose of the invention is as follows: an infrared image matching method based on SUSAN corner detection, comprising the following steps: reading the images I1 and I2 to be matched and extracting an adaptive threshold T; traversing images I1 and I2 with a circular template, computing the USAN area and corner response function value at each pixel (x0, y0) of both images, suppressing false corners with the center-of-gravity principle, and recording corner position information; selecting the similarity measure NCC to perform coarse corner matching; and screening and optimizing the coarse matching point pairs with the random sample consensus algorithm RANSAC to obtain accurate matching points and complete the image matching.
Compared with the prior art, the invention has the following remarkable advantages:
(1) An adaptive gray-difference threshold calculation method is proposed, solving the problem that the threshold of the traditional SUSAN algorithm must be set manually. The whole image is taken as the adaptive region: the gray values of the pixels in each column are compared, and the s largest and s smallest gray values of each column are used to compute an adaptive gray-difference threshold; the smaller of the thresholds extracted from the two images is used to detect corners in both, suppressing asymmetric corners while improving the robustness and accuracy of image matching;
(2) A false-corner suppression method is proposed, improving corner-detection accuracy. The traditional USAN is replaced by the region of the response circle whose pixels have gray values similar to the nucleus and are adjacent and connected to it, and corners are validated by the center-of-gravity principle, effectively suppressing false corners and improving image matching speed;
(3) A screening method for complex corners is proposed, improving their detection accuracy. When the USAN area equals the geometric threshold, a double-ring template is added to decide whether the point is an edge point or a corner; once a candidate corner is confirmed, its USAN area is re-approximated and its corner response value recomputed, improving the detection accuracy of complex corners.
The invention is further described below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of an infrared image matching method based on SUSAN corner detection according to the present invention.
Fig. 2(a) is a schematic view of an inner layer circular template of a double circular template, and fig. 2(b) is a schematic view of an outer layer circular template of the double circular template.
Fig. 3(a) is a diagram of results of detecting and matching corners of an infrared image based on Harris corner detection algorithm, and fig. 3(b) is a diagram of results of detecting and matching corners of an infrared image based on improved SUSAN corner detection algorithm herein.
Fig. 4 is a schematic diagram of data comparison of infrared image matching results based on Harris corner detection algorithm and the improved SUSAN corner detection algorithm herein.
Fig. 5(a) shows the infrared image corner detection and matching results of the original SUSAN algorithm, Fig. 5(b) those of Zhang Bianlian's improved algorithm, Fig. 5(c) those of Wang Wei's improved algorithm, Fig. 5(d) those of Donghai's improved algorithm, and Fig. 5(e) those of the improved algorithm of the present invention.
Fig. 6 is a schematic diagram showing comparison of infrared image matching result data based on different SUSAN algorithms.
Detailed Description
With reference to fig. 1, an infrared image matching method based on SUSAN corner detection includes the following steps:
step 1, read the images I1 and I2 to be matched and extract the adaptive threshold T;
step 2, traverse image I1 with the circular template and compute the USAN area of each pixel (x0, y0);
step 3, compute the corner response function value of each pixel; for complex corners, add the double-ring template to judge their nature and compute the corresponding corner response value;
step 4, suppress false corners using the center-of-gravity principle and record corner position information;
step 5, traverse image I2 with the circular template and compute the USAN area of each pixel (x0, y0);
step 6, compute the corner response function value of each pixel; for complex corners, add the double-ring template to judge their nature and compute the corresponding corner response value;
step 7, suppress false corners using the center-of-gravity principle and record corner position information;
step 8, select the similarity measure NCC to perform coarse corner matching;
step 9, screen and optimize the coarse matching point pairs with the random sample consensus (RANSAC) algorithm to obtain accurate matching points and complete the image matching.
Step 1, reading images I1 and I2 to be matched, and extracting an adaptive threshold T. The steps of extracting the adaptive threshold are as follows:
Step 1.1, compute the adaptive threshold T1 of image I1: take the whole image as the adaptive region, compare the gray values of the pixels in each column, and compute T1 from the s largest and s smallest gray values of each column, where n is the number of columns of image pixels. When T1 amounts to 15%-30% of ΔI1, corners under different contrasts can be extracted well, where ΔI1 represents the absolute contrast of the image; typically k ∈ [0.1, 0.3] and s ∈ [5, 10].
Step 1.2, compute the adaptive threshold T2 of image I2 as described in step 1.1.
Step 1.3, compare the thresholds computed from the two images to be matched and take the smaller value as the final adaptive threshold T.
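For illustration, step 1 can be sketched in Python/NumPy. The patent renders the threshold formula as an image, so the exact form below — k times the per-column spread between the averages of the s largest and s smallest gray values, averaged over all columns — is an assumption consistent with the symbols k, n, and s defined above, not the patent's verbatim formula:

```python
import numpy as np

def adaptive_threshold(img, k=0.2, s=5):
    # Assumed form: T = k * mean over columns of
    # (mean of s largest - mean of s smallest gray values per column).
    img = np.asarray(img, dtype=float)
    col_sorted = np.sort(img, axis=0)          # sort each column's gray values
    col_min = col_sorted[:s, :].mean(axis=0)   # average of s smallest per column
    col_max = col_sorted[-s:, :].mean(axis=0)  # average of s largest per column
    return k * (col_max - col_min).mean()

# Vertical-gradient test image: every column runs 0..255.
I1 = np.tile(np.arange(256), (64, 1)).T
T1 = adaptive_threshold(I1)
```

The final threshold would then be the smaller of the two values computed from the images to be matched, as step 1.3 specifies.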
Step 2, traverse image I1 with the circular template and compute the USAN area of each pixel (x0, y0). The USAN area of a pixel (x0, y0) is computed as follows:
Step 2.1, traverse image I1 with the circular template, compute the absolute difference between the gray value of each pixel in the template other than the nucleus and the gray value of the nucleus, and compare it with the threshold T to decide whether the pixel belongs to the USAN region,
where I(x0, y0) and I(x, y) respectively denote the gray value of the template nucleus and the gray values of the pixels other than the nucleus; the USAN criterion is augmented with the constraint that pixels whose gray values are similar to the nucleus must be connected to the nucleus. In practice, the more stable and effective similarity function c(x, y) = exp(−((I(x, y) − I(x0, y0))/T)^6) is often used instead.
Step 2.2, compute the USAN area of the pixel (x0, y0) in image I1 as n(x0, y0) = Σi c(xi′, yi′),
where num is the number of pixels contained in the circular template of diameter D, and (xi′, yi′) are the pixels of the circular template whose nucleus is (x0, y0).
Step 3, compute the corner response function value of each pixel; for complex corners, add the double-ring template shown in Fig. 2 to judge their nature and compute the corresponding corner response value. The corner response function value of a pixel is computed as follows:
Step 3.1, compare the USAN value n(x0, y0) of each pixel with the manually set geometric threshold Tg to obtain the initial response function value of the pixel: R(x0, y0) = Tg − n(x0, y0) if n(x0, y0) < Tg, and R(x0, y0) = 0 otherwise, (5)
where nmax is the maximum value of the USAN area of a pixel point, from which Tg is set;
Step 3.2, when n(x0, y0) equals the geometric threshold Tg, count the number of gray-value jumps of the pixels on the additional double-ring template; if the jump counts of the inner and outer rings are equal and greater than 2, mark the nucleus as a candidate corner and recompute its corner response value.
The USAN area is then approximated as S_USAN = S · h_USAN / H,
where S_USAN is the USAN area obtained with the circular template, S denotes the area of the circular template, H the total length of the ring template, and h_USAN the number of pixels of the USAN region lying on the ring template. Substituting S_USAN into formula (5) yields the corner response value of the point.
Step 4, suppress false corners using the center-of-gravity principle, which proceeds as follows:
Step 4.1, compute the center of gravity (xg, yg) of the USAN region of a pixel (x0, y0) in image I1,
where num is the total number of pixel points contained in the circular template.
Step 4.2, compare the Euclidean distance between the center of gravity and the nucleus with the specified threshold Tw; if the following formula is satisfied, compute the initial response value R(x0, y0):
(xg − x0)² + (yg − y0)² > Tw (8)
Take a d × d neighborhood centered on (xi, yi), with i ∈ [d − 1, N − d]; if R(xi, yi) is the maximum within the neighborhood, then (xi, yi) is a corner.
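The center-of-gravity test of step 4 can be sketched as follows (pure Python; `usan_pixels`, the list of USAN pixel coordinates gathered in step 2, and the function name are illustrative):

```python
def passes_gravity_test(usan_pixels, x0, y0, t_w):
    # A genuine corner's USAN centroid lies well away from the nucleus
    # (x0, y0); a symmetric (false) response puts the centroid on top of
    # it, failing the squared-distance test of formula (8).
    xs = [p[0] for p in usan_pixels]
    ys = [p[1] for p in usan_pixels]
    xg = sum(xs) / len(xs)
    yg = sum(ys) / len(ys)
    return (xg - x0) ** 2 + (yg - y0) ** 2 > t_w
```

A one-sided USAN region passes the test, while a USAN region distributed symmetrically around the nucleus fails it and is rejected as a false corner.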
Step 5, traverse image I2 with the circular template and compute the USAN area of each pixel (x0, y0); the specific procedure is as in step 2.
Step 6, compute the corner response function value of each pixel; for complex corners, add the double-ring template to judge their nature and compute the corresponding corner response value; the specific steps are as in step 3.
Step 7, suppress false corners using the center-of-gravity principle and record corner position information; the specific steps are as in step 4.
Step 8, select the similarity measure NCC to perform coarse corner matching. Let W1 and W2 be two equal-size windows centered on the corner pi of I1 and the corner qi of I2, let u1 and u2 be the mean gray values of the corresponding windows, and let I1(xi, yi) and I2(xi, yi) be the points to be matched pi and qi in images I1 and I2; the similarity measure between two corners is then defined as:
the specific steps of initial matching are as follows:
Step 8.1, center a 7 × 7 correlation window on any given corner of image I1 and a rectangular search window of the same size on the corners of image I2; compute the NCC between the corner of I1 and each corner inside the search window of I2 according to formula (9), and take the corner with the largest correlation coefficient as the matching point of the given corner of I1, yielding a matching point set.
Step 8.2, for any given corner of image I2, search the corresponding window region of image I1 for the corner with the largest correlation coefficient as the matching point of the given corner of I2, yielding a second matching point set.
Step 8.3, finally, find the matching corner pairs common to the two matching point sets and accept them as matches.
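The NCC similarity measure of formula (9) can be sketched as follows (Python/NumPy; extraction of the windows around each corner and the bidirectional search logic are omitted, and the function name is illustrative):

```python
import numpy as np

def ncc(w1, w2):
    # Normalized cross-correlation of two equal-size windows: subtract each
    # window's mean gray value, then divide the correlation by the product
    # of the centered windows' norms.
    a = np.asarray(w1, dtype=float)
    b = np.asarray(w2, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

An identical pair of windows scores 1 and a contrast-inverted pair −1; the bidirectional search of steps 8.1-8.3 keeps only corner pairs that are each other's best score.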
Step 9, screen and optimize the coarse matching point pairs with the random sample consensus (RANSAC) algorithm to obtain accurate matching points and complete the image matching. The geometric relationship between the reference image and the image to be matched is generally represented by a planar perspective transformation:
the transformation can completely describe the transformation relation between the reference image and the image to be matched, and 8 parameters of a transformation matrix H can be obtained by finding out 4 pairs of matching points by using a RANSAC method. The idea of the RANSAC algorithm is: firstly, randomly selecting two points to form a straight line, calculating how many points can be contained in the straight line within a certain allowable error range, wherein the contained points are called interior points, and then recalculating a new straight line according to the interior points, so that the calculation is repeated until the number of the interior points is not changed and the maximum number of the interior points is reached, thereby obtaining the best estimation.
Given a data point set A consisting of N data points, the RANSAC algorithm proceeds as follows:
step 9.1, randomly select 4 pairs of points and solve for the matrix H from them;
step 9.2, compute the coordinate positions of the remaining (N − 4) points under the matrix H;
step 9.3, compute the vertical distance d between each point's position predicted by the H matrix and its actual position;
step 9.4, take the points whose distance is smaller than the set threshold as inliers of H, and refit H from the inlier region;
step 9.5, repeat the random sampling N times to obtain the maximal inlier set.
The method fully calculates all data to obtain an optimized matching result.
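The RANSAC loop described above — sample, count inliers, keep the best consensus — can be sketched with the text's own line-fitting example (pure Python; the point set, tolerance, and iteration count are illustrative, and for the 4-point matching case a homography would replace the line model):

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    # Repeatedly sample two points, form the line a*x + b*y + c = 0 through
    # them, and keep the sample whose line has the most inliers within tol.
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = (a * a + b * b) ** 0.5
        if norm == 0.0:
            continue  # degenerate sample (identical points)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten collinear points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
inliers = ransac_line(pts, tol=0.5)
```

The consensus set recovers exactly the collinear points while the two outliers, far outside the tolerance, are rejected.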
The invention is further described below with reference to simulation examples.
Taking infrared image sequences captured at different times as source files and using the Matlab (R2012b) software platform, the proposed infrared image matching method based on SUSAN corner detection is applied to match the infrared images.
The infrared images used in this example are 320 × 256 pixels; between the two images the relative position of the vehicle changes while the remaining objects stay unchanged. The test microcomputer is configured with an AMD A6 2.1 GHz CPU and 4.0 GB of memory.
Fig. 3(a) is a diagram of results of detecting and matching corners of an infrared image based on Harris corner detection algorithm, and fig. 3(b) is a diagram of results of detecting and matching corners of an infrared image based on improved SUSAN corner detection algorithm herein. Fig. 4 is a comparison of infrared image matching result data based on Harris corner detection algorithm and the improved SUSAN corner detection algorithm herein.
As can be seen from Figs. 3 and 4, the corners detected by the proposed method are more accurate, the number of corresponding point pairs far exceeds that of the Harris algorithm, and the matching accuracy is higher. The corner-detection time of Harris is much shorter than that of the SUSAN algorithm, but its feature-point matching time is much longer, so the proposed algorithm outperforms Harris in total image matching time. The SUSAN-based corner detection algorithm thus has clear advantages for infrared image matching.
Fig. 5(a) shows the infrared image corner detection and matching results of the original SUSAN algorithm, Fig. 5(b) those of Zhang Bianlian's improved algorithm, Fig. 5(c) those of Wang Wei's improved algorithm, Fig. 5(d) those of Donghai's improved algorithm, and Fig. 5(e) those of the improved algorithm of the present invention. Fig. 6 compares the infrared image matching result data of these SUSAN algorithms.
As can be seen from Figs. 5 and 6, the image matching accuracy of the present invention is the highest. Although the Wang Wei improved algorithm yields the largest number of fine-matched corner pairs, it detects too many invalid corners, which greatly reduces its matching accuracy. Among the 5 algorithms, the corner detection time of the proposed algorithm is the longest, though not by much; its feature-point matching time is shorter than that of the other three improved algorithms and second only to the original SUSAN algorithm, and its total image matching time is short. The results demonstrate that the improved algorithm has better overall image-matching performance than the other SUSAN algorithms.
Claims (6)
1. An infrared image matching method based on SUSAN corner detection is characterized by comprising the following steps:
reading the images I1 and I2 to be matched and extracting an adaptive threshold T;
traversing images I1 and I2 with a circular template, computing the USAN area and corner response function value at each pixel (x0, y0) of both images, suppressing false corners with the center-of-gravity principle, and recording corner position information;
selecting a similarity measure NCC to carry out rough corner matching;
screening and optimizing the coarse matching point pairs by using a random sampling consistency algorithm RANSAC to obtain more accurate matching points and complete image matching;
the specific process for extracting the adaptive threshold value T is as follows:
computing adaptive thresholds T1 and T2 for image I1 and image I2
Wherein k is a constant, (i, j) is a pixel index, n is the number of columns of image pixels, s is the number of the maximum value and the minimum value of the gray scale of each column of pixels,s gray maximum values and minimum values respectively representing each column of pixels of the image;
and comparing the T1 with the T2, and taking the smaller value as the final adaptive threshold value T.
2. The method according to claim 1, wherein the USAN area of a pixel (x0, y0) of each of the two images is computed as follows:
traverse images I1 and I2 with the circular template and compute the similarity C(x0, y0),
where I(x0, y0) and I(x, y) respectively denote the gray value of the nucleus of the circular template and the gray values of the pixels other than the nucleus;
compute the USAN area n(x0, y0) of the pixel (x0, y0) as the sum of the similarity over the template pixels,
where num is the number of pixels contained in the circular template of diameter D, and (xi, yi) are the pixels of the circular template whose nucleus is (x0, y0).
3. The method according to claim 2, wherein the corner response function value of a pixel (x0, y0) is obtained as follows:
obtain the initial response function value R(x0, y0),
where nmax is the maximum value of the USAN area of a pixel point;
if n(x0, y0) = Tg, count the gray-value jumps of the pixels on the additional double-ring template; if the jump counts of the inner and outer rings are equal and greater than 2, recompute the USAN area S_USAN and substitute it for n(x0, y0) in formula (4) to obtain the corner response value of the point,
where S_USAN is the USAN area obtained from the circular template, S denotes the area of the circular template, H the total length of the ring template, and h_USAN the number of pixels of the USAN region lying on the ring template.
4. The method according to claim 3, wherein false corners are suppressed by the center-of-gravity principle as follows:
compute the center of gravity (xg, yg) of the USAN region of each pixel (x0, y0) in images I1 and I2;
compare the Euclidean distance between the center of gravity and the nucleus with the specified threshold Tw, and if formula (8) is satisfied, compute the initial response value R(x0, y0):
(xg − x0)² + (yg − y0)² > Tw; (8)
if R(xi, yi) is the maximum within the d × d neighborhood centered on the image pixel (xi, yi), then (xi, yi) is a corner.
5. The method according to claim 4, wherein the coarse corner matching with the similarity measure NCC is performed as follows:
for images I1 and I2, center a window on any given corner of one image, compute the NCC between that corner and each corner within the corresponding window of the other image according to formula (9), take the corner with the largest correlation coefficient as the matching point of the given corner, and collect the matches of all corners of the image into a matching point set,
where a and b index the images, u is the mean gray value of a window, and I(xi, yi) are the gray values of the points to be matched in the two images;
find the matching corner pairs common to the two matching point sets and accept them as matches.
6. The method according to claim 5, wherein the coarse matching point pairs are screened and optimized by the random sample consensus (RANSAC) algorithm as follows:
step 9.1, represent the geometric relationship between I1 and I2 by the planar perspective transformation of formula (10),
where (x, y) denotes a pixel position in one image and (x′, y′) the position of the corner matching (x, y) in the other image;
step 9.2, randomly select 4 pairs of matching corners and solve for the matrix H from them together with formula (10);
step 9.3, from the remaining (N − 4) points (x, y), obtain the corresponding pixel positions (x″, y″) according to formula (11), where N is the number of matching corner pairs;
step 9.4, compute the vertical distance d between each (x″, y″) and (x′, y′);
step 9.5, if d is smaller than the set threshold, (x, y) and (x′, y′) are inliers of H, and H is refit from the inlier region;
step 9.6, repeat the random sampling N times to obtain the maximal inlier set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710980664.2A CN107808165B (en) | 2017-10-19 | 2017-10-19 | Infrared image matching method based on SUSAN corner detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107808165A CN107808165A (en) | 2018-03-16 |
CN107808165B true CN107808165B (en) | 2021-04-16 |
Family
ID=61592871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710980664.2A Active CN107808165B (en) | 2017-10-19 | 2017-10-19 | Infrared image matching method based on SUSAN corner detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107808165B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109100788B (en) * | 2018-07-06 | 2020-02-07 | 东北石油大学 | Seismic data non-local mean de-noising method |
CN109285140A (en) * | 2018-07-27 | 2019-01-29 | 广东工业大学 | A kind of printed circuit board image registration appraisal procedure |
CN115187802B (en) * | 2022-09-13 | 2022-11-18 | 江苏东控自动化科技有限公司 | Accurate control method for pipeline inspection trolley |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1686051A (en) * | 2005-05-08 | 2005-10-26 | 上海交通大学 | Canthus and pupil location method based on VPP and improved SUSAN |
US8456711B2 (en) * | 2009-10-30 | 2013-06-04 | Xerox Corporation | SUSAN-based corner sharpening |
CN103295222A (en) * | 2013-02-21 | 2013-09-11 | 南京理工大学 | Implement method and device for infrared image registration |
CN103839272A (en) * | 2014-03-25 | 2014-06-04 | 重庆大学 | Brain magnetic resonance image registration method based on K-means clustering method |
CN105096317A (en) * | 2015-07-03 | 2015-11-25 | 吴晓军 | Fully automatic calibration method for high performance camera under complicated background |
Non-Patent Citations (6)
Title |
---|
A Novel Speed-up Feature Matching Algorithm for Image Registration using SUSAN and RANSAC; Monica P. Chanchlani et al.; International Journal of Engineering and Advanced Technology (IJEAT); vol. 2, no. 5; 2013-06-30; section II *
An improved SUSAN corner detection algorithm based on adaptive threshold; Yang Xingfang et al.; 2010 2nd International Conference on Signal Processing Systems (ICSPS); 2010-07-07; section III *
Robust fast corner detector based on filled circle and outer ring mask; Yuanxiu Xing et al.; IET Image Processing; vol. 10, no. 4; 2016-03-24; pp. 314-324 *
Matching technology based on SUSAN corner detection; Zhang Bianlian; Journal of Xi'an University of Arts and Science (Natural Science Edition); vol. 12, no. 4; 2009-10; pp. 55-58 *
SUSAN corner detection algorithm based on a ring template; Tang Jiangang et al.; Information Technology; 2014-01; pp. 31-34 *
SUSAN corner detection for image matching; Wang Wei et al.; Journal of Remote Sensing; 2011-09; pp. 940-956 *
Also Published As
Publication number | Publication date |
---|---|
CN107808165A (en) | 2018-03-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |