CN107247953B - Feature point type selection method based on edge rate - Google Patents

Feature point type selection method based on edge rate

Info

Publication number
CN107247953B
Authority
CN
China
Prior art keywords
image
edge
edge rate
feature point
rate
Prior art date
Legal status
Active
Application number
CN201710389384.4A
Other languages
Chinese (zh)
Other versions
CN107247953A (en)
Inventor
Qiuhua Lin (林秋华)
Min Tian (田敏)
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201710389384.4A
Publication of CN107247953A
Application granted
Publication of CN107247953B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A feature point type selection method based on edge rate, belonging to the field of computer vision. Before feature point detection, the invention detects and classifies the structural information of the image, providing a basis for selecting the feature point type (blob or corner) suited to the image and avoiding the degraded or failed matching caused by an unsuitable feature type. For an image to be matched, the edges of the image are computed with the Canny edge detection algorithm, the edge rate is then calculated, and the image is finally classified according to the relationship between the edge rate and the high and low thresholds: if the edge rate is greater than the high threshold, the structural information of the image is very pronounced, and corner features are used; if the edge rate is less than the low threshold, the structural information is very weak, and blob features are used; if the edge rate lies between the two thresholds, the image characteristics are indistinct, and either blobs or corners can be used. The method is fast and effective, and combined with existing image matching algorithms it yields better matching performance.

Description

Feature point type selection method based on edge rate
Technical Field
The invention relates to the field of computer vision, in particular to a feature point type selection method based on an edge rate.
Background
Among existing image matching algorithms, methods based on local feature points are the most widely used. Blobs and corners are the two typical kinds of feature points, and excellent algorithms such as SIFT (for blobs) and FAST (for corners) provide technical support for their detection. In practice, the two kinds of feature points perform differently on different images. Corners suit images with pronounced structural information: such images contain many edges, junctions, and the like, so many corners can be detected. Blobs suit images with weak structural information: many blobs can still be detected there, whereas few corners can, which may cause image matching to fail. At present, image matching applications often use one fixed feature point detection algorithm, for example the SIFT algorithm (blob detection) or the FAST algorithm (corner detection). Other applications do not fix the feature point type in advance but select it in one of two ways: either prior information about the structural characteristics of the images is available before matching, and blobs or corners are chosen accordingly; or SIFT and FAST matching tests are run on a small subset of the images, and the better-performing algorithm is then used to match all of them, i.e., the feature type is selected through a pre-matching test.
However, in practical applications such as robot visual navigation, the captured images form a sequence in which the scene, and hence the structural information of the images, changes continuously. In this case, using a fixed choice of blobs or corners may cause matching to fail when the feature type does not suit the image. For example, in the Mikolajczyk image library (see MIKOLAJCZYK K., SCHMID C. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615-1630, 2005), the boat image has pronounced structural information, and the number of detectable FAST corners (FIG. 1(b)) is much larger than the number of SIFT blobs (FIG. 1(a)). In contrast, the bark image lacks structural information, and the number of detectable FAST corners (FIG. 2(b)) is much smaller than the number of SIFT blobs (FIG. 2(a)). Moreover, the corners detected in the bark image cannot be matched correctly: after false matches are eliminated by RANSAC, the number of correct matches is zero. It is therefore necessary to detect the structural information of each image intelligently before matching: if the structural information is very pronounced, corner features are used; if it is very weak, blob features are used; in other cases either may be used, i.e., there is no need to switch algorithms.
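By way of illustration, the blob/corner counts discussed above can be reproduced with OpenCV's SIFT and FAST detectors. The following is a minimal Python sketch, outside the patent text; the image file name is a placeholder:

    import cv2

    # Load a grayscale test image (the file name here is a placeholder).
    img = cv2.imread("bark.png", cv2.IMREAD_GRAYSCALE)

    # Blob-like keypoints via SIFT (cv2.SIFT_create requires OpenCV >= 4.4).
    blobs = cv2.SIFT_create().detect(img, None)

    # Corner keypoints via FAST.
    corners = cv2.FastFeatureDetector_create().detect(img, None)

    # On weakly structured images such as bark, far fewer corners than blobs
    # are found; on strongly structured images such as boat, the reverse holds.
    print(f"blobs: {len(blobs)}, corners: {len(corners)}")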
Disclosure of Invention
The invention provides a feature point type selection method based on edge rate, which detects and classifies the structural information of an image before feature point detection, provides a basis for selecting the feature point type (blob or corner) suited to the image, and solves the problem of degraded or failed matching caused by unsuitable feature points.
The technical scheme of the invention is as follows. For an image to be matched, the edges of the image are computed with the Canny edge detection algorithm (see CANNY J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986), the edge rate and its high and low thresholds are defined, the edge rate is calculated from the edge detection result, and the image is classified according to the relationship between the edge rate and the two thresholds: if the edge rate is greater than the high threshold, the structural information of the image is very pronounced, and corner features are used; if the edge rate is less than the low threshold, the structural information is very weak, and blob features are used; if the edge rate lies between the two thresholds, the image characteristics are indistinct, either blobs or corners can be used, and the existing feature point detection algorithm can be kept without switching.
The method comprises the following specific steps:
First, Gaussian filtering is applied to the input image to suppress noise. The expression for Gaussian filtering of an image is:
S(x,y)=G(x,y;σ)*I(x,y) (1)
where

G(x, y; σ) = (1/(2πσ²)) exp(-(x² + y²)/(2σ²))

is a Gaussian function with standard deviation σ, (x, y) are the coordinates of an image pixel, I(x, y) is the original grayscale image, S(x, y) is the filtered image, and * denotes convolution. Assuming the Gaussian smoothing window (also called the convolution kernel) has size s × s with s odd, σ and s are related by:
σ = 0.3 × (s/2 - 1) + 0.8   (2)
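As a quick check, formula (2) in code; a minimal sketch, with the coefficient form reconstructed here from the s = 9, σ = 1.85 example given in the embodiment below:

    def gaussian_sigma(s: int) -> float:
        """Standard deviation of the s x s Gaussian window, formula (2)."""
        assert s % 2 == 1, "window size must be odd"
        return 0.3 * (s / 2 - 1) + 0.8

    print(round(gaussian_sigma(9), 2))  # 1.85, matching the embodiment below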
Second, set a high threshold T1 and a low threshold T2, and apply the Canny algorithm to perform edge detection on the image, yielding a binary image in which edge points have gray value 255 and non-edge points have gray value 0;
Third, count the edge points: count the number of pixels with gray value 255 in the binary image obtained in the second step, and denote this number by n;
Fourth, compute the edge rate, defined as follows:
R = n / ((w - 2a) × (h - 2a))   (3)
where n is the number of edge points, w is the image width, h is the image height, and a is the width of the image border excluded from the count: formula (3) removes a rows or columns of pixels from each of the top, bottom, left, and right sides of the image, with a typically 1 to 3;
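Formula (3) translates directly into code. A minimal sketch, assuming edge_map is the binary Canny output from the second step:

    import numpy as np

    def edge_rate(edge_map: np.ndarray, a: int = 3) -> float:
        """Formula (3): R = n / ((w - 2a) * (h - 2a)).

        n counts the edge pixels (gray value 255) in the binary Canny
        output; the denominator drops a border of width a on all sides.
        """
        h, w = edge_map.shape
        n = int(np.count_nonzero(edge_map == 255))
        return n / ((w - 2 * a) * (h - 2 * a))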
Fifth, classify the image according to the relationship between the edge rate and the thresholds. The thresholds comprise a high threshold H_TH and a low threshold L_TH. When R > H_TH, the image is classified as having very pronounced structural information; otherwise the edge rate is compared with the low threshold, and when R < L_TH the image is classified as having very weak structural information; otherwise the image is classified as having indistinct characteristics;
Sixth, select the feature point type. For an image with very pronounced structural information, corner feature detection is used; for an image with very weak structural information, blob feature detection is used; for an image with indistinct characteristics, either blobs or corners may be used.
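Putting the six steps together, the following is a minimal end-to-end sketch using OpenCV, not the authoritative implementation; the Canny thresholds, window size, and Table 1 values for a 9 × 9 window are taken from the description and embodiment:

    import cv2
    import numpy as np

    H_TH, L_TH = 0.045, 0.025  # Table 1 thresholds for a 9 x 9 window

    def select_feature_type(gray: np.ndarray, s: int = 9,
                            t1: int = 150, t2: int = 50, a: int = 3) -> str:
        """Classify an image and return 'corner', 'blob', or 'either'."""
        sigma = 0.3 * (s / 2 - 1) + 0.8                    # formula (2)
        smoothed = cv2.GaussianBlur(gray, (s, s), sigma)   # step 1, formula (1)
        edges = cv2.Canny(smoothed, t2, t1)                # step 2 (low, high)
        n = int(np.count_nonzero(edges))                   # step 3: edges are 255
        h, w = gray.shape
        r = n / ((w - 2 * a) * (h - 2 * a))                # step 4, formula (3)
        if r > H_TH:                                       # step 5
            return "corner"   # very pronounced structural information
        if r < L_TH:
            return "blob"     # very weak structural information
        return "either"       # indistinct; step 6: no need to switch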
It should be noted that the high and low thresholds of the edge rate depend on the Gaussian smoothing window size and are determined experimentally on an image library. Table 1 gives the edge rate thresholds obtained when the image library comprises all images of the Mikolajczyk library and the Gaussian smoothing window size is 7 × 7, 9 × 9, or 11 × 11; different window sizes lead to different thresholds.
TABLE 1 edge rate high and low thresholds for three Gaussian smoothing window sizes
[Table 1 appears in the original as an image. The recoverable values, from the embodiment below, are H_TH = 0.045 and L_TH = 0.025 for the 9 × 9 window; the 7 × 7 and 11 × 11 thresholds are given only in the original image.]
The method has the advantage that, in image matching applications based on local feature points, it provides a basis for choosing blob or corner features. This not only avoids matching failures caused by an unsuitable feature type (such as the failure of corner-based matching on the bark image) but also improves the performance of the matching algorithm, since suitable feature points are selected. Both edge detection and edge rate computation are fast, so the method is quick and effective and, combined with existing image matching algorithms, yields better matching performance.
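As one possible integration, not prescribed by the patent, the selected type can drive an off-the-shelf detector/descriptor pipeline, with RANSAC rejecting false matches as in the background section. A sketch follows; ORB stands in for plain FAST, since FAST alone provides no descriptor, and select_feature_type is the sketch above:

    import cv2
    import numpy as np

    def match_images(img1: np.ndarray, img2: np.ndarray):
        """Match two grayscale images with the feature type chosen for img1."""
        kind = select_feature_type(img1)
        if kind == "corner":
            det, norm = cv2.ORB_create(), cv2.NORM_HAMMING
        else:  # 'blob' or 'either'
            det, norm = cv2.SIFT_create(), cv2.NORM_L2
        k1, d1 = det.detectAndCompute(img1, None)
        k2, d2 = det.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(norm, crossCheck=True).match(d1, d2)
        # Reject false matches with a RANSAC homography, as in the background.
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return [m for m, keep in zip(matches, mask.ravel()) if keep]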
Drawings
FIG. 1 compares the blobs and corners (both marked by white dots) detected in the boat image: (a) blobs; (b) corners.
FIG. 2 compares the blobs and corners (both marked by white dots) detected in the bark image: (a) blobs; (b) corners.
Fig. 3 is a flow chart of the present invention.
Detailed Description
An embodiment of the present invention is described in detail below with reference to the accompanying drawings.
Consider an image with a resolution of 850 × 680. The flow of classifying the image and selecting the feature point type with the present invention is shown in Fig. 3.
First, apply Gaussian filtering to the input image using formula (1), with the Gaussian smoothing window size chosen as 9 × 9; formula (2) then gives the standard deviation σ = 1.85.
Second, set the high threshold T1 = 150 and the low threshold T2 = 50, and perform edge detection with the Canny algorithm to obtain a binary image in which pixels with gray value 255 are edge points.
Third, count the edge points: counting the pixels with gray value 255 in the binary image from the second step gives n = 43692.
Fourth, taking a = 3, compute the edge rate:
R = 43692 / ((850 - 2 × 3) × (680 - 2 × 3)) = 43692 / 568856 ≈ 0.077
Fifth, determine the image type from the relationship between the edge rate and the thresholds. Since the Gaussian smoothing window size is 9 × 9, Table 1 gives the high threshold H_TH = 0.045 and the low threshold L_TH = 0.025. The edge rate of the image is R = 0.077, so R > H_TH, and per the fifth step the image is judged to have very pronounced structural information.
Sixth, select the feature point type. Since the structural information of the image is very pronounced, corner features are adopted for feature point detection.
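The numbers of this embodiment can be checked directly against formula (3):

    # Worked example from the embodiment: n = 43692, w = 850, h = 680, a = 3.
    R = 43692 / ((850 - 2 * 3) * (680 - 2 * 3))
    print(round(R, 3))  # 0.077, which exceeds H_TH = 0.045 -> corner features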

Claims (3)

1. A feature point type selection method based on edge rate is characterized by comprising the following steps:
firstly, carrying out Gaussian filtering on an input image to eliminate noise; the expression for Gaussian filtering of an image is:
S(x,y)=G(x,y;σ)*I(x,y) (1)
where

G(x, y; σ) = (1/(2πσ²)) exp(-(x² + y²)/(2σ²))

is a Gaussian function with standard deviation σ, (x, y) are the coordinates of an image pixel, I(x, y) is the original grayscale image, S(x, y) is the filtered image, and * denotes convolution; assuming the Gaussian smoothing window has size s × s with s odd, σ and s are related by:
σ = 0.3 × (s/2 - 1) + 0.8   (2)
secondly, setting a high threshold T1 and a low threshold T2, and performing edge detection on the image with the Canny algorithm to obtain a binary image in which edge points have gray value 255 and non-edge points have gray value 0;
thirdly, counting the edge points, namely counting the number of pixels with gray value 255 in the binary image obtained in the second step, and recording this number as n;
fourthly, calculating the edge rate, defined as follows:
R = n / ((w - 2a) × (h - 2a))   (3)
in the formula, n is the number of edge points, w is the image width, h is the image height, and a is the width of the image border excluded from the count, formula (3) removing a rows or columns of pixels from each of the top, bottom, left, and right sides of the image;
fifthly, classifying the image according to the relationship between the edge rate and the thresholds; the thresholds comprise a high threshold H_TH and a low threshold L_TH; when R > H_TH, classifying the image as having very pronounced structural information; otherwise, comparing the edge rate with the low threshold, and when R < L_TH, classifying the image as having very weak structural information; otherwise, classifying the image as having indistinct characteristics;
sixthly, selecting the feature point type; for an image with very pronounced structural information, adopting corner feature detection; for an image with very weak structural information, adopting blob feature detection; for an image with indistinct characteristics, adopting blob or corner feature detection.
2. The method according to claim 1, characterized in that the Gaussian smoothing window size and the edge rate thresholds are selected as shown in Table 1:
TABLE 1 edge rate high and low thresholds for three Gaussian smoothing window sizes
[Table 1 appears in the original as an image; the recoverable values are H_TH = 0.045 and L_TH = 0.025 for the 9 × 9 window.]
3. The method according to claim 1 or 2, characterized in that the border width a in formula (3) is 1 to 3 pixels.
CN201710389384.4A 2017-05-31 2017-05-31 Feature point type selection method based on edge rate Active CN107247953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710389384.4A CN107247953B (en) 2017-05-31 2017-05-31 Feature point type selection method based on edge rate


Publications (2)

Publication Number Publication Date
CN107247953A CN107247953A (en) 2017-10-13
CN107247953B (en) 2020-05-19

Family

ID=60017701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710389384.4A Active CN107247953B (en) 2017-05-31 2017-05-31 Feature point type selection method based on edge rate

Country Status (1)

Country Link
CN (1) CN107247953B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111429394B (en) * 2019-01-08 2024-03-01 阿里巴巴集团控股有限公司 Image-based detection method and device, electronic equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8644606B2 (en) * 2011-01-24 2014-02-04 Steven White Method for visual image detection

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722731A (en) * 2012-05-28 2012-10-10 南京航空航天大学 Efficient image matching method based on improved scale invariant feature transform (SIFT) algorithm
CN102915540A (en) * 2012-10-10 2013-02-06 南京大学 Image matching method based on improved Harris-Laplace and scale invariant feature transform (SIFT) descriptor
CN104036492A (en) * 2014-05-21 2014-09-10 浙江大学 Speckle extraction and adjacent point vector method-based fruit image matching method
CN104166995A (en) * 2014-07-31 2014-11-26 哈尔滨工程大学 Harris-SIFT binocular vision positioning method based on horse pace measurement
CN104318559A (en) * 2014-10-21 2015-01-28 天津大学 Quick feature point detecting method for video image matching
CN105184786A (en) * 2015-08-28 2015-12-23 大连理工大学 Floating-point-based triangle characteristic description method
CN106096621A (en) * 2016-06-02 2016-11-09 西安科技大学 Based on vector constraint fall position detection random character point choosing method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A performance evaluation of local descriptors; K. Mikolajczyk, C. Schmid; IEEE Transactions on Pattern Analysis and Machine Intelligence; Oct. 2005; vol. 27, no. 10, pp. 1615-1630 *
Comparative Analysis of Detection Algorithms for Corner and Blob Features in Image Processing; Byung-Jae et al.; International Journal of Fuzzy Logic and Intelligent Systems; Dec. 2013; vol. 13, no. 4, pp. 284-290 *
FPGA-based multi-scale image feature point extraction and matching (基于FPGA的图像多尺度特征点提取及匹配); Liu Guihua; Video Engineering (电视技术); May 2016; no. 9, pp. 103-107, 111 *
High-resolution remote sensing image matching algorithm based on Harris corners and SIFT descriptors (基于Harris角点和SIFT描述符的高分辨率遥感影像匹配算法); Chen Mengting et al.; Journal of Image and Graphics (中国图象图形学报); Nov. 2012; no. 11, pp. 1453-1459 *
Image matching algorithms based on structural features and their applications (基于结构特征的图像匹配算法及应用); Bao Wenxia; Wanfang dissertations (万方学位论文); Nov. 2011; pp. 1-127 *

Also Published As

Publication number Publication date
CN107247953A (en) 2017-10-13

Similar Documents

Publication Publication Date Title
CN109409366B (en) Distorted image correction method and device based on angular point detection
US8233716B2 (en) System and method for finding stable keypoints in a picture image using localized scale space properties
Zivkovic et al. An EM-like algorithm for color-histogram-based object tracking
US11445214B2 (en) Determining variance of a block of an image based on a motion vector for the block
Nonaka et al. Evaluation report of integrated background modeling based on spatio-temporal features
US9619733B2 (en) Method for generating a hierarchical structured pattern based descriptor and method and device for recognizing object using the same
US10748023B2 (en) Region-of-interest detection apparatus, region-of-interest detection method, and recording medium
CN107784308B (en) Saliency target detection method based on chain type multi-scale full-convolution network
US9235779B2 (en) Method and apparatus for recognizing a character based on a photographed image
WO1997021189A1 (en) Edge peak boundary tracker
KR101436369B1 (en) Apparatus and method for detecting multiple object using adaptive block partitioning
US10110846B2 (en) Computationally efficient frame rate conversion system
JP4724638B2 (en) Object detection method
CN109729298B (en) Image processing method and image processing apparatus
US10136103B2 (en) Identifying consumer products in images
Ding et al. Recognition of hand-gestures using improved local binary pattern
US9508018B2 (en) Systems and methods for object detection
US9858481B2 (en) Identifying consumer products in images
CN107247953B (en) Feature point type selection method based on edge rate
CN112215266B (en) X-ray image contraband detection method based on small sample learning
CN107403127B (en) Vehicle unloading state monitoring method based on image ORB characteristics
CN109101874B (en) Library robot obstacle identification method based on depth image
CN115619791B (en) Article display detection method, device, equipment and readable storage medium
JP6403207B2 (en) Information terminal equipment
US10372750B2 (en) Information processing apparatus, method, program and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant