CN106709499A - SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform - Google Patents


Info

Publication number: CN106709499A
Application number: CN201710118809.8A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 王晓楠, 黄登山
Applicant and current assignee: Northwestern Polytechnical University
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: point, image, edge, group, sift

Classifications

    • G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T5/70 — Denoising; smoothing
    • G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform


Abstract

The invention provides a SIFT image feature point extraction method based on the Canny operator and the Hilbert-Huang transform, and relates to the field of image processing. The method is an improved SIFT image feature point extraction method built on the fusion of the Canny operator and the Hilbert-Huang transform. This technical scheme remedies the defects that the conventional SIFT algorithm has poor real-time performance, sometimes yields few feature points, and cannot extract feature points from targets with smooth edges. It effectively improves the noise robustness of SIFT feature points, increases the stability of the algorithm, and provides a good basis for three-dimensional reconstruction and target positioning.

Description

SIFT image feature point extraction method based on the Canny operator and the Hilbert-Huang transform
Technical field
The present invention relates to the field of image processing, and more particularly to a method of image feature point matching.
Background art
Image matching is an indispensable research tool in industry, agriculture, commerce and the military, and has been studied intensively in recent years. The main approaches at home and abroad are: color or gray-level feature extraction, texture edge feature extraction, image algebraic feature extraction, and image transform coefficient feature extraction. Among them, the SIFT (Scale-Invariant Feature Transform) algorithm, with its low sensitivity to illumination intensity, good behavior under target occlusion, strong robustness and high speed, is widely used by many researchers for feature point detection.
The SIFT algorithm is an image feature point matching algorithm proposed by Lowe in 2004. It proceeds through several steps: determining the scale space, detecting candidate key point locations, refining key point positions, and describing the selected key points, finally achieving image matching. However, the SIFT algorithm also has a defect: among the key points obtained after edge detection with the DOG operator, many are not real key points because of the edge response.
In edge detection, the Canny operator is optimized in several respects (filtering, enhancement and detection) and is little disturbed by noise. However, in the processing steps of the Canny operator, smoothing the image with a Gaussian operator can cause over-smoothing and edge displacement. The two-dimensional Hilbert-Huang transform, used for image decomposition, can decompose an image into its intrinsic mode functions, making the edge contours clearer.
Summary of the invention
To overcome the deficiencies of the prior art, the present invention provides an improved SIFT image feature point extraction method based on the Canny operator and the Hilbert-Huang transform. The SIFT image matching algorithm is improved, and experimental results verify that the accuracy of image feature point extraction improves considerably.
The technical solution adopted by the present invention to solve its technical problem comprises the following steps:
Step 1: For the input original image I(x, y), define the scale space L(x, y, σ) and build the Gaussian pyramid:
L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)
G(x, y, σ) = (1/(2πσ²)) exp(−((x − m/2)² + (y − n/2)²)/(2σ²))    (2)
where G(x, y, σ) is the Gaussian function, m and n are the dimensions of the Gaussian template, and σ is the scale space factor;
Step 2: Build the difference-of-Gaussian pyramid
(1) Divide the image pyramid into groups; one group is called an octave, each group has several layers, and the total number of layers in a group is S. The top layer of each group is used to generate 3 further images by Gaussian blurring, so each group of the Gaussian pyramid has S + 3 layers of images;
(2) Down-sample the original image by interlaced sampling, implemented as: with a scale factor of 2, take one pixel from every other row and every other column of the original image;
(3) Determine the number of layers of the Gaussian pyramid: the original image is the first, bottom layer of the pyramid, and each new image obtained by down-sampling is one image of the next pyramid level; each pyramid has n layers in total. The number of layers n is determined jointly by the original image size and the size of the top image, computed as:
n = log2{min(M, N)} − t    (3)
where M, N are the pixel dimensions of the original image and t is the logarithm of the minimum dimension of the top image;
(4) Compute the scale of a given layer image in a Gaussian pyramid group:
σ(o, s) = σ0 · 2^(o + s/S)    (4)
where σ0 is the base layer scale, taken as 1.6 according to the SIFT algorithm; o is the index of the group (octave); s is the index of the layer within the group, with S taken as 3 according to the SIFT algorithm. The scale coordinate σ of a key point is determined by the group and the in-group layer where the key point lies; the in-group scale coordinates of the same layer in different groups are identical;
(5) Compute the DOG operator:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (5)
where the exponent of k is the reciprocal of the number of layers per group, i.e.:
k = 2^(1/S)    (6)
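Steps 1 and 2 can be sketched in Python; this is an illustrative outline of the pyramid construction using SciPy's Gaussian filter, with function and parameter names of our own choosing rather than from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog_pyramid(image, n_octaves=4, S=3, sigma0=1.6):
    """Gaussian pyramid (S+3 images per octave) and its
    difference-of-Gaussian (DOG) pyramid, per formulas (1), (5), (6)."""
    k = 2.0 ** (1.0 / S)                      # scale step, formula (6)
    gauss, dog = [], []
    base = image.astype(np.float64)
    for o in range(n_octaves):
        sigmas = [sigma0 * (k ** s) for s in range(S + 3)]
        octave = [gaussian_filter(base, s) for s in sigmas]
        gauss.append(octave)
        # formula (5): D = L(k*sigma) - L(sigma)
        dog.append([b - a for a, b in zip(octave, octave[1:])])
        base = base[::2, ::2]                 # interlaced down-sampling, factor 2
    return gauss, dog

img = np.random.rand(64, 64)
g, d = build_dog_pyramid(img)
print(len(g[0]), len(d[0]))   # prints: 6 5
```

With S = 3 each octave holds S + 3 = 6 Gaussian images and 5 DOG images, matching the layer counts stated above.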
Step 3: Detect spatial extremum points in the image, as follows:
Expand the DOG operator D(x, y, σ) computed in step 2 as a Taylor series:
D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X    (7)
where X = (x, y, σ)^T. Setting the derivative of the Taylor expansion (7) to zero yields the center offset:
X̂ = −(∂²D/∂X²)^(−1) (∂D/∂X)    (8)
where X̂ represents the offset relative to the interpolation center. According to the SIFT algorithm, if the offset in any one dimension, i.e. in any one of x, y and σ, exceeds 0.5, the interpolation center is considered to have shifted to a neighboring point;
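Formula (8) can be evaluated with finite differences on the 3 × 3 × 3 DOG neighbourhood of a candidate extremum; the sketch below is a minimal illustration under that assumption, with names of our own choosing:

```python
import numpy as np

def subpixel_offset(cube):
    """cube: 3x3x3 DOG neighbourhood around a candidate extremum,
    indexed [sigma, y, x].  Returns the center offset of formula (8)
    and whether the candidate should be kept (all |offsets| <= 0.5)."""
    c = cube[1, 1, 1]
    # first derivatives by central differences (formula (7) terms)
    g = 0.5 * np.array([cube[2,1,1] - cube[0,1,1],
                        cube[1,2,1] - cube[1,0,1],
                        cube[1,1,2] - cube[1,1,0]])
    H = np.empty((3, 3))
    H[0,0] = cube[2,1,1] - 2*c + cube[0,1,1]
    H[1,1] = cube[1,2,1] - 2*c + cube[1,0,1]
    H[2,2] = cube[1,1,2] - 2*c + cube[1,1,0]
    H[0,1] = H[1,0] = 0.25*(cube[2,2,1] - cube[2,0,1] - cube[0,2,1] + cube[0,0,1])
    H[0,2] = H[2,0] = 0.25*(cube[2,1,2] - cube[2,1,0] - cube[0,1,2] + cube[0,1,0])
    H[1,2] = H[2,1] = 0.25*(cube[1,2,2] - cube[1,2,0] - cube[1,0,2] + cube[1,0,0])
    offset = -np.linalg.solve(H, g)           # formula (8)
    keep = bool(np.all(np.abs(offset) <= 0.5))  # >0.5: extremum lies nearer a neighbour
    return offset, keep
```

For an exactly quadratic neighbourhood the recovered offset equals the true sub-pixel peak position.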
Step 4: Fuse the Canny operator with the Hilbert-Huang transform, as follows:
(1) Solve the intrinsic mode functions (Intrinsic Mode Function, IMF). Here I(x, y) is the original image, mI(x, y) denotes the mean surface of the image, and m^k I(x, y) denotes the mean surface taken k times. The difference between the original image and mI(x, y) is the first IMF component c1. After obtaining the first IMF component, the difference mI(x, y) between the original image and c1 is decomposed further as the remaining part, giving the second IMF component c2; by analogy, the expression for the n-th IMF component is obtained. The specific formulas are:
c1 = I(x, y) − mI(x, y)
c2 = I(x, y) − c1 − m[I(x, y) − c1] = mI(x, y) − m²I(x, y)
c3 = I(x, y) − c1 − c2 − m[I(x, y) − c1 − c2] = m²I(x, y) − m³I(x, y)
cn = m^(n−1) I(x, y) − m^n I(x, y)    (9)
Adding c1 through cn in formula (9) gives the two-dimensional IMF decomposition:
I(x, y) = Σ_{i=1}^{n} c_i + m^n I(x, y)    (10)
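The telescoping structure of formulas (9) and (10) can be illustrated with a simplified sketch. Note the simplification: a true two-dimensional EMD builds the mean surface m from envelopes fitted through the image extrema; here we approximate m by a moving-average surface purely to show the bookkeeping, and the function and parameter names are ours:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def imf_decompose(image, n=3, size=5):
    """Sketch of formula (9): c_k = m^{k-1}I - m^k I, with the mean
    surface m approximated by a moving average (illustrative only;
    a real 2-D EMD fits envelopes of the image extrema)."""
    residue = image.astype(np.float64)
    imfs = []
    for _ in range(n):
        mean = uniform_filter(residue, size)   # apply the mean operator m once more
        imfs.append(residue - mean)            # c_k = m^{k-1}I - m^k I
        residue = mean                         # m^k I carried forward
    return imfs, residue

img = np.random.rand(32, 32)
imfs, res = imf_decompose(img)
# formula (10): the components plus the final residue rebuild the image
print(np.allclose(sum(imfs) + res, img))   # prints: True
```

Because each c_k is a difference of consecutive mean surfaces, their sum plus the final residue reconstructs I(x, y) exactly, which is precisely formula (10).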
(2) Compute the gradient of the image
1) Determine the partial derivative Ex of image I(x, y) in the x direction:
Ex = (∂G(x, y, σ)/∂x) * I(x, y)    (11)
2) Determine the partial derivative Ey of image I(x, y) in the y direction:
Ey = (∂G(x, y, σ)/∂y) * I(x, y)    (12)
3) Determine the edge gradient strength A(i, j) at point (i, j) of image I(x, y):
A(i, j) = sqrt(Ex²(i, j) + Ey²(i, j))    (13)
4) Determine the gradient direction α(i, j) at point (i, j) of image I(x, y):
α(i, j) = arctan(Ex(i, j)/Ey(i, j))    (14)
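Formulas (11)-(14) amount to convolving the image with Gaussian derivatives; a minimal Python sketch (parameter names are ours, and SciPy's `order` argument supplies the Gaussian-derivative kernels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_gradient(image, sigma=1.0):
    """Formulas (11)-(14): Gaussian-derivative gradients, edge
    strength A and gradient direction alpha."""
    image = image.astype(np.float64)
    Ex = gaussian_filter(image, sigma, order=(0, 1))   # d/dx, formula (11)
    Ey = gaussian_filter(image, sigma, order=(1, 0))   # d/dy, formula (12)
    A = np.hypot(Ex, Ey)                               # formula (13)
    alpha = np.arctan2(Ex, Ey)                         # formula (14): arctan(Ex/Ey)
    return Ex, Ey, A, alpha
```

For a linear ramp of slope 1 along x, the interior values of Ex come out as 1 and Ey as 0, as expected of a derivative filter.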
(3) Eliminate spurious points by non-maximum suppression to obtain single-pixel edge points
(4) Obtain the edges by double-threshold binarization
The specific implementation steps are:
1) Preset the high and low thresholds needed for detection: define the low threshold as T1 and the high threshold as T2, with T2 = 2T1 and T1 = 12;
2) Apply double-threshold processing to the image. For any edge pixel value between T1 and T2: if it can be connected along the edge to a pixel greater than T2, and all pixels along the edge exceed the minimum threshold T1, it is retained, otherwise it is discarded. A point with edge gradient strength A(i, j) above the high threshold is an edge point; a point with A(i, j) below the low threshold is not an edge point; for a point with A(i, j) between the high and low thresholds, check whether a gradient magnitude above the high threshold exists in its 8-neighborhood: if so it is an edge point, otherwise it is not;
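The double-threshold rule above is hysteresis thresholding; an equivalent connected-component formulation (our own, illustrative implementation with hypothetical threshold values) is:

```python
import numpy as np
from scipy.ndimage import label

def hysteresis(A, t1=12.0, t2=24.0):
    """Step 4(4): keep pixels above T2, drop pixels below T1, and keep
    a T1..T2 pixel only when it is 8-connected, through other candidate
    pixels, to a pixel above T2 (T2 = 2*T1 as in the text)."""
    strong = A >= t2
    candidate = A >= t1                      # strong plus weak pixels
    structure = np.ones((3, 3), int)         # 8-connectivity
    labels, n = label(candidate, structure)
    keep = np.zeros(n + 1, bool)
    keep[np.unique(labels[strong])] = True   # components containing a strong pixel
    keep[0] = False                          # background label
    return keep[labels]
```

A weak pixel adjacent to a strong one survives; an isolated weak pixel does not, which is exactly the edge-linking behaviour described in 2).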
(5) Trace the edges to obtain the edge image:
1) Set the gray value of pixels with edge gradient strength A(i, j) below T1 to 0, obtaining image 1;
2) Set the gray value of pixels with edge gradient strength A(i, j) below T2 to 0, obtaining image 2;
3) Scan image 2; when the first pixel p(x, y) with non-zero gray value is encountered, trace the contour line starting from p(x, y) until the scan of image 2 ends; the end point of the contour line is q(x, y), and tracing stops there;
4) Find the point s(x, y) in image 1 at the same position as the point q(x, y) in image 2. If a non-zero pixel exists in the 8-neighborhood of s(x, y), include it in image 2 as the point r(x, y), and repeat step 3) starting from r(x, y) until no further non-zero pixels remain in image 1 and image 2; the linking of the contour line containing p(x, y) is then complete;
5) Return to step 3) and find the next contour line, repeating steps 3) and 4) until no new contour line, i.e. no non-zero gray pixel, can be found in image 2. The key points X = (x, y, σ)^T obtained after Canny edge detection has eliminated the edge response form the feature point set R2.
Step 5:
1) Obtain the feature point set R1 with the SIFT method. Compare each point of the feature point set R1 pairwise with all points of the feature point set R2 obtained in step 4 and judge whether the coordinates are equal: if identical, discard the point in R1. If different, compare the point in R1 with the point set R3 in the 3 × 3 neighborhood around the corresponding point in R2: if identical, discard the point in R1; otherwise compare the point in R1 with the remaining uncompared edge points in R2. If any comparison finds a point with the same coordinates as the point of R1 under comparison, discard that point of R1; otherwise retain it;
2) Compute the flag values f1 and f2, where f1 = R1 − R2 and f2 = size((R3 − R1), 1); f1 and f2 are flags, f1 being the difference set of the point sets R1 and R2. When f1 is 0, the corresponding point of R1 is removed; when f1 is 1, the corresponding point of R1 is retained as a candidate point. When f2 is 8, the corresponding point of R1 is retained as a candidate point; when f2 is 7, that point of R1 is discarded;
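Under our reading of the flag tests f1 and f2, steps 5(1)-5(2) discard a SIFT candidate that coincides with an edge point of R2 or falls inside the 3 × 3 neighborhood of one, which removes the spurious edge-response key points. A minimal sketch of that interpretation (the function name and the folding of f1/f2 into one membership test are ours):

```python
def remove_edge_responses(r1, r2):
    """Sketch of step 5: drop a SIFT candidate from R1 when it matches
    an edge point in R2 exactly (flag f1 = 0) or lies in the 3x3
    neighbourhood of one (flag f2 = 7); otherwise keep it."""
    forbidden = {(x + dx, y + dy) for (x, y) in r2
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    return [p for p in r1 if p not in forbidden]

r1 = [(10, 10), (3, 4), (50, 50)]
r2 = [(10, 11)]
print(remove_edge_responses(r1, r2))   # prints: [(3, 4), (50, 50)]
```

Here (10, 10) is dropped because it lies in the 3 × 3 neighbourhood of the edge point (10, 11), while the other two candidates survive.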
3) Compute the orientation of the candidate points determined in 2), as follows:
Solve the magnitude of the key point gradient:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)    (15)
Solve the direction of the key point gradient:
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1))/(L(x+1, y) − L(x−1, y)))    (16)
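Formulas (15) and (16) are plain central differences on the smoothed image L; a small sketch (row-major `L[y, x]` indexing is our assumption):

```python
import numpy as np

def keypoint_orientation(L, x, y):
    """Formulas (15)-(16): gradient magnitude and orientation of the
    smoothed image L at an integer key point position (x, y)."""
    dx = float(L[y, x + 1]) - float(L[y, x - 1])   # L(x+1,y) - L(x-1,y)
    dy = float(L[y + 1, x]) - float(L[y - 1, x])   # L(x,y+1) - L(x,y-1)
    m = np.hypot(dx, dy)                           # formula (15)
    theta = np.arctan2(dy, dx)                     # formula (16), radians
    return m, theta
```

Using arctan2 rather than a bare tan⁻¹ keeps the quadrant of the direction unambiguous.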
Step 6:
Steps 1 to 5 are simulated in Matlab, and the matched feature points are marked in the picture using Matlab's plot() function.
The beneficial effect of the invention is that, by adopting the technical scheme of fusing the Canny operator with the Hilbert-Huang transform, it remedies the defects that the original SIFT algorithm has poor real-time performance, sometimes yields few feature points, and cannot accurately extract feature points from targets with smooth edges. It effectively enhances the noise resistance of SIFT feature points, improves the stability of the algorithm, and provides a good basis for three-dimensional reconstruction, target positioning and similar tasks.
Brief description of the drawings
Fig. 1 is a schematic diagram of the technical route of the fusion algorithm of the invention.
Fig. 2 is a bar chart of the feature points of the original SIFT algorithm, the feature points of the present invention, and the removed feature points.
Fig. 3 is test pictures one.
Fig. 4 is test pictures two.
Fig. 5 is test pictures three.
Specific embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments; the present invention includes but is not limited to the following embodiments.
Regarding the defect of the SIFT algorithm: among the key points obtained after edge detection with the DOG operator, many are not real key points because of the edge response. In edge detection, the Canny operator is optimized in several respects (filtering, enhancement and detection) and is little disturbed by noise, but in the Canny processing steps, smoothing the image with a Gaussian operator can cause over-smoothing and edge displacement. The two-dimensional Hilbert-Huang transform, used for image decomposition, can decompose the image into its intrinsic mode functions and make the edge contours clearer. Therefore the Canny operator and the Hilbert-Huang transform are fused to perform edge point detection, and the edge points thus obtained are compared with the key points obtained by the SIFT algorithm to determine the feature points to be chosen.
The main technical scheme of the invention comprises the following steps:
Step 1: For the input original image I(x, y), define the scale space L(x, y, σ) and build the Gaussian pyramid:
L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)
where G(x, y, σ) is the Gaussian function, m and n are the dimensions of the Gaussian template, and σ is the scale space factor; a smaller value of σ means the image is smoothed less and the corresponding scale is smaller;
Step 2: Build the difference-of-Gaussian pyramid
(1) Divide the image pyramid into groups; one group is called an octave, each group has several layers, and the total number of layers in a group is S. The first and last layers of each group cannot take part in the extreme-value comparison, so, to satisfy the continuity of scale variation, the top layer of each group is used to generate 3 further images by Gaussian blurring; each group of the Gaussian pyramid then has S + 3 layers of images;
(2) Down-sample the original image by interlaced sampling, implemented as: with a scale factor of 2, take one pixel from every other row and every other column of the original image;
(3) Determine the number of layers of the Gaussian pyramid: the original image is the first, bottom layer of the pyramid, and each new image obtained by down-sampling is one image of the next pyramid level; each pyramid has n layers in total. The number of layers n is determined jointly by the original image size and the size of the top image, computed as:
n = log2{min(M, N)} − t    (3)
where M, N are the pixel dimensions of the original image and t is the logarithm of the minimum dimension of the top image;
(4) Compute the scale of a given layer image in a Gaussian pyramid group:
σ(o, s) = σ0 · 2^(o + s/S)    (4)
where σ0 is the base layer scale, taken as 1.6 according to the SIFT algorithm; o is the index of the group (octave); s is the index of the layer within the group, with S taken as 3 according to the SIFT algorithm. The scale coordinate σ of a key point is determined by the group and the in-group layer where the key point lies; the in-group scale coordinates of the same layer in different groups are identical;
(5) Compute the DOG operator:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (5)
where the exponent of k is the reciprocal of the number of layers per group, i.e.:
k = 2^(1/S)    (6)
Step 3: Detect spatial extremum points in the image, as follows:
(1) Expand the DOG operator D(x, y, σ) computed in step 2 as a Taylor series:
D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X    (7)
where X = (x, y, σ)^T. Setting the derivative of the Taylor expansion (7) to zero yields the center offset:
X̂ = −(∂²D/∂X²)^(−1) (∂D/∂X)    (8)
where X̂ represents the offset relative to the interpolation center. According to the SIFT algorithm, if the offset in any one dimension, i.e. in any one of x, y and σ, exceeds 0.5, the interpolation center is considered to have shifted to a neighboring point;
Step 4: Fuse the Canny operator with the Hilbert-Huang transform, as follows:
(1) Solve the intrinsic mode functions (Intrinsic Mode Function, IMF). Here I(x, y) is the original image, mI(x, y) denotes the mean surface of the image, and m^k I(x, y) denotes the mean surface taken k times. The difference between the original image and mI(x, y) is the first IMF component c1. After obtaining the first IMF component, the difference mI(x, y) between the original image and c1 is decomposed further as the remaining part, giving the second IMF component c2; by analogy, the expression for the n-th IMF component is obtained. The specific formulas are:
c1 = I(x, y) − mI(x, y)
c2 = I(x, y) − c1 − m[I(x, y) − c1] = mI(x, y) − m²I(x, y)
c3 = I(x, y) − c1 − c2 − m[I(x, y) − c1 − c2] = m²I(x, y) − m³I(x, y)
cn = m^(n−1) I(x, y) − m^n I(x, y)    (9)
Adding c1 through cn in formula (9) gives the two-dimensional IMF decomposition:
I(x, y) = Σ_{i=1}^{n} c_i + m^n I(x, y)    (10)
(2) Compute the gradient of the image
1) Determine the partial derivative Ex of image I(x, y) in the x direction:
Ex = (∂G(x, y, σ)/∂x) * I(x, y)    (11)
2) Determine the partial derivative Ey of image I(x, y) in the y direction:
Ey = (∂G(x, y, σ)/∂y) * I(x, y)    (12)
3) Determine the edge gradient strength A(i, j) at point (i, j) of image I(x, y):
A(i, j) = sqrt(Ex²(i, j) + Ey²(i, j))    (13)
4) Determine the gradient direction α(i, j) at point (i, j) of image I(x, y):
α(i, j) = arctan(Ex(i, j)/Ey(i, j))    (14)
(3) Eliminate spurious points by non-maximum suppression to obtain single-pixel edge points
(4) Obtain the edges by double-threshold binarization
The specific implementation steps are:
1) Preset the high and low thresholds needed for detection: define the low threshold as T1 and the high threshold as T2, with T2 = 2T1 and T1 = 12;
2) Apply double-threshold processing to the image. For any edge pixel value between T1 and T2: if it can be connected along the edge to a pixel greater than T2, and all pixels along the edge exceed the minimum threshold T1, it is retained, otherwise it is discarded. A point with edge gradient strength A(i, j) above the high threshold is an edge point; a point with A(i, j) below the low threshold is not an edge point; for a point with A(i, j) between the high and low thresholds, check whether a gradient magnitude above the high threshold exists in its 8-neighborhood: if so it is an edge point, otherwise it is not;
(5) Trace the edges to obtain the edge image:
1) Set the gray value of pixels with edge gradient strength A(i, j) below T1 to 0, obtaining image 1;
2) Set the gray value of pixels with edge gradient strength A(i, j) below T2 to 0, obtaining image 2. Because image 2 has the higher threshold, most of the noise is removed, but useful edge information is lost at the same time; image 1, with its lower threshold, retains more information. Image 2 is therefore taken as the basis, and image 1 is used as the supplement to link the edges of the image;
3) Scan image 2; when the first pixel p(x, y) with non-zero gray value is encountered, trace the contour line starting from p(x, y) until the scan of image 2 ends; the end point of the contour line is q(x, y), and tracing stops there;
4) Find the point s(x, y) in image 1 at the same position as the point q(x, y) in image 2. If a non-zero pixel exists in the 8-neighborhood of s(x, y), include it in image 2 as the point r(x, y), and repeat step 3) starting from r(x, y) until no further non-zero pixels remain in image 1 and image 2; the linking of the contour line containing p(x, y) is then complete;
5) Return to step 3) and find the next contour line, repeating steps 3) and 4) until no new contour line, i.e. no non-zero gray pixel, can be found in image 2. The key points X = (x, y, σ)^T obtained after Canny edge detection has eliminated the edge response form the feature point set R2.
Step 5:
1) Obtain the feature point set R1 with the SIFT method. Compare each point of the feature point set R1 pairwise with all points of the feature point set R2 obtained in step 4 and judge whether the coordinates are equal: if identical, discard the point in R1. If different, compare the point in R1 with the point set R3 in the 3 × 3 neighborhood around the corresponding point in R2: if identical, discard the point in R1; otherwise compare the point in R1 with the remaining uncompared edge points in R2. If any comparison finds a point with the same coordinates as the point of R1 under comparison, discard that point of R1; otherwise retain it;
2) Compute the flag values f1 and f2, where f1 = R1 − R2 and f2 = size((R3 − R1), 1); f1 and f2 are flags, f1 being the difference set of the point sets R1 and R2. When f1 is 0, the corresponding point of R1 is removed; when f1 is 1, the corresponding point of R1 is retained as a candidate point. When f2 is 8, the corresponding point of R1 is retained as a candidate point; when f2 is 7, that point of R1 is discarded;
3) Compute the orientation of the candidate points determined in 2), as follows:
Solve the magnitude of the key point gradient:
m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)    (15)
Solve the direction of the key point gradient:
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1))/(L(x+1, y) − L(x−1, y)))    (16)
Step 6:
Steps 1 to 5 are simulated in Matlab, and the matched feature points are marked in the picture using Matlab's plot() function. The three pictures of Fig. 3, Fig. 4 and Fig. 5 are chosen for testing, and the number of feature points is counted.
Fig. 2 shows three groups of bars for the matched feature points obtained with the three test pictures: test one uses the picture of Fig. 3, test two the picture of Fig. 4, and test three the picture of Fig. 5. The three bars in each group are, respectively, the number of feature points detected by the original SIFT algorithm, the number of feature points detected by the present invention, and the number of removed feature points. The removed feature points are those eliminated when the present invention is compared with the original SIFT algorithm, i.e. the difference between the feature points detected by the original SIFT algorithm and by the present invention. The ordinate of Fig. 2 is the number of feature points; Fig. 2 clearly shows that the present invention is more accurate than the original SIFT algorithm.
Regarding the number of feature points extracted by the present invention: the image of Fig. 5 is relatively complex. If feature points were extracted with the original SIFT algorithm, many wrong points caused by the edge response would appear; the present invention not only increases robustness but also improves accuracy.

Claims (1)

1. A SIFT image feature point extraction method based on the Canny operator and the Hilbert-Huang transform, characterized by comprising the following steps:
Step 1: For the input original image I(x, y), define the scale space L(x, y, σ) and build the Gaussian pyramid:
L(x, y, σ) = G(x, y, σ) * I(x, y)    (1)
G(x, y, σ) = (1/(2πσ²)) exp(−((x − m/2)² + (y − n/2)²)/(2σ²))    (2)
where G(x, y, σ) is the Gaussian function, m and n are the dimensions of the Gaussian template, and σ is the scale space factor;
Step 2: Build the difference-of-Gaussian pyramid
(1) Divide the image pyramid into groups; one group is called an octave, each group has several layers, and the total number of layers in a group is S. The top layer of each group is used to generate 3 further images by Gaussian blurring, so each group of the Gaussian pyramid has S + 3 layers of images;
(2) Down-sample the original image by interlaced sampling, implemented as: with a scale factor of 2, take one pixel from every other row and every other column of the original image;
(3) Determine the number of layers of the Gaussian pyramid: the original image is the first, bottom layer of the pyramid, and each new image obtained by down-sampling is one image of the next pyramid level; each pyramid has n layers in total. The number of layers n is determined jointly by the original image size and the size of the top image, computed as:
n = log2{min(M, N)} − t    (3)
where M, N are the pixel dimensions of the original image and t is the logarithm of the minimum dimension of the top image;
(4) Compute the scale of a given layer image in a Gaussian pyramid group:
σ(o, s) = σ0 · 2^(o + s/S)    (4)
where σ0 is the base layer scale, taken as 1.6 according to the SIFT algorithm; o is the index of the group (octave); s is the index of the layer within the group, with S taken as 3 according to the SIFT algorithm. The scale coordinate σ of a key point is determined by the group and the in-group layer where the key point lies; the in-group scale coordinates of the same layer in different groups are identical;
(5) Compute the DOG operator:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (5)
where the exponent of k is the reciprocal of the number of layers per group, i.e.:
k = 2^(1/S)    (6)
Step 3: Detect spatial extremum points in the image, as follows:
Expand the DOG operator D(x, y, σ) computed in step 2 as a Taylor series:
D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X    (7)
where X = (x, y, σ)^T. Setting the derivative of the Taylor expansion (7) to zero yields the center offset:
X̂ = −(∂²D/∂X²)^(−1) (∂D/∂X)    (8)
where X̂ represents the offset relative to the interpolation center. According to the SIFT algorithm, if the offset in any one dimension, i.e. in any one of x, y and σ, exceeds 0.5, the interpolation center is considered to have shifted to a neighboring point;
Step 4: Fuse the Canny operator with the Hilbert-Huang transform, as follows:
(1) Solve the intrinsic mode functions (Intrinsic Mode Function, IMF). Here I(x, y) is the original image, mI(x, y) denotes the mean surface of the image, and m^k I(x, y) denotes the mean surface taken k times. The difference between the original image and mI(x, y) is the first IMF component c1. After obtaining the first IMF component, the difference mI(x, y) between the original image and c1 is decomposed further as the remaining part, giving the second IMF component c2; by analogy, the expression for the n-th IMF component is obtained. The specific formulas are:
c1 = I(x, y) − mI(x, y)
c2 = I(x, y) − c1 − m[I(x, y) − c1] = mI(x, y) − m²I(x, y)
c3 = I(x, y) − c1 − c2 − m[I(x, y) − c1 − c2] = m²I(x, y) − m³I(x, y)
cn = m^(n−1) I(x, y) − m^n I(x, y)    (9)
Adding c1 through cn in formula (9) gives the two-dimensional IMF decomposition:
I(x, y) = Σ_{i=1}^{n} c_i + m^n I(x, y)    (10)
(2) Compute the gradient of the image
1) Determine the partial derivative Ex of image I(x, y) in the x direction:
Ex = (∂G(x, y, σ)/∂x) * I(x, y)    (11)
2) Determine the partial derivative Ey of image I(x, y) in the y direction:
Ey = (∂G(x, y, σ)/∂y) * I(x, y)    (12)
3) Determine the edge gradient strength A(i, j) at point (i, j) of image I(x, y):
A(i, j) = sqrt(Ex²(i, j) + Ey²(i, j))    (13)
4) Determine the gradient direction α(i, j) at point (i, j) of image I(x, y):
α(i, j) = arctan(Ex(i, j)/Ey(i, j))    (14)
(3) eliminate erroneous point using non-maximum restraining and obtain Single pixel edge point
(4) border is obtained using dual threshold binaryzation
Specific implementation step is:
1) high-low threshold value that detection needs is preset, definition Low threshold is T1, high threshold is T2, and T2=2T1, T1=12;
2) dual threshold treatment is carried out to image, for any edge pixel values in T1With T2Between, if can be connected by edge It is more than T to a pixel2And edge all pixels are more than minimum threshold T1Then reservation, otherwise abandon, edge gradient intensity A (i, j) is then edge more than high threshold, and edge gradient intensity A (i, j) is not then edge, edge gradient intensity A less than Low threshold (i, j) judges in 8 neighborhoods with the presence or absence of the gradient magnitude higher than high threshold between high threshold and Low threshold, exists then It is edge, is not otherwise edge;
(5) Edge tracing to obtain the edge image:

1) Set to 0 the gray value of every pixel whose edge gradient magnitude A(i, j) is less than T1, obtaining image 1;

2) Set to 0 the gray value of every pixel whose edge gradient magnitude A(i, j) is less than T2, obtaining image 2;

3) Scan image 2; when the first pixel p(x, y) with a non-zero gray value is encountered, trace the contour line starting from p(x, y) until it ends at the point q(x, y), at which point the trace stops;

4) Find in image 1 the point s(x, y) at the same position as q(x, y) in image 2. If a non-zero pixel exists in the 8-neighborhood of s(x, y), include it in image 2 as the point r(x, y) and repeat step 3) starting from r(x, y), until neither image 1 nor image 2 contains further non-zero pixels; the linking of the contour line containing p(x, y) is then complete;

5) Return to step 3) and find the next contour line; repeat steps 3) and 4) until no new contour line, i.e. no non-zero gray pixel, can be found in image 2. The key points X = (x, y, σ)^T obtained after Canny edge detection eliminates the edge response form the feature point set R2.
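A simplified sketch of the contour scan of steps 1)–3): scan the thresholded image and follow 8-connected non-zero pixels to collect each contour line (the endpoint lookup in image 1 from step 4) is omitted here for brevity):

```python
import numpy as np

def trace_contours(image2):
    """Scan image 2, and from each unvisited non-zero pixel p(x, y) follow
    8-connected non-zero pixels to collect one contour line."""
    visited = np.zeros_like(image2, dtype=bool)
    contours = []
    h, w = image2.shape
    for y in range(h):
        for x in range(w):
            if image2[y, x] and not visited[y, x]:
                stack, contour = [(y, x)], []
                visited[y, x] = True
                while stack:              # depth-first trace of the contour
                    cy, cx = stack.pop()
                    contour.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and image2[ny, nx] and not visited[ny, nx]):
                                visited[ny, nx] = True
                                stack.append((ny, nx))
                contours.append(contour)
    return contours

img = np.zeros((5, 5))
img[0, 0:3] = 1.0                         # one 3-pixel contour line
img[4, 4] = 1.0                           # one isolated point
assert len(trace_contours(img)) == 2
```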
Step 5:

1) Obtain the feature point set R1 using the SIFT method. Compare each point of R1 pairwise with every point of the feature point set R2 obtained in step 4, checking whether the coordinates are equal. If they are identical, discard the point in R1. If they differ, compare the point of R1 with the point set R3 formed by the 3×3 neighborhood of the corresponding point in R2: if identical, discard the point in R1; otherwise, compare the point of R1 with the remaining edge points of R2 not yet compared. If any of these comparisons finds a point with the same coordinates as the point of R1 under comparison, discard that point of R1; otherwise retain it;

2) Compute the flag values f1 and f2, where f1 = R1 − R2 and f2 = size((R3 − R1), 1). f1 is the set difference between R1 and R2: when f1 is 0, the corresponding point of R1 is removed; when f1 is 1, the corresponding point of R1 is retained as a candidate point. When f2 is 8, the corresponding point of R1 is retained as a candidate point; when f2 is 7, that point of R1 is discarded;
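A hedged sketch of the step-5 screening, reading the comparison against R2 and its 3×3 neighborhood set R3 as a single neighborhood-membership test (the f1/f2 flag bookkeeping is folded into one check; the point coordinates below are illustrative):

```python
def filter_sift_points(R1, R2):
    """Discard a SIFT point whose (x, y) coincides with a Canny edge point
    in R2 or falls in the 3x3 neighborhood (point set R3) of such a point;
    retain the rest as candidate points."""
    edge_set = set(map(tuple, R2))
    candidates = []
    for x, y in R1:
        # R3: the 3x3 neighborhood around the SIFT point
        neighborhood = {(x + dx, y + dy)
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
        if neighborhood & edge_set:   # point lies on or next to an edge
            continue                  # cast out: likely an unstable edge response
        candidates.append((x, y))
    return candidates

R1 = [(5, 5), (10, 10), (20, 20)]     # SIFT feature points
R2 = [(5, 5), (11, 10)]               # Canny edge points
assert filter_sift_points(R1, R2) == [(20, 20)]
```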
3) Compute the orientation of each candidate point determined in 2), as follows:

Solve for the magnitude of the key point gradient:

$$m(x,y) = \sqrt{\left(L(x+1,y) - L(x-1,y)\right)^2 + \left(L(x,y+1) - L(x,y-1)\right)^2} \tag{15}$$

Solve for the direction of the key point gradient:

$$\theta(x,y) = \tan^{-1}\left(\frac{L(x,y+1) - L(x,y-1)}{L(x+1,y) - L(x-1,y)}\right) \tag{16}$$
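Formulas (15)–(16) amount to central differences on the Gaussian-smoothed image L; a small numpy sketch on a synthetic ramp (an illustrative assumption):

```python
import numpy as np

def keypoint_orientation(L, x, y):
    """Formulas (15)-(16): gradient magnitude and orientation of a key point
    from central differences of the Gaussian-smoothed image L."""
    dx = L[x + 1, y] - L[x - 1, y]
    dy = L[x, y + 1] - L[x, y - 1]
    m = np.hypot(dx, dy)                  # formula (15): magnitude
    theta = np.arctan2(dy, dx)            # formula (16): direction
    return m, theta

# Linear ramp along the first index: gradient points along +x with slope 1
L = np.add.outer(np.arange(5, dtype=float), np.zeros(5))
m, theta = keypoint_orientation(L, 2, 2)
assert np.isclose(m, 2.0) and np.isclose(theta, 0.0)
```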
Step 6:

Steps 1 to 5 are simulated in Matlab, and the matched feature points are marked in the image using Matlab's plot() function.
CN201710118809.8A 2017-03-02 2017-03-02 SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform Pending CN106709499A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710118809.8A CN106709499A (en) 2017-03-02 2017-03-02 SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform

Publications (1)

Publication Number Publication Date
CN106709499A true CN106709499A (en) 2017-05-24

Family

ID=58917473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710118809.8A Pending CN106709499A (en) 2017-03-02 2017-03-02 SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform

Country Status (1)

Country Link
CN (1) CN106709499A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894037A * 2016-04-21 2016-08-24 北京航空航天大学 Whole-image supervised classification method for remote sensing images based on extracted SIFT training samples
CN106204429A * 2016-07-18 2016-12-07 合肥赑歌数据科技有限公司 Image registration method based on SIFT features

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
侯鹏庆: "Research on a single-CCD multispectral surface temperature measuring instrument for cast slabs", China Masters' Theses Full-text Database, Engineering Science and Technology I *
刘长春: "Development of a novel defect detection device for ultra-precision optical elements", China Masters' Theses Full-text Database, Information Science and Technology *
郑伟涛: "Research on key technologies of multi-baseline image 3D reconstruction", China Masters' Theses Full-text Database, Basic Sciences *
黄登山 等: "Research on an improved SIFT algorithm based on Canny and Hilbert-Huang transform", Journal of Northwestern Polytechnical University *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107276204A (en) * 2017-06-28 2017-10-20 苏州华商新能源有限公司 Energy-saving loading robot
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 Intelligent target detection and size measurement method based on a human vision model
CN107392929B (en) * 2017-07-17 2020-07-10 河海大学常州校区 Intelligent target detection and size measurement method based on human eye vision model
CN107577979A (en) * 2017-07-26 2018-01-12 中科创达软件股份有限公司 Method, device and electronic equipment for quickly identifying DataMatrix two-dimensional codes
CN107577979B (en) * 2017-07-26 2020-07-03 中科创达软件股份有限公司 Method and device for quickly identifying DataMatrix type two-dimensional code and electronic equipment
CN109447091A (en) * 2018-10-19 2019-03-08 福建师范大学 Image feature point extraction method with accurate coordinates
CN110490815A (en) * 2019-07-12 2019-11-22 西安理工大学 Region-adaptive denoising method based on the split Bregman algorithm
CN110706243A (en) * 2019-09-30 2020-01-17 Oppo广东移动通信有限公司 Feature point detection method and device, storage medium and electronic equipment
CN110706243B (en) * 2019-09-30 2022-10-21 Oppo广东移动通信有限公司 Feature point detection method and device, storage medium and electronic equipment
CN112230345A (en) * 2020-11-06 2021-01-15 桂林电子科技大学 Optical fiber auto-coupling alignment apparatus and method
CN112541507A (en) * 2020-12-17 2021-03-23 中国海洋大学 Multi-scale convolutional neural network feature extraction method, system, medium and application
CN112541507B (en) * 2020-12-17 2023-04-18 中国海洋大学 Multi-scale convolutional neural network feature extraction method, system, medium and application

Similar Documents

Publication Publication Date Title
CN106709499A (en) SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform
CN104318548B (en) Rapid image registration implementation method based on spatial sparsity and SIFT feature extraction
CN108805904B (en) Moving ship detection and tracking method based on satellite sequence image
Kim et al. Adaptive smoothness constraints for efficient stereo matching using texture and edge information
CN105741276B (en) Ship waterline extraction method
Sharma et al. A comparative study of edge detectors in digital image processing
CN108898610A (en) Object contour extraction method based on Mask R-CNN
CN108319949A (en) Multi-orientation ship target detection and recognition method in high-resolution remote sensing images
CN102609701B (en) Remote sensing detection method based on optimal scale for high-resolution SAR (synthetic aperture radar)
Rahmatullah et al. Integration of local and global features for anatomical object detection in ultrasound
CN106296638A (en) Saliency information acquisition device and saliency information acquisition method
CN110135438B (en) Improved SURF algorithm based on gradient amplitude precomputation
CN106683076A (en) Texture feature clustering-based locomotive wheelset tread damage detection method
CN108846844B (en) Sea surface target detection method based on the sea-sky line
CN102298773A (en) Shape-adaptive non-local mean denoising method
CN102903111B (en) Stereo matching algorithm for large low-texture areas based on image segmentation
CN111080574A (en) Fabric defect detection method based on information entropy and visual attention mechanism
CN109886989A (en) Automatic horizon tracing method for ground-penetrating radar based on the Canny operator
CN112329764A (en) Infrared dim target detection method based on TV-L1 model
CN105844643B (en) Distorted image detection method
Zhao et al. Analysis of image edge checking algorithms for the estimation of pear size
CN112163606B (en) Infrared small target detection method based on block contrast weighting
CN111105390B (en) Improved sea-sky-line detection and evaluation method
CN104156956B (en) Multi-corner edge detection operator method based on Gaussian wavelet one-dimensional peak recognition
CN110874599A (en) Ship detection method based on image recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170524
