CN104598911A - Image characterization method based on DoG function - Google Patents

Image characterization method based on DoG function

Info

Publication number
CN104598911A
Authority
CN
China
Prior art keywords
dog
convolution
sampled point
scale
gradient direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510048398.0A
Other languages
Chinese (zh)
Other versions
CN104598911B (en)
Inventor
王蕴红
翁大伟
黄迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201510048398.0A priority Critical patent/CN104598911B/en
Publication of CN104598911A publication Critical patent/CN104598911A/en
Application granted granted Critical
Publication of CN104598911B publication Critical patent/CN104598911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Abstract

The invention provides an image characterization method based on a DoG function. The method mainly includes the following steps: extracting sampling points and building a sampling point template; obtaining gradient maps of the image to be characterized in N directions; obtaining S+1 Gaussian-convolved gradient direction maps for each direction; obtaining, from the S+1 Gaussian-convolved maps of each direction, S DoG-convolved gradient direction maps; extracting, according to the DoG convolution scale of each sampling point and the position of the sampling point in the sampling point template, the pixel values at the same position from the DoG-convolved gradient direction maps of the different gradient directions; and finally forming a total feature vector that characterizes the image to be characterized. When used to characterize an image, the method achieves higher distinctiveness and better robustness.

Description

Image characterization method based on the DoG function
Technical field
The present invention relates to the field of computer vision, and in particular to an image characterization method based on the DoG (Difference of Gaussians) function.
Background art
Image characterization methods, also known as descriptors, are a fundamental and key problem in computer vision. They are widely used in traditional computer vision tasks such as large-scale image retrieval and panorama stitching, as well as in many recognition tasks such as object recognition and face recognition.
In the prior art there are many classes of descriptors. One class is based on image gradients, with SIFT and GLOH as representative algorithms. When SIFT or GLOH is computed, the descriptor is centered on an interest point of the image to be characterized, and the interest point is characterized by a gradient orientation histogram formed from the gradients of the region around it. The gradient magnitude of each pixel in this region is weighted and then assigned to the corresponding bin of the gradient histogram, where the weight applied to a pixel's gradient is inversely proportional to the pixel's distance to the interest point and to its distance to the boundary of the region.
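As a rough illustration only, and not the exact SIFT or GLOH implementation, the following Python/NumPy sketch shows the kind of weighted gradient orientation histogram described above; the weighting function, bin count and function names here are simplified assumptions.

    import numpy as np

    def weighted_orientation_histogram(patch, center, num_bins=8):
        # patch: 2-D array around an interest point; center: (row, col) of the interest point
        patch = np.asarray(patch, dtype=float)
        gy, gx = np.gradient(patch)                       # image gradients
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx) % (2 * np.pi)            # gradient orientation in [0, 2*pi)
        rows, cols = np.indices(patch.shape)
        d_center = np.hypot(rows - center[0], cols - center[1])
        d_border = np.minimum.reduce([rows, cols,
                                      patch.shape[0] - 1 - rows,
                                      patch.shape[1] - 1 - cols]).astype(float)
        # Illustrative weight: decays with distance to the interest point and to the region border
        w = 1.0 / (1.0 + d_center) * 1.0 / (1.0 + d_border)
        bins = (ang / (2 * np.pi) * num_bins).astype(int) % num_bins
        hist = np.zeros(num_bins)
        np.add.at(hist, bins, w * mag)                    # accumulate weighted magnitudes per bin
        return hist

    # Example: histogram of a random 32x32 patch around its centre
    h = weighted_orientation_histogram(np.random.rand(32, 32), center=(16, 16))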
However, when the above prior-art descriptors are used to characterize an image, they suffer from low distinctiveness and poor robustness.
Summary of the invention
The invention provides an image characterization method based on the DoG function that achieves high distinctiveness together with good robustness.
The invention provides an image characterization method based on the DoG function, comprising:
setting S concentric circles whose radii increase exponentially with base m from the inside outwards, where S is an integer greater than or equal to 2;
extracting T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template, where the DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i, R_i is the radius of the i-th concentric circle, i = 1, 2, ..., S, the DoG convolution scale of the sampling point at the center is the same as that of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function;
obtaining S+1 Gaussian kernel scales Σ_{g,j}, j = 1, 2, ..., S, S+1, from the DoG convolution scales of the sampling point template of the S concentric circles, where Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m;
obtaining the gradient maps of the image to be characterized in N directions, where N is an integer greater than or equal to 1;
for the gradient map of each direction, performing Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction;
for the S+1 Gaussian-convolved gradient direction maps of each direction, subtracting, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps, where the N DoG-convolved gradient direction maps of the different gradient directions at each level correspond to one DoG convolution scale;
according to the DoG convolution scale of each sampling point and the position of the sampling point in the sampling point template, extracting, from the DoG-convolved gradient direction maps corresponding to the DoG convolution scale of the sampling point, the pixel value of the point at the same position as the sampling point in the sampling point template;
constructing the feature vector of the sampling point from the pixel values extracted, according to its DoG convolution scale, at the position of the sampling point in the sampling point template from the corresponding N DoG-convolved gradient direction maps; and
characterizing the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
In the above image characterization method based on the DoG function, characterizing the image to be characterized by the total feature vector composed of the feature vectors of all the sampling points comprises:
splicing the feature vectors of the sampling points of the same DoG convolution scale in sequence according to the positions of the sampling points in the sampling point template, to obtain a feature vector for each DoG convolution scale;
splicing the feature vectors of the S DoG convolution scales in sequence according to the size order of the S DoG convolution scales, to form the total feature vector; and
using the total feature vector to characterize the image to be characterized.
In the above image characterization method based on the DoG function, constructing the feature vector of a sampling point by extracting, according to its DoG convolution scale, the pixel values at the position of the sampling point in the sampling point template from the corresponding N DoG-convolved gradient direction maps comprises:
obtaining the feature vector of the sampling point according to the formula h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) = [D_{θ_1}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)), ..., D_{θ_N}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c))];
where l_k(ε_0, v_0, R_c) denotes the k-th sampling point on the concentric circle of radius R_c centered at the point (ε_0, v_0), and D_{θ_i}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the pixel value, at position l_k(ε_0, v_0, R_c), of the DoG-convolved gradient direction map of direction θ_i with DoG convolution scale Σ_{d,c}.
In the above image characterization method based on the DoG function, splicing the feature vectors of the sampling points of the same DoG convolution scale in sequence according to the positions of the sampling points in the sampling point template to obtain the feature vector of each DoG convolution scale comprises:
obtaining the feature vector of DoG convolution scale Σ_{d,c} according to the formula H_{Σ_{d,c}}(ε_0, v_0, R_c) = [h_{Σ_{d,c}}(l_1(ε_0, v_0, R_c)), ..., h_{Σ_{d,c}}(l_T(ε_0, v_0, R_c))];
where h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the feature vector of the k-th sampling point at DoG convolution scale Σ_{d,c}.
In the above image characterization method based on the DoG function, splicing the feature vectors of the S DoG convolution scales in sequence according to the size order of the S DoG convolution scales to form the total feature vector comprises:
obtaining the total feature vector according to the formula D(ε_0, v_0) = [H_{Σ_{d,1}}(ε_0, v_0, R_1), ..., H_{Σ_{d,S}}(ε_0, v_0, R_S)];
where H_{Σ_{d,1}}(ε_0, v_0, R_1) denotes the feature vector of convolution kernel scale Σ_{d,1}.
The invention also provides an image characterization method based on the DoG function, comprising:
setting S concentric circles whose radii increase exponentially with base m from the inside outwards, where S is an integer greater than or equal to 2;
extracting T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template, where the DoG convolution scales of each sampling point comprise 1 first DoG convolution scale and P second DoG convolution scales, P is an integer and 2 ≤ P ≤ S−1, the first DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i, a second DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d',i} = η·R_k, i = 1, 2, ..., S, k = 1, 2, ..., S and k ≠ i, R_i is the radius of the i-th concentric circle, R_k is the radius of the k-th concentric circle, the first DoG convolution scale of the sampling point at the center is the same as the first DoG convolution scale of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function;
obtaining S+1 Gaussian kernel scales Σ_{g,j}, j = 1, 2, ..., S, S+1, from the first DoG convolution scales of the sampling point template of the S concentric circles, where Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m;
obtaining the gradient maps of the image to be characterized in N directions, where N is an integer greater than or equal to 1;
for the gradient map of each direction, performing Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction;
for the S+1 Gaussian-convolved gradient direction maps of each direction, subtracting, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps, so that DoG-convolved gradient direction maps of S scales are generated in total;
according to the first DoG convolution scale and the second DoG convolution scales of each sampling point and the position of the sampling point in the sampling point template, extracting, from the DoG-convolved gradient direction maps corresponding to the first and second DoG convolution scales of the sampling point, the pixel values of the points at the same position as the sampling point in the sampling point template;
constructing the feature vector of the sampling point from the pixel values extracted, according to its first and second DoG convolution scales, at the position of the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps; and
characterizing the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
In the above image characterization method based on the DoG function, constructing the feature vector of a sampling point by extracting, according to its first and second DoG convolution scales, the pixel values of the points at the same position as the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps comprises:
extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the first DoG convolution scale of the sampling point, and extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the P second DoG convolution scales of the sampling point, to form the feature vector of the sampling point.
The main steps of the image characterization method based on the DoG function provided by the invention are: extracting sampling points and building a sampling point template; obtaining the gradient maps of the image to be characterized in N directions; for the gradient map of each direction, obtaining the S+1 Gaussian-convolved direction maps of that direction; from the S+1 Gaussian-convolved direction maps of each direction, obtaining S DoG-convolved gradient direction maps; extracting, according to the DoG convolution scale of each sampling point and its position in the sampling point template, the pixel values at the same position from the DoG-convolved gradient direction maps of the different gradient directions; and finally forming the total feature vector that characterizes the image to be characterized. When used to characterize an image, the method achieves higher distinctiveness and better robustness.
Brief description of the drawings
Fig. 1 is a flowchart of the image characterization method based on the DoG function provided by this embodiment;
Fig. 2 is the single-scale sampling point template provided by this embodiment, with 8 sampling points on each concentric circle and 5 DoG scales;
Fig. 3 is another flowchart of the image characterization method based on the DoG function provided by this embodiment;
Fig. 4 is another flowchart of the image characterization method based on the DoG function provided by this embodiment;
Fig. 5 is the multi-scale sampling point template provided by this embodiment, with 4 sampling points on each concentric circle and 5 DoG scales;
Fig. 6 is the function curve of TF-DoG;
Fig. 7 shows example image patches from the databases used in the experiments.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The invention provides an image characterization method based on the DoG function. The method comprises steps S101 to S109; S101 to S103 describe how to obtain the sampling point template, and S104 to S109 describe how to use the sampling point template to characterize the image.
Fig. 1 is a flowchart of the image characterization method based on the DoG function provided by this embodiment. As shown in Fig. 1, the method comprises:
S101: set S concentric circles whose radii grow exponentially with base m from the inside outwards.
Here S is an integer greater than or equal to 2.
It should be noted that m is a constant; in practice, m can be chosen as 2^(2/3) ≈ 1.5874.
S102: extract T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template; the DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i.
Here R_i is the radius of the i-th concentric circle, i = 1, 2, ..., S; the DoG convolution scale of the sampling point at the center is the same as that of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function.
Fig. 2 shows the single-scale sampling point template provided by this embodiment, with 8 sampling points on each concentric circle and 5 DoG scales. As shown in Fig. 2, there are 5 concentric circles, 8 sampling points are taken uniformly on each concentric circle and 1 sampling point is taken at the center; these 41 sampling points form the sampling point template.
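A minimal Python/NumPy sketch of the sampling point template of S102 is given below; the function name, the innermost radius R1 and the values of m and η are assumptions chosen to be consistent with the values mentioned elsewhere in this description.

    import numpy as np

    def build_sampling_template(S=5, T=8, R1=3.0, m=2 ** (2 / 3), eta=0.25):
        """Return a list of (dy, dx, dog_scale) offsets relative to the template centre."""
        radii = [R1 * m ** i for i in range(S)]        # radii grow exponentially with base m
        dog_scales = [eta * R for R in radii]          # Sigma_{d,i} = eta * R_i
        points = [(0.0, 0.0, dog_scales[0])]           # centre point uses the smallest DoG scale
        for R, s in zip(radii, dog_scales):
            for t in range(T):
                a = 2 * np.pi * t / T
                points.append((R * np.sin(a), R * np.cos(a), s))
        return points

    template = build_sampling_template()               # 1 + 5*8 = 41 sampling points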
S103: obtain S+1 Gaussian kernel scales Σ_{g,j} from the DoG convolution scales of the sampling point template of the S concentric circles.
Here j = 1, 2, ..., S, S+1, with Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m.
Taking S = 5 as an example, the 5 concentric circles correspond to 5 different DoG convolution scales, which from smallest to largest are Σ_{d,1}, Σ_{d,2}, Σ_{d,3}, Σ_{d,4}, Σ_{d,5}. From these 5 DoG convolution scales, 6 Gaussian kernel scales are obtained: Σ_{g,1} = Σ_{d,1}, Σ_{g,2} = Σ_{d,2}, Σ_{g,3} = Σ_{d,3}, Σ_{g,4} = Σ_{d,4}, Σ_{g,5} = Σ_{d,5} and Σ_{g,6} = Σ_{d,5}·m.
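A small sketch of S103, reusing the template list from the sketch above: the S+1 Gaussian kernel scales are the S DoG convolution scales plus one extra scale Σ_{d,S}·m (function and variable names are illustrative).

    def gaussian_kernel_scales(dog_scales, m=2 ** (2 / 3)):
        # Sigma_{g,r} = Sigma_{d,r} for r = 1..S, plus Sigma_{g,S+1} = Sigma_{d,S} * m
        return list(dog_scales) + [dog_scales[-1] * m]

    # Example for S = 5
    dog_scales = sorted({s for (_, _, s) in template})  # the 5 distinct DoG scales, smallest first
    g_scales = gaussian_kernel_scales(dog_scales)       # 6 Gaussian kernel scales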
S104: obtain the gradient maps of the image to be characterized in N directions.
Here N is an integer greater than or equal to 1.
It should be noted that obtaining the gradient maps of the image in N directions means obtaining the gradient value of each pixel of the image to be characterized in each of the N directions; the gradient values of all pixels in the same direction form the gradient map of that direction.
Specifically, the gradient map of each direction of the image to be characterized can be obtained as G_{θ_i}(x, y) = (∂I/∂x·cosθ_i + ∂I/∂y·sinθ_i)^+, where ∂I/∂x is the partial derivative in the x direction, ∂I/∂y is the partial derivative in the y direction, θ_i is the i-th direction, and (a)^+ = max(a, 0). The horizontal and vertical gradients are obtained by convolving the image with templates, which can be chosen as required; in general, for ease of computation, the horizontal gradient is obtained by convolving the image with the template [1, −1] and the vertical gradient by convolving the image with the template [1, −1]^T, after which the above formula is used to obtain the gradient map of each direction of the image to be characterized.
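The oriented gradient maps of S104 could be computed as in the following sketch, which uses simple finite differences in place of the [1, −1] template convolutions and assumes N = 8 directions; names and defaults are illustrative.

    import numpy as np

    def oriented_gradient_maps(image, N=8):
        img = np.asarray(image, dtype=float)
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, :-1] = img[:, 1:] - img[:, :-1]          # horizontal difference ([1, -1] template)
        gy[:-1, :] = img[1:, :] - img[:-1, :]          # vertical difference ([1, -1]^T template)
        maps = []
        for i in range(N):
            theta = 2 * np.pi * i / N
            g = gx * np.cos(theta) + gy * np.sin(theta)
            maps.append(np.maximum(g, 0.0))            # (a)+ = max(a, 0)
        return maps

    grad_maps = oriented_gradient_maps(np.random.rand(64, 64))   # N gradient maps of a patch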
S105: for the gradient map of each direction, perform Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction.
After the gradient map of each direction has been obtained, the Gaussian-convolved gradient map of direction θ_i at scale Σ_{g,j} can be obtained as G_{θ_i}^{Σ_{g,j}} = g_{Σ_{g,j}} * G_{θ_i}, j = 1, ..., S, S+1, where g_{Σ_{g,j}} is the Gaussian kernel with scale Σ_{g,j}. Different standard deviations correspond to Gaussian kernels of different scales and hence to different coefficient matrices; convolving the gradient maps of the different directions with these coefficient matrices yields the Gaussian-convolved direction maps.
S106: for the S+1 Gaussian-convolved gradient direction maps of each direction, subtract, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps; the N DoG-convolved gradient direction maps of the different gradient directions at each level correspond to one DoG convolution scale.
After the S+1 Gaussian-convolved direction maps have been obtained, the DoG-convolved gradient direction map of direction θ_i at scale Σ_{d,c} can be obtained as D_{θ_i}^{Σ_{d,c}} = G_{θ_i}^{Σ_{g,c}} − G_{θ_i}^{Σ_{g,c+1}}, c = 1, ..., S, where Σ_{g,c} is the smaller and Σ_{g,c+1} the larger of the two adjacent Gaussian kernel scales, G_{θ_i}^{Σ_{g,c}} is the Gaussian-convolved gradient direction map of direction θ_i at scale Σ_{g,c}, and G_{θ_i}^{Σ_{g,c+1}} is the Gaussian-convolved gradient direction map of direction θ_i at scale Σ_{g,c+1}.
It should be noted that the DoG convolution scale corresponding to each DoG-convolved gradient direction map here is exactly the DoG convolution scale given by the formula Σ_{d,i} = η·R_i, where η is a proportionality coefficient; in practice, η can be chosen as 0.25.
It should also be noted that Gaussian filtering can be performed separately in the x and y directions, and that a large-scale Gaussian filtering result can be obtained by applying a further small-scale filtering on top of a smaller-scale result, which saves a large amount of computation.
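Steps S105 and S106 can be sketched with scipy.ndimage.gaussian_filter, reusing grad_maps and g_scales from the sketches above: each directional gradient map is smoothed at the S+1 Gaussian kernel scales, and maps of adjacent scales are subtracted (smaller scale minus larger scale, as described above). This sketch does not use the separable or incremental filtering optimization just mentioned.

    from scipy.ndimage import gaussian_filter

    def dog_gradient_maps(grad_maps, g_scales):
        """Return dogs[d][c]: the DoG-convolved map of direction d at DoG scale index c."""
        dogs = []
        for g in grad_maps:
            smoothed = [gaussian_filter(g, sigma=s) for s in g_scales]   # S+1 smoothed maps
            dogs.append([smoothed[c] - smoothed[c + 1]                   # small minus large
                         for c in range(len(g_scales) - 1)])             # S DoG maps per direction
        return dogs

    dogs = dog_gradient_maps(grad_maps, g_scales)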
S107: according to the DoG convolution scale of each sampling point and its position in the sampling point template, extract, from the DoG-convolved gradient direction maps corresponding to the DoG convolution scale of the sampling point, the pixel value of the point at the same position as the sampling point in the sampling point template.
S108: construct the feature vector of the sampling point from the pixel values extracted, according to its DoG convolution scale, at the position of the sampling point in the sampling point template from the corresponding N DoG-convolved gradient direction maps.
Optionally, the feature vector of a sampling point is obtained according to the formula h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) = [D_{θ_1}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)), ..., D_{θ_N}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c))].
Here l_k(ε_0, v_0, R_c) denotes the k-th sampling point on the concentric circle of radius R_c centered at the point (ε_0, v_0), and D_{θ_i}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the pixel value, at position l_k(ε_0, v_0, R_c), of the DoG-convolved gradient direction map of direction θ_i with DoG convolution scale Σ_{d,c}.
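Steps S107 and S108 then amount to reading, for each sampling point, one pixel value per gradient direction from the DoG maps at that point's scale; a sketch reusing dogs, template and dog_scales from the sketches above (the rounding of sampling point coordinates to pixel positions is an assumption):

    import numpy as np

    def point_feature(dogs, center, point, dog_scales):
        """h vector of one sampling point: N values, one per gradient direction."""
        dy, dx, s = point
        c = dog_scales.index(s)                        # scale index of this sampling point
        r = int(round(center[0] + dy))
        col = int(round(center[1] + dx))
        return np.array([dogs[d][c][r, col] for d in range(len(dogs))])

    center = (32, 32)                                  # keypoint at the patch centre
    h = point_feature(dogs, center, template[1], dog_scales)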
S109: characterize the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
Fig. 3 is another flowchart of the image characterization method based on the DoG function provided by this embodiment. On the basis of the embodiment shown in Fig. 1, and as shown in Fig. 3, the above step S109 of characterizing the image to be characterized by the total feature vector composed of the feature vectors of all the sampling points may comprise:
S109A: splice the feature vectors of the sampling points of the same DoG convolution scale in sequence according to the positions of the sampling points in the sampling point template, to obtain the feature vector of each DoG convolution scale.
Optionally, the feature vector of DoG convolution scale Σ_{d,c} is obtained according to the formula H_{Σ_{d,c}}(ε_0, v_0, R_c) = [h_{Σ_{d,c}}(l_1(ε_0, v_0, R_c)), ..., h_{Σ_{d,c}}(l_T(ε_0, v_0, R_c))].
Here h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the feature vector of the k-th sampling point at DoG convolution scale Σ_{d,c}.
S109B: splice the feature vectors of the S DoG convolution scales in sequence according to the size order of the S DoG convolution scales, to form the total feature vector.
Specifically, the total feature vector is obtained according to the formula D(ε_0, v_0) = [H_{Σ_{d,1}}(ε_0, v_0, R_1), ..., H_{Σ_{d,S}}(ε_0, v_0, R_S)].
Here H_{Σ_{d,1}}(ε_0, v_0, R_1) denotes the feature vector of convolution kernel scale Σ_{d,1}.
S109C: use the total feature vector to characterize the image to be characterized.
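Putting steps S101 to S109 together, a minimal end-to-end sketch that reuses the helper functions from the previous sketches; the grouping and concatenation order follow S109A and S109B, while the parameter defaults remain illustrative assumptions.

    import numpy as np

    def describe_patch(image, center, S=5, T=8, N=8, R1=3.0, m=2 ** (2 / 3), eta=0.25):
        template = build_sampling_template(S, T, R1, m, eta)
        dog_scales = sorted({s for (_, _, s) in template})
        g_scales = gaussian_kernel_scales(dog_scales, m)
        grad_maps = oriented_gradient_maps(image, N)
        dogs = dog_gradient_maps(grad_maps, g_scales)
        # Group the per-point h vectors by DoG scale, preserving template order (S109A),
        # then concatenate the per-scale vectors from the smallest to the largest scale (S109B).
        per_scale = {c: [] for c in range(S)}
        for pt in template:
            c = dog_scales.index(pt[2])
            per_scale[c].append(point_feature(dogs, center, pt, dog_scales))
        return np.concatenate([np.concatenate(per_scale[c]) for c in range(S)])

    desc = describe_patch(np.random.rand(64, 64), center=(32, 32))   # (1 + S*T) * N values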
The image characterization method based on the DoG function provided by this embodiment builds a sampling point template by extracting sampling points; obtains the gradient maps of the image to be characterized in N directions; for the gradient map of each direction, obtains the S+1 Gaussian-convolved direction maps of that direction; from the S+1 Gaussian-convolved direction maps of each direction, obtains S DoG-convolved gradient direction maps; extracts, according to the DoG convolution scale of each sampling point and its position in the sampling point template, the pixel values at the same position from the DoG-convolved gradient direction maps of the different gradient directions; and finally forms the total feature vector that characterizes the image to be characterized, so that the characterization of the image achieves higher distinctiveness and better robustness.
In the image characterization method based on the DoG function provided by the above embodiment, each sampling point corresponds to a single DoG convolution scale, i.e. the single-scale case. The invention also provides an image characterization method based on the DoG function in which each sampling point corresponds to more than one DoG convolution scale, i.e. the multi-scale case. This method comprises steps S201 to S209; S201 to S203 describe how to obtain the sampling point template, and S204 to S209 describe how to use the sampling point template to characterize the image.
Fig. 4 is another flowchart of the image characterization method based on the DoG function provided by this embodiment. As shown in Fig. 4, the method comprises:
S201: set S concentric circles whose radii grow exponentially with base m from the inside outwards.
Here S is an integer greater than or equal to 2.
S202: extract T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template; the DoG convolution scales of each sampling point comprise 1 first DoG convolution scale and P second DoG convolution scales, the first DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i, and a second DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d',i} = η·R_k.
Here P is an integer with 2 ≤ P ≤ S−1, i = 1, 2, ..., S, k = 1, 2, ..., S and k ≠ i; R_i is the radius of the i-th concentric circle and R_k is the radius of the k-th concentric circle; the first DoG convolution scale of the sampling point at the center is the same as the first DoG convolution scale of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function.
Fig. 5 shows the multi-scale sampling point template provided by this embodiment, with 4 sampling points on each concentric circle and 5 DoG scales. As shown in Fig. 5, there are 5 concentric circles, 4 sampling points are taken uniformly on each concentric circle and 1 sampling point is taken at the center; these 21 sampling points form the sampling point template.
It should also be noted that the first DoG convolution scale of a sampling point is its intrinsic scale, obtained in the same way as the DoG convolution scale described above for the single-scale case; a second DoG convolution scale of a sampling point is the first DoG convolution scale of some other sampling point that does not lie on the same concentric circle, and each sampling point may have one or more second DoG convolution scales.
Taking S = 5 as an example, the 5 concentric circles each correspond to one first DoG convolution scale (their intrinsic scale); from smallest to largest these are Σ_{d,1}, Σ_{d,2}, Σ_{d,3}, Σ_{d,4}, Σ_{d,5}, corresponding respectively to the first, second, third, fourth and fifth concentric circles from the inside outwards. With S = 5 and P = 4, the sampling points on the first concentric circle have, besides their intrinsic DoG convolution scale Σ_{d,1}, 4 second DoG convolution scales, namely the first DoG convolution scales Σ_{d,2}, Σ_{d,3}, Σ_{d,4}, Σ_{d,5} of the sampling points on the second, third, fourth and fifth concentric circles. Likewise, the sampling points on the second concentric circle have, besides their intrinsic scale Σ_{d,2}, the 4 second DoG convolution scales Σ_{d,1}, Σ_{d,3}, Σ_{d,4}, Σ_{d,5}, and the sampling points on the fifth concentric circle have, besides their intrinsic scale Σ_{d,5}, the 4 second DoG convolution scales Σ_{d,1}, Σ_{d,2}, Σ_{d,3}, Σ_{d,4}. With S = 5 and P = 2, each sampling point on the five concentric circles may take up to 2 second DoG convolution scales in addition to its intrinsic scale, and the rule for selecting the second DoG convolution scales can be formulated according to actual needs; for example, the DoG convolution scales adjacent to the sampling point's own DoG convolution scale can be chosen as its second DoG convolution scales. Under this rule, the sampling points on the first concentric circle can only have one second DoG convolution scale, Σ_{d,2}, besides their intrinsic scale Σ_{d,1}, the sampling points on the fifth concentric circle can only have one second DoG convolution scale, Σ_{d,4}, besides their intrinsic scale Σ_{d,5}, and the sampling points on the second, third and fourth concentric circles each have 2 second DoG convolution scales.
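One possible reading of the selection rules sketched above, covering the P = S−1 case (all other circles) and the adjacent-scale rule for P = 2, is the following illustrative helper; the description leaves the exact rule open, so this is an assumption.

    def second_scale_indices(i, S, P):
        """Indices (0-based) of the circles whose first DoG scales serve as circle i's second scales."""
        if P >= S - 1:
            return [k for k in range(S) if k != i]     # every other circle's scale
        # adjacent-scale rule: the scales next to the point's own scale (fewer at the ends)
        neighbours = [k for k in (i - 1, i + 1) if 0 <= k < S]
        return neighbours[:P]

    # With S = 5, P = 2: circle 0 gets [1], circle 4 gets [3], the middle circles get two each
    print([second_scale_indices(i, 5, 2) for i in range(5)])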
S203: obtain S+1 Gaussian kernel scales Σ_{g,j} from the first DoG convolution scales of the sampling point template of the S concentric circles.
Here j = 1, 2, ..., S, S+1, with Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m.
S204: obtain the gradient maps of the image to be characterized in N directions.
Here N is an integer greater than or equal to 1.
S205: for the gradient map of each direction, perform Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction.
S206: for the S+1 Gaussian-convolved gradient direction maps of each direction, subtract, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps, so that DoG-convolved gradient direction maps of S scales are generated in total.
S207: according to the first and second DoG convolution scales of each sampling point and its position in the sampling point template, extract, from the DoG-convolved gradient direction maps corresponding to the first and second DoG convolution scales of the sampling point, the pixel values of the points at the same position as the sampling point in the sampling point template.
S208: construct the feature vector of the sampling point from the pixel values extracted, according to its first and second DoG convolution scales, at the position of the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps.
S209: characterize the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
On the basis of the above embodiment, constructing the feature vector of a sampling point by extracting, according to its first and second DoG convolution scales, the pixel values of the points at the same position as the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps comprises:
extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the first DoG convolution scale of the sampling point, and extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the P second DoG convolution scales of the sampling point, to form the feature vector of the sampling point.
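The multi-scale feature vector of S207 and S208 concatenates, for each sampling point, the values taken from the DoG maps at its first scale and at each of its second scales; a sketch building on the earlier helper sketches (the ordering of the scales within the vector is an assumption):

    import numpy as np

    def multiscale_point_feature(dogs, center, point, dog_scales, S, P):
        dy, dx, s = point
        own = dog_scales.index(s)
        scales = [own] + second_scale_indices(own, S, P)   # first scale, then the second scales
        r = int(round(center[0] + dy))
        col = int(round(center[1] + dx))
        # N values per selected scale, concatenated in scale order
        return np.array([dogs[d][c][r, col] for c in scales for d in range(len(dogs))])

    h_ms = multiscale_point_feature(dogs, (32, 32), template[1], dog_scales, S=5, P=2)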
The image characterization method based on the DoG function provided by this embodiment builds a sampling point template by extracting sampling points; obtains the gradient maps of the image to be characterized in N directions; for the gradient map of each direction, obtains the S+1 Gaussian-convolved direction maps of that direction; from the S+1 Gaussian-convolved direction maps of each direction, obtains S DoG-convolved gradient direction maps; extracts, according to the DoG convolution scales of each sampling point and its position in the sampling point template, the pixel values at the same position from the DoG-convolved gradient direction maps of the different gradient directions; and finally forms the total feature vector that characterizes the image to be characterized, so that the characterization of the image achieves higher distinctiveness and better robustness.
The invention also provides a theoretical justification of the above image characterization method from the viewpoint of wavelet theory. The ability of a wavelet to characterize a signal depends on the properties of the wavelet: a tight-frame wavelet can not only completely characterize signals in a Hilbert space, but also improve the distinctiveness and robustness of the characterization. Although the DoG wavelet has the advantages of circular symmetry and high computational efficiency and is often used to approximate the Mexican hat wavelet (the second derivative of a Gaussian), the frame it forms is not very tight. To verify that the traditional gradient-based descriptors are in fact instances of our wavelet descriptor, and to build a descriptor with higher distinctiveness and better robustness, we construct a tighter DoG wavelet, named TF-DoG, whose function is as follows:
Ψ(x, y, σ, θ) = [kπ(k² + 1) / (πσ(k² − 1))] · [ e^(−(x² + y²)/(2σ²)) · ( e^(i(x·cosθ + y·sinθ)/(λσ)) − e^(−1/(2λ²)) ) − e^(−k²(x² + y²)/(2σ²)) · ( e^(i(x·cosθ + y·sinθ)/(λσ)) − e^(−1/(2k²λ²)) ) ]
Here i is the imaginary unit, x and y are the spatial coordinates of the wavelet, and θ is the rotation direction of the wavelet at each sampling point. The DoG wavelet is circularly symmetric, so it has only one direction at each sampling point; TF-DoG is not a circularly symmetric function, and at each sampling point we take eight rotation directions: 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°. 1/σ represents the standard deviation of the Gaussian function, k is the ratio of the standard deviations of the two Gaussian functions, and λ is a constant that controls the spatial bandwidth; in our descriptor this constant is set to 1.5, for which the corresponding wavelet frame is tighter. The tightness of a wavelet frame can be characterized by the ratio of its frame bounds: the closer B/A is to 1, the tighter the frame. The computed frame bounds of the TF-DoG wavelet are listed in the following table:
Table 1: Frame bounds of TF-DoG under different parameter configurations
Here b_0 is the translation interval of the wavelet in the spatial domain, M is the range of scales, i.e. the number of scales, N is the number of sampling steps per radian, A is the lower frame bound, B is the upper frame bound, and K is the number of rotation directions of the wavelet function at each sampling point. From the table the following can be observed:
1. As the number of rotation directions increases, the frame becomes tighter and tighter, but 8 directions are already sufficient to form a fairly tight frame.
2. Only a limited number of scales is needed to form a nearly tight frame, which is consistent with the descriptor constructed earlier; therefore, only five scales are used in the DoG-based descriptor.
3. The spatial bandwidth parameter and the ratio of the standard deviations of the two Gaussian functions also affect the tightness of the frame; in the descriptor below, the spatial bandwidth parameter is taken as 1.5.
Fig. 6 shows the function curve of TF-DoG with λ = 1.5, together with a comparison with the DoG function; the solid line represents TF-DoG and the dotted line represents DoG.
The following experimental data were obtained by using the image characterization method of the present invention, based on tight wavelet frames, for image patch matching. We used the multi-view stereo matching database, which is currently the best benchmark for testing descriptor performance and is well suited to evaluating how descriptors cope with non-planar transformations and viewpoint changes. The database comprises three subsets, Yosemite, Notre Dame and Liberty, each containing 450,000 image patches of size 64*64 pixels. The patches are cut out around interest points extracted with the SIFT interest point detector, and all patches are normalized for scale and dominant orientation; Fig. 7 shows example image patches from the databases. From its 450,000 patches, each database provides 500,000 patch pairs, of which 50% are matching pairs and 50% are non-matching pairs. Features are extracted from each image patch with our descriptor extraction method, the extracted features are normalized with the standard SIFT normalization, and the Euclidean distance between each pair of features is computed as the basis for deciding whether the pair is a match.
Since the image patches are 64*64, the radius of the fifth circle of the descriptor is set to 23. In the experiments we used four training/test set combinations, Yosemite-NotreDame, Yosemite-Liberty, NotreDame-Yosemite and NotreDame-Liberty, where the former is the training set and the latter the test set. 100,000 patch pairs from the training set are used to fine-tune the parameters of the descriptor, and 100,000 patch pairs from the test set are used to evaluate it. To report the results, we give the false acceptance rate on non-matching pairs when the accuracy on matching pairs is 95%.
Our method is compared with traditional gradient-based descriptors such as SIFT, HOG and DAISY; as shown in Table 2, the descriptor of our invention is significantly better than the traditional descriptors. It is also compared with descriptors obtained by learning-based methods, such as the method of Simonyan et al. (2014, TPAMI), the method of Brown et al. (2011, TPAMI) and rootSIFT (CVPR 2012); although our descriptor is not learned from training samples, it outperforms the learning-based methods, as shown in Table 3 (the numbers in round brackets in the table are the descriptor dimensions). In addition, we evaluated the performance of the descriptor after dimensionality reduction; as shown in Table 4, the performance of the descriptor is further improved after dimensionality reduction. The results in Table 4 show that the performance of the TF-DoG-based descriptor is improved further still, which also confirms our construction theory: the tighter the frame, the better the descriptor performance.
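The evaluation criterion used above, the false acceptance rate on non-matching pairs when 95% of the matching pairs are accepted, can be computed as in the following sketch; descriptor extraction and the databases themselves are outside the scope of this snippet, and the synthetic distances are for illustration only.

    import numpy as np

    def false_acceptance_at_95(dist_match, dist_nonmatch):
        """dist_match / dist_nonmatch: Euclidean distances of matching / non-matching pairs."""
        thr = np.percentile(dist_match, 95)                # accept 95% of the true matching pairs
        return np.mean(np.asarray(dist_nonmatch) <= thr)   # fraction of non-matches wrongly accepted

    # Toy example with synthetic distances
    rng = np.random.default_rng(0)
    d_match = rng.normal(1.0, 0.3, 100000)
    d_non = rng.normal(2.0, 0.5, 100000)
    print(false_acceptance_at_95(d_match, d_non))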
Table 2: Performance comparison of the descriptor of the invention with traditional descriptors
Table 3: Performance comparison of the descriptor of the invention with learning-based descriptors
Table 4: Performance of the TF-DoG-based descriptor
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some or all of the technical features therein, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the various embodiments of the present invention.

Claims (7)

1. An image characterization method based on the DoG function, characterized by comprising:
setting S concentric circles whose radii increase exponentially with base m from the inside outwards, where S is an integer greater than or equal to 2;
extracting T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template, where the DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i, R_i is the radius of the i-th concentric circle, i = 1, 2, ..., S, the DoG convolution scale of the sampling point at the center is the same as that of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function;
obtaining S+1 Gaussian kernel scales Σ_{g,j}, j = 1, 2, ..., S, S+1, from the DoG convolution scales of the sampling point template of the S concentric circles, where Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m;
obtaining the gradient maps of the image to be characterized in N directions, where N is an integer greater than or equal to 1;
for the gradient map of each direction, performing Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction;
for the S+1 Gaussian-convolved gradient direction maps of each direction, subtracting, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps, where the N DoG-convolved gradient direction maps of the different gradient directions at each level correspond to one DoG convolution scale;
according to the DoG convolution scale of each sampling point and the position of the sampling point in the sampling point template, extracting, from the DoG-convolved gradient direction maps corresponding to the DoG convolution scale of the sampling point, the pixel value of the point at the same position as the sampling point in the sampling point template;
constructing the feature vector of the sampling point from the pixel values extracted, according to its DoG convolution scale, at the position of the sampling point in the sampling point template from the corresponding N DoG-convolved gradient direction maps; and
characterizing the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
2. The method according to claim 1, characterized in that characterizing the image to be characterized by the total feature vector composed of the feature vectors of all the sampling points comprises:
splicing the feature vectors of the sampling points of the same DoG convolution scale in sequence according to the positions of the sampling points in the sampling point template, to obtain a feature vector for each DoG convolution scale;
splicing the feature vectors of the S DoG convolution scales in sequence according to the size order of the S DoG convolution scales, to form the total feature vector; and
using the total feature vector to characterize the image to be characterized.
3. The method according to claim 1, characterized in that constructing the feature vector of a sampling point by extracting, according to its DoG convolution scale, the pixel values at the position of the sampling point in the sampling point template from the corresponding N DoG-convolved gradient direction maps comprises:
obtaining the feature vector of the sampling point according to the formula h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) = [D_{θ_1}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)), ..., D_{θ_N}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c))];
where l_k(ε_0, v_0, R_c) denotes the k-th sampling point on the concentric circle of radius R_c centered at the point (ε_0, v_0), and D_{θ_i}^{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the pixel value, at position l_k(ε_0, v_0, R_c), of the DoG-convolved gradient direction map with DoG convolution scale Σ_{d,c}.
4. The method according to claim 2, characterized in that splicing the feature vectors of the sampling points of the same DoG convolution scale in sequence according to the positions of the sampling points in the sampling point template to obtain the feature vector of each DoG convolution scale comprises:
obtaining the feature vector of DoG convolution scale Σ_{d,c} according to the formula H_{Σ_{d,c}}(ε_0, v_0, R_c) = [h_{Σ_{d,c}}(l_1(ε_0, v_0, R_c)), ..., h_{Σ_{d,c}}(l_T(ε_0, v_0, R_c))];
where h_{Σ_{d,c}}(l_k(ε_0, v_0, R_c)) denotes the feature vector of the k-th sampling point at DoG convolution scale Σ_{d,c}.
5. The method according to claim 3, characterized in that splicing the feature vectors of the S DoG convolution scales in sequence according to the size order of the S DoG convolution scales to form the total feature vector comprises:
obtaining the total feature vector according to the formula D(ε_0, v_0) = [H_{Σ_{d,1}}(ε_0, v_0, R_1), ..., H_{Σ_{d,S}}(ε_0, v_0, R_S)];
where H_{Σ_{d,1}}(ε_0, v_0, R_1) denotes the feature vector of convolution kernel scale Σ_{d,1}.
6. An image characterization method based on the DoG function, characterized by comprising:
setting S concentric circles whose radii increase exponentially with base m from the inside outwards, where S is an integer greater than or equal to 2;
extracting T sampling points on each concentric circle and 1 sampling point at the common center of the concentric circles to form the sampling point template, where the DoG convolution scales of each sampling point comprise 1 first DoG convolution scale and P second DoG convolution scales, P is an integer and 2 ≤ P ≤ S−1, the first DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d,i} = η·R_i, a second DoG convolution scale of the sampling points on the i-th concentric circle is Σ_{d',i} = η·R_k, i = 1, 2, ..., S, k = 1, 2, ..., S and k ≠ i, R_i is the radius of the i-th concentric circle, R_k is the radius of the k-th concentric circle, the first DoG convolution scale of the sampling point at the center is the same as the first DoG convolution scale of the sampling points on the circle with the smallest radius, and the DoG convolution scale refers to the standard deviation of the smaller-scale Gaussian function contained in the DoG function;
obtaining S+1 Gaussian kernel scales Σ_{g,j}, j = 1, 2, ..., S, S+1, from the first DoG convolution scales of the sampling point template of the S concentric circles, where Σ_{g,r} = Σ_{d,r} for r = 1, 2, ..., S and Σ_{g,S+1} = Σ_{d,S}·m;
obtaining the gradient maps of the image to be characterized in N directions, where N is an integer greater than or equal to 1;
for the gradient map of each direction, performing Gaussian convolution with the Gaussian kernels of the S+1 scales to obtain the S+1 Gaussian-convolved gradient direction maps of each direction;
for the S+1 Gaussian-convolved gradient direction maps of each direction, subtracting, for each pair of maps whose Gaussian kernel scales are adjacent, the map of the larger scale from the map of the smaller scale, to obtain S DoG-convolved gradient direction maps, so that DoG-convolved gradient direction maps of S scales are generated in total;
according to the first DoG convolution scale and the second DoG convolution scales of each sampling point and the position of the sampling point in the sampling point template, extracting, from the DoG-convolved gradient direction maps corresponding to the first and second DoG convolution scales of the sampling point, the pixel values of the points at the same position as the sampling point in the sampling point template;
constructing the feature vector of the sampling point from the pixel values extracted, according to its first and second DoG convolution scales, at the position of the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps; and
characterizing the image to be characterized by the total feature vector formed from the feature vectors of all the sampling points.
7. The method according to claim 6, characterized in that constructing the feature vector of a sampling point by extracting, according to its first and second DoG convolution scales, the pixel values of the points at the same position as the sampling point in the sampling point template from the corresponding DoG-convolved gradient direction maps comprises:
extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the first DoG convolution scale of the sampling point, and extracting the pixel values of the points at the same position as the sampling point in the sampling point template from the DoG-convolved gradient direction maps of the different directions corresponding to the P second DoG convolution scales of the sampling point, to form the feature vector of the sampling point.
CN201510048398.0A 2015-01-30 2015-01-30 A kind of image-characterization methods based on DoG functions Active CN104598911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510048398.0A CN104598911B (en) 2015-01-30 2015-01-30 A kind of image-characterization methods based on DoG functions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510048398.0A CN104598911B (en) 2015-01-30 2015-01-30 A kind of image-characterization methods based on DoG functions

Publications (2)

Publication Number Publication Date
CN104598911A true CN104598911A (en) 2015-05-06
CN104598911B CN104598911B (en) 2017-12-19

Family

ID=53124683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510048398.0A Active CN104598911B (en) 2015-01-30 2015-01-30 A kind of image-characterization methods based on DoG functions

Country Status (1)

Country Link
CN (1) CN104598911B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022342A (en) * 2016-05-05 2016-10-12 南京邮电大学 Image feature extraction method based on KAZE algorithm


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09305766A (en) * 1996-05-17 1997-11-28 Meidensha Corp Recognition method for two-dimensional object
CN101650784A (en) * 2009-09-23 2010-02-17 南京大学 Method for matching images by utilizing structural context characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
唐永鹤 et al.: "Sequence image matching algorithm based on DOG feature points", Modern Electronics Technique *
罗晓晖 et al.: "Line detection algorithm based on the DOG model", Journal of Computer-Aided Design & Computer Graphics *


Also Published As

Publication number Publication date
CN104598911B (en) 2017-12-19

Similar Documents

Publication Publication Date Title
CN102662949B (en) Method and system for retrieving specified object based on multi-feature fusion
CN101303768B (en) Method for correcting circle center error of circular index point when translating camera perspective projection
CN111626269B (en) Practical large-space-range landslide extraction method
CN103426186A (en) Improved SURF fast matching method
CN101714254A (en) Registering control point extracting method combining multi-scale SIFT and area invariant moment features
CN104036289A (en) Hyperspectral image classification method based on spatial and spectral features and sparse representation
CN103065135A (en) License number matching algorithm based on digital image processing
TWI503760B (en) Image description and image recognition method
CN106778526B (en) A kind of extensive efficient face identification method based on Hamming distance
CN104820718A (en) Image classification and searching method based on geographic position characteristics and overall situation vision characteristics
CN104182973A (en) Image copying and pasting detection method based on circular description operator CSIFT (Colored scale invariant feature transform)
CN104978582B (en) Shelter target recognition methods based on profile angle of chord feature
Li et al. RIFT: Multi-modal image matching based on radiation-invariant feature transform
Yuan et al. Learning to count buildings in diverse aerial scenes
CN104217426A (en) Object-oriented water-body extracting method based on ENVISAT ASAR and Landsat TM remote sensing data
CN105654122B (en) Based on the matched spatial pyramid object identification method of kernel function
CN105654421A (en) Projection transform image matching method based on transform invariant low-rank texture
CN107958443A (en) A kind of fingerprint image joining method based on crestal line feature and TPS deformation models
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN104616280A (en) Image registration method based on maximum stable extreme region and phase coherence
CN105760879A (en) Fourier-Mellin transform-based image geometric matching method
Li et al. Enhanced automatic root recognition and localization in GPR images through a YOLOv4-based deep learning approach
CN113379777A (en) Shape description and retrieval method based on minimum circumscribed rectangle vertical internal distance proportion
CN105809678A (en) Global matching method for line segment characteristics between two views under short baseline condition
CN103336964B (en) SIFT image matching method based on module value difference mirror image invariant property

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant