CN104268550A - Feature extraction method and device - Google Patents


Info

Publication number
CN104268550A
CN104268550A (application CN201410479118.7A)
Authority
CN
China
Prior art keywords
image
feature point
feature
window
scale window
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201410479118.7A
Other languages
Chinese (zh)
Other versions
CN104268550B (en)
Inventor
鲁路平
张勇
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201410479118.7A priority Critical patent/CN104268550B/en
Publication of CN104268550A publication Critical patent/CN104268550A/en
Application granted granted Critical
Publication of CN104268550B publication Critical patent/CN104268550B/en
Legal status: Expired - Fee Related


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a feature extraction method and device that address the poor image-matching results obtained in the prior art when the difference between shooting view angles is large. The method comprises: extracting the feature points of an original image; assigning each feature point two or more scale windows of different sizes; within each scale window, performing region growing with the feature point at the window's centre as the seed point, so that each feature point yields two or more foreground regions, one per scale window assigned to it; computing the covariance matrix of each foreground region; correcting the affine deformation of the original image according to these covariance matrices; and extracting and describing feature points on the corrected image. The method markedly improves image-matching results and is convenient to implement and easy to apply and popularise.

Description

Feature extraction method and device
Technical field
The present invention relates to image matching technology, and in particular to an image feature extraction method and device.
Background technology
Imaging the same scene from different shooting positions and then matching the resulting images fully automatically — finding the corresponding points (i.e., the image positions of the same object-space point in different images) — is a fundamental step in computer vision. The greater the difference between the shooting positions, the more severe the distortion between the corresponding images, and the harder the matching becomes.
Current image matching algorithms can match images with little distortion well, but when the shooting view angles differ greatly — by more than about 40 degrees — the matching quality can no longer be guaranteed.
Prior-art matching algorithms are capable of handling distortion up to roughly the planar affine level, even when it is fairly severe — for example, view-angle differences within 40 degrees — but beyond that the matching result cannot be guaranteed. The main reasons for this limited matching capability are as follows:
If an algorithm that performs no local affine rectification is used — for example the Scale Invariant Feature Transform (SIFT) — image distortion harms it in three ways. First, describing a feature requires choosing the size of the description window according to the feature's scale; as the deformation between images deepens, the anisotropy of scale across each image grows, scale selection with the isotropic Difference of Gaussians (DoG) operator deteriorates sharply, and the image extent covered by a descriptor becomes inconsistent across images. Second, for severely distorted images the same object-space neighbourhood images to differently shaped regions on each image, so a circular or square window cannot be guaranteed to cover the same neighbourhood. Third, the anisotropy of the imaging also makes the gradient distributions within corresponding neighbourhoods differ greatly. Together these factors sharply reduce the similarity between the descriptor vectors of corresponding features and cause matching to fail. Hence, without first rectifying the image locally, scale-invariant description of this kind is pointless.
If the image is first rectified locally, an affine-invariant matching algorithm is needed. At present, Maximally Stable Extremal Regions (MSER) is an algorithm that corrects deformation well: it performs affine rectification by finding the most stable salient regions in the image. In practice, however, it runs into several problems. First, because only the most stable salient regions are used, the number of features is low and their distribution cannot be guaranteed; the method is picky about scene type and favours scenes rich in region features. Second, an extracted region may span too large an extent, and this non-locality amplifies the influence of dissimilar surroundings on matching. Hardest of all, because of the complexity of images, MSER usually cannot guarantee good correspondence between the extracted regions, so the rectification results come out dissimilar.
For images that are hard to match, the proportion of correct correspondences among the nearest neighbour — or the first k (k > 0) neighbours — found by descriptor-vector similarity is very low, so correct matches cannot be selected reliably; this is the main cause of matching failure.
From the above analysis the inventors draw this conclusion: finding the same neighbourhood extent is the essence of image matching. If the local image is rectified, described and matched according to the characteristics of that neighbourhood, aided by a good outlier-elimination strategy, even images that are hard to match can probably be matched successfully. The inventors therefore designed an affine-invariant feature extraction method based on region growing from feature points, to improve the matching of strongly deformed images. The main design ideas of the embodiments of the invention are as follows:
A small number of local pixels already reflects the local deformation; there is no need to know the overall shape of the object, nor to extract a larger extent. Given the characteristics of point features, the embodiments start from feature points and build the subsequent operations on them.
Since roughly identical neighbourhoods must be found, region growing is performed locally with each extracted feature point as the seed, so that the resulting shape is not only similar in grey level but also spatially connected. Points that do not belong to the shape are eliminated as far as possible, excluding the interference of other textures; this keeps the computed affine deformation parameters clean and improves the similarity of the rectified local images.
Because neither the overall object shape nor the scale difference between the images is known, and the anisotropy of scale within each image must also be considered, a single window size for region growing makes a similar rectification result unlikely. Region growing therefore has to be carried out at multiple scales: a scale space is built and each feature point is grown at several scales, improving the completeness of the features.
While ensuring feature completeness as far as possible, the redundancy of the features should also be reduced. An ideal corner is, to some extent, a scale-independent feature: region growing at different scales yields similar affine correction parameters, and if the resulting foreground shapes are fitted with ellipses, the rotation directions and minor-to-major axis ratios of these ellipses are close. For example, when region growing is performed on the same feature point with windows of four different sizes, the four fitted ellipses are essentially consistent in shape; to reduce redundancy, representative scales are therefore selected and the redundant remaining scales removed.
To reduce the influence of the window on the shape parameters, the embodiments preferably delimit the neighbourhood of a feature point with a circular window rather than a window of any other shape. In an implementation, region growing can first be carried out in a rectangular window, after which the result is intersected with a circular template, removing the foreground pixels that fall outside the circle.
Based on the above design ideas, and as shown in Figure 1, an embodiment of the invention discloses a feature extraction method, applied to a feature extraction device, comprising:
Step S300: extract the feature points (initial feature points) of the original image.
Step S301: assign each feature point two or more scale windows of different sizes, a scale window being a region centred on the feature point; within each scale window, perform region growing with the central feature point as the seed point, obtaining two or more foreground regions per feature point — as many as the scale windows assigned to it.
Step S302: compute the covariance matrix of each foreground region.
Step S303: correct the affine deformation of the image according to the covariance matrices of the foreground regions.
Step S304: extract feature points on the corrected image and describe them.
Further, as shown in Figure 2, the main technical scheme of the embodiment comprises:
Step S200: extract the feature points of the original image with a feature point extraction algorithm.
Step S201: perform region growing with the feature points extracted in step S200 as seed points, obtaining the foreground regions (support regions).
Step S202: compute the shape parameters of the foreground regions.
Step S203: select the representative scale windows and remove the redundant scale windows.
Step S204: correct the affine deformation of the image.
Step S205: extract feature points on the corrected image and describe them.
The details of each step are as follows:
Step S200: extract the feature points of the original image with a feature point extraction algorithm.
Point features are chosen as the basic element of the algorithm. They are the most elementary kind of feature, are the least picky about scene type, apply widely, and give good guarantees on feature count, positional distribution, locality and repeatability.
Feature points divide mainly into corner points and blob points. A corner point is generally an edge intersection, an end point, or a point of strong curvature change. A blob point usually sits inside a boundary formed by grey-level change, with a comparatively homogeneous interior; the size of that enclosing boundary embodies the size — that is, the scale — of the feature, so blob points are intrinsically scale-bearing. The inventors found through study that corner points and blob points are complementary in position; combining them makes the feature types on an image richer and their distribution more even. The embodiments introduce one typical extraction algorithm for each of these two feature types and, by analysing their respective characteristics, adapt how each is used.
The Harris feature point extraction algorithm is a typical corner extractor. Its design idea is that the distribution of gradients on an image reflects the type of image feature: in a homogeneous area the gradient is small in all directions; at an edge it is large in some directions and small in others; at a feature point it is large in at least two directions. The gradient distribution can be represented by the covariance matrix of the x- and y-direction gradients; whether an image point is a feature point is judged from the eigenvalues of the covariance matrix formed from the gradients in its neighbourhood. This matrix is also called the second moment matrix. Harris feature points are extracted as follows:
Compute the image gradients in the x and y directions, e.g. by convolving along rows and columns with a first-order Gaussian derivative template; compute the three components Ix², IxIy and Iy² of the second moment matrix; apply a Gaussian weighted average over a neighbourhood to the three components, the neighbourhood size being determined by the standard deviation σ of the Gaussian; compute the Harris response of each image point; take the local maxima of the response as candidate feature points, then filter out the final feature points according to the application, e.g. by setting a grid size for extraction.
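As a concrete illustration of the steps above, the following is a minimal NumPy sketch of the Harris response with a derivative scale σ_d and a larger integration scale σ_i. This is our own illustrative code, not the patent's implementation; kernel truncation at 3σ and the parameter k = 0.04 are conventional assumptions.

```python
import numpy as np

def gauss_1d(sigma, deriv=False):
    """1-D Gaussian (or its first derivative) sampled out to 3*sigma."""
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    return (-x / sigma ** 2) * g if deriv else g

def sep_filter(img, k_row, k_col):
    """Separable filtering: k_row along each row, then k_col along each column."""
    out = np.apply_along_axis(np.convolve, 1, img, k_row, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, k_col, mode='same')

def harris_response(img, sigma_d=1.6, k=0.04):
    """Harris response; sigma_d = 0.8 * sigma_i, as the embodiments prefer."""
    sigma_i = sigma_d / 0.8
    g, dg = gauss_1d(sigma_d), gauss_1d(sigma_d, deriv=True)
    ix = sep_filter(img, dg, g)        # gradient in x (along rows)
    iy = sep_filter(img, g, dg)        # gradient in y (along columns)
    gi = gauss_1d(sigma_i)
    sxx = sep_filter(ix * ix, gi, gi)  # Gaussian-weighted second moment terms
    syy = sep_filter(iy * iy, gi, gi)
    sxy = sep_filter(ix * iy, gi, gi)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```

On a synthetic bright square, the strongest response appears near the square's corners, slightly inside them — consistent with the corner-interior shift the embodiments exploit.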
Since Harris feature extraction is prior art, it is not described further here. It should, however, be noted that the choice of its standard deviations has not been specifically studied in the prior art. The inventors found through study that the algorithm involves two parameters: the standard deviation σ_d of the first-order Gaussian derivative used to compute the image gradients, and the standard deviation σ_i of the Gaussian weighted average applied to the three components of the second moment matrix. If the following rules are observed, the extracted corner positions lie inside the corners, which greatly eases the subsequent region growing:
Neither value should be too small: too small a value leaves the Harris response at the feature point without a clear advantage and makes noise immunity very weak. The values should be chosen according to the actual conditions of the image (noise level and sharpness): the larger the standard deviation, the stronger the resistance to noise and blur, and as the two values increase, the extracted feature positions move towards the corner interiors. When several features fall within the range corresponding to the standard deviation, they interfere with each other and their information merges. It has been verified that values of σ_d and σ_i between 1 and 2 are suitable, preferably 1.6. If the averaging deviation σ_i is smaller than the gradient deviation σ_d, the gradient information in the local range is not gathered comprehensively and feature extraction suffers; the two should therefore satisfy σ_d ≤ σ_i, preferably σ_d = 0.8 σ_i in the embodiments.
The Laplacian of Gaussian (LoG) operator, formed from second-order Gaussian derivatives, and its approximations such as the DoG operator are comparatively well suited to extracting blob points. The standard deviation of the LoG operator determines the diameter of its central well, near whose edge the sign of the template weights changes. If the image contains a feature whose grey-level transition matches this well size, the absolute value of the LoG response there is large and the feature can be extracted; LoG operators with different standard deviations thus detect features of different sizes. This is the basic principle behind multi-scale processing and automatic feature scale selection. Because the LoG operator is computationally heavy, scholars have proposed several approximations that are morphologically close to it but much cheaper, such as the Difference of Gaussians (DoG) operator. Since blob extraction itself is prior art, the embodiments introduce it only for the DoG operator. DoG feature points are extracted as follows:
Filter the image with Gaussians G(σ_i) whose standard deviations σ_i increase in series; subtract the G(σ_i) filtering result from the G(σ_{i+1}) result to obtain several difference-of-Gaussian images; search these images for local extrema in both the spatial and scale domains as candidate feature points; remove candidates lying on edges by the Harris response or the Hessian matrix, obtaining the final feature points. It has been verified that as the standard deviation increases, the corner positions extracted by the DoG operator also move towards the corner interiors, similar to the Harris operator introduced above. Moreover, corners can be extracted at multiple scales, whereas blob-type features can only be extracted at the scale matching their structure size — again showing the approximate scale independence of ideal corners and the scale-selection property of the DoG operator. On this basis the embodiments derive the rules for choosing the scales of the scale space for DoG extraction:
The initial scale should not be too small — about 1.5 to 1.6 is generally suitable, its main function being noise resistance. Adjacent scales should not differ too much, or feature extraction and automatic scale selection suffer. The maximum scale should not be too large: an over-large blob feature usually indicates a sizeable grey-level homogeneous area, which needs a larger description window to carry enough information. The concrete sizes can be chosen flexibly according to the actual conditions; the embodiments place no limit here.
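The scale-selection behaviour described above can be demonstrated with a small sketch: in a DoG stack built over a geometric series of scales, the layer with the strongest response at a blob's centre tracks the blob's size. This is illustrative NumPy code under our own parameter assumptions (σ₀ = 1.2, ratio ≈ 1.26), not the patent's implementation:

```python
import numpy as np

def gauss_blur(img, sigma):
    """Separable Gaussian blur with the kernel cut off at 3*sigma."""
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    out = np.apply_along_axis(np.convolve, 1, img, g, mode='same')
    return np.apply_along_axis(np.convolve, 0, out, g, mode='same')

def dog_stack(img, sigma0=1.2, ratio=1.26, n=8):
    """Difference-of-Gaussian images over a geometric series of scales."""
    sigmas = [sigma0 * ratio ** j for j in range(n)]
    blurred = [gauss_blur(img, s) for s in sigmas]
    dogs = [blurred[j + 1] - blurred[j] for j in range(n - 1)]
    return sigmas, dogs

def best_scale_index(img, cy, cx):
    """Index of the DoG layer with the strongest |response| at (cy, cx)."""
    _, dogs = dog_stack(img)
    return int(np.argmax([abs(d[cy, cx]) for d in dogs]))

def disk_image(size, radius):
    """Bright disk on a dark background, centred in the image."""
    yy, xx = np.mgrid[:size, :size]
    c = size // 2
    return ((yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2).astype(float)
```

A larger disk peaks at a larger scale index than a smaller one, which is exactly the automatic scale selection the text describes.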
Step S201: perform region growing with the feature points extracted in step S200 as seed points, obtaining the foreground regions (support regions).
An initial feature point (a feature point of the original image) only acquires real value with the support of its neighbourhood, and for images deformed beyond the planar affine level a circular window usually cannot cover similar neighbourhoods on different images. The embodiments determine the neighbourhood of a feature point by region growing from it, which reduces the influence of other textures and raises the chance of finding similar neighbourhoods. Feature-point-based region growing has two main ingredients: the size of the region growing window, which delimits the initial region of interest (ROI), and the growing rule, which segments the texture within the ROI. The quality of the rule directly affects the quality of the resulting foreground shape. The embodiments introduce these two parts in turn.
Size of the region growing window:
The starting point of the embodiments is a feature point: neither the overall shape of the object nor the scale difference between the raw images is known, and the anisotropy of scale within an image must also be considered. With only a single window size, obtaining a similar neighbourhood extent is unlikely, so the embodiments give each feature point several scale windows. Regarding the window size, the essential thing is to build a scale space.
Building a scale space involves two operations: constructing an image pyramid, and choosing n (n > 0) different scales on each pyramid level. How many levels to build, what ratio to use between levels, and how to choose the scales on each level all follow one principle: the scale space must be able to accommodate the scale differences between the images to be matched. For example, if the resolution difference between the images is everywhere a factor of two, the minimal scale space can be built in either of two ways: a two-level pyramid with a ratio of two between levels and one scale per level; or a single level (the raw image only) with two scales in a ratio of two. The parameter settings must be adjusted to the data being processed; if the scale space cannot accommodate the scale differences between the images, the probability of successful matching drops greatly — like drawing water from a well whose surface lies 4 m below the mouth with only 2 m of rope: the bucket never reaches the water. If an image is severely compressed in shape because of the shooting angle, suitably magnifying it when building the pyramid can also be considered.
Suppose the scale space is built: the pyramid has k (k > 0) levels, denoted P_i, i ∈ [0, k−1], and n scales are chosen on each level, denoted σ_j, j ∈ [0, n−1]. Initial feature points are then extracted on every pyramid level with the feature point extraction algorithm, and each feature point is grown on the level where it was extracted, using n growing windows, each centred on the point, with sizes ((int)ceil(3σ_j))·2+1, where int rounds a value down to the nearest integer and ceil returns the smallest integer not less than the given expression. To make the effective growing window circular, a circular mask of matching size can be applied to the growing result in each window.
It is worth pointing out that, for feature points extracted by the DoG operator, growing with the n window sizes gives much better results than using only the scale window corresponding to the point's own characteristic scale. This is because, as the projective distortion of the image deepens, the automatic scale selection of the DoG operator deteriorates markedly.
For these reasons, the embodiments preferably give every extracted feature point several scale windows of different sizes, each window being a region centred on the point — for example, circles of different radii drawn around it. The windows of a feature point are built in order of increasing size parameter; since the extracted feature positions differ, the windows (growing windows) of different feature points sit at different places on the image.
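The window-size formula and the circular trimming mask above can be sketched as follows (a small NumPy illustration; the function names are ours, not the patent's):

```python
import numpy as np

def window_size(sigma):
    """Side length ((int)ceil(3*sigma))*2 + 1 of the growing window at scale sigma."""
    return int(np.ceil(3 * sigma)) * 2 + 1

def circular_mask(size):
    """Boolean disk inscribed in a size x size window; foreground pixels
    outside it are removed after rectangular region growing."""
    r = size // 2
    yy, xx = np.mgrid[:size, :size]
    return (yy - r) ** 2 + (xx - r) ** 2 <= r ** 2
```

For instance σ = 1.6 gives an 11-pixel window (ceil(4.8) = 5, 5·2+1 = 11), and the mask keeps the window's centre while discarding its corners.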
Region growing rule:
The growing rule is extremely important because it determines the condition under which growing stops, i.e. the threshold. Threshold selection must be adaptive: the threshold in each growing window (scale window) must be determined according to the actual content of that window, and a single global threshold must not be imposed dogmatically. Automatic threshold selection is a much-researched topic with a sizeable literature. Among the many algorithms, the maximum between-class variance algorithm (Otsu thresholding) works well in the embodiments, so it is preferably adopted to determine the growing threshold.
The design idea of Otsu thresholding is that the quality of a threshold is judged by the separability between the classes it produces, which can be described by the between-class variance: the larger the between-class variance, the better the separability, so Otsu thresholding works by maximising it. The algorithm steps are as follows:
Compute the grey-level histogram of the image and normalise it. To segment the image into k classes, denoted C_i, i ∈ [1, k], determine k−1 thresholds, denoted T_i, i ∈ [1, k−1], with T_{i−1} < T_i. Compute the mean grey level μ_a of the whole image; compute the between-class variance σ_b² under each threshold choice; the threshold at which σ_b² attains its maximum is the threshold sought.
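The steps above can be sketched for the two-class case as follows (a hypothetical NumPy implementation; the patent prescribes the method, not this code):

```python
import numpy as np

def otsu_threshold(pixels, nbins=256, vrange=(0, 256)):
    """Two-class threshold maximising the between-class variance."""
    hist, edges = np.histogram(pixels, bins=nbins, range=vrange)
    p = hist / hist.sum()                      # normalised histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                          # class-0 probability per cut
    mu = np.cumsum(p * centers)                # class-0 cumulative mean mass
    mu_t = mu[-1]                              # mean grey level of the window
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b2 = np.nan_to_num(sigma_b2)         # empty classes contribute 0
    return centers[int(np.argmax(sigma_b2))]
```

Applied to the pixels inside one growing window, this yields the adaptive, per-window threshold the text calls for.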
Using feature-point-based region growing to find the foreground region (support neighbourhood) of a feature point as described above ensures, as far as possible, that the support neighbourhood belongs to the same object and the same plane, and reduces the influence of other textures on solving the affine deformation parameters; in effect the method acts as a content-adaptive window. Choosing small-range (feature-point-based) growing and abandoning large-range segmentation in the style of MSER lowers the demands on the segmentation algorithm and better avoids the harmful effect of inconsistent region extents extracted from multiple images.
Step S202: compute the shape parameters of the foreground regions.
Computing the shape parameters of a foreground region comprises two parts: accumulating the covariance matrix of the foreground shape, and fitting an ellipse from that covariance matrix to obtain the ellipse's major axis, minor axis and rotation direction.
The rotation direction of the ellipse and the ratio of its minor to major axis reflect the anisotropy of the local image. According to the similarity between the ellipse parameters fitted to a feature point's growing results at the different scales, some representative scales can be picked out and the scales whose ellipse parameters resemble a representative scale removed. The covariance matrix of the foreground shape grown at a representative scale is then used to correct the affine deformation of the local image, eliminating its anisotropy, so that the distortion between corresponding local images after correction is reduced to the planar affine level; the subsequent feature description on the corrected image can then be completed with the SIFT algorithm, which handles matching at the planar affine level well. The foreground covariance matrix of a single feature point only rectifies the image locally, but since feature points are distributed all over the image, every part of the image can be rectified.
In this step, the shape parameters of the grown foreground region — the rotation direction of the ellipse corresponding to its covariance matrix, and the minor-to-major axis ratio — are used to select the representative scales of a feature. This very effectively counters the adverse effect that scale anisotropy has on characteristic scale selection when the image deformation is large, and thus accomplishes characteristic scale selection under heavy deformation in a simple way.
The affine deformation correction of a region only needs the covariance matrix of the foreground region shape; the ellipse-fitting parameters are computed solely for selecting representative scales and removing redundant scales. Computing a region's covariance matrix, fitting an ellipse and deriving the ellipse parameters are all well-established prior art; the embodiment of the present invention gives one implementation, and the rest are not described further.
Computing the covariance matrix of the foreground region shape:
After region growing from the feature point, pixels belonging to the foreground region shape P are marked 1 and all other pixels are marked 0; the covariance matrix Σ of the foreground region shape is then calculated as follows.
f(x, y) = \begin{cases} 1, & (x, y) \in P \\ 0, & (x, y) \notin P \end{cases}

m_{10} = \frac{\sum_x \sum_y x \, f(x, y)}{\sum_x \sum_y f(x, y)}, \qquad
m_{01} = \frac{\sum_x \sum_y y \, f(x, y)}{\sum_x \sum_y f(x, y)}

m_{11} = \frac{\sum_x \sum_y (x - m_{10})(y - m_{01}) \, f(x, y)}{\sum_x \sum_y f(x, y)}

m_{20} = \frac{\sum_x \sum_y (x - m_{10})^2 \, f(x, y)}{\sum_x \sum_y f(x, y)}, \qquad
m_{02} = \frac{\sum_x \sum_y (y - m_{01})^2 \, f(x, y)}{\sum_x \sum_y f(x, y)}

\Sigma = \begin{pmatrix} m_{20} & m_{11} \\ m_{11} & m_{02} \end{pmatrix}
where (x, y) are the coordinates of a pixel of the observed planar shape P.
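As a concrete illustration, the central moments above can be accumulated directly from a binary mask. The following is a minimal sketch; the function name `shape_covariance` and the NumPy usage are a hypothetical implementation, not part of the patent text:

```python
import numpy as np

def shape_covariance(mask):
    """Covariance matrix of a binary foreground mask (1 inside P, 0 outside),
    following the central-moment formulas above."""
    ys, xs = np.nonzero(mask)            # coordinates where f(x, y) = 1
    n = xs.size
    m10, m01 = xs.mean(), ys.mean()      # centroid (first moments)
    dx, dy = xs - m10, ys - m01
    m20 = np.sum(dx * dx) / n            # central second moments
    m02 = np.sum(dy * dy) / n
    m11 = np.sum(dx * dy) / n
    return np.array([[m20, m11], [m11, m02]])
```

For a shape elongated along x, the matrix entry m20 dominates m02, which is exactly the anisotropy the later steps exploit.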
Calculating the parameters of the fitted ellipse:
A planar ellipse has the following parameters: the ellipse's centroid coordinates, its semi-major axis a, its semi-minor axis b, and its principal rotation direction θ. The centroid of the foreground region shape is the ellipse's centroid, and the major axis, minor axis and principal rotation direction can be calculated from the eigenvalues λ1, λ2 (λ1 ≥ λ2) of the region shape's covariance matrix and its eigenvector e1, with the following formulas:
\lambda_1 = \frac{m_{20} + m_{02} + \sqrt{(m_{20} - m_{02})^2 + 4 m_{11}^2}}{2}

\lambda_2 = \frac{m_{20} + m_{02} - \sqrt{(m_{20} - m_{02})^2 + 4 m_{11}^2}}{2}

e_1 = \begin{pmatrix} e_{1x} \\ e_{1y} \end{pmatrix}
    = \begin{pmatrix} m_{11} \big/ \sqrt{(\lambda_1 - m_{20})^2 + m_{11}^2} \\ (\lambda_1 - m_{20}) \big/ \sqrt{(\lambda_1 - m_{20})^2 + m_{11}^2} \end{pmatrix}

a = 2\sqrt{\lambda_1}, \qquad b = 2\sqrt{\lambda_2}, \qquad \theta = \operatorname{atan2}(e_{1y}, e_{1x})
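The eigenvalue and eigenvector formulas above can be sketched as follows. `ellipse_params` is a hypothetical helper name; the fallback for the axis-aligned case (m11 = 0), where the eigenvector formula degenerates to 0/0, is an added assumption:

```python
import math
import numpy as np

def ellipse_params(C):
    """Semi-major axis a, semi-minor axis b and rotation direction theta of the
    ellipse fitted from a 2x2 shape covariance matrix C."""
    m20, m11, m02 = C[0, 0], C[0, 1], C[1, 1]
    d = math.sqrt((m20 - m02) ** 2 + 4.0 * m11 ** 2)
    lam1 = (m20 + m02 + d) / 2.0         # larger eigenvalue
    lam2 = (m20 + m02 - d) / 2.0         # smaller eigenvalue
    if m11 != 0.0:
        norm = math.sqrt((lam1 - m20) ** 2 + m11 ** 2)
        e1x, e1y = m11 / norm, (lam1 - m20) / norm
    else:                                # axis-aligned shape: pick the axis directly
        e1x, e1y = (1.0, 0.0) if m20 >= m02 else (0.0, 1.0)
    return 2.0 * math.sqrt(lam1), 2.0 * math.sqrt(lam2), math.atan2(e1y, e1x)
```

For example, a diagonal covariance diag(4, 1) yields a = 4, b = 2, θ = 0, i.e. an ellipse twice as long as it is wide, aligned with the x axis.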
Step S203: select representative scale windows and remove redundant scale windows.
In the embodiment of the present invention, to ensure the completeness of the features, the scale space that is built is redundant to some extent. If the features at all scales are used, this is the so-called multi-scale method (Multi-Scale Method); if the features at some scales are picked out according to a rule, this is a scale selection method (Scale Selection Method). The inventor's study found that when the imaging viewpoints of two images differ greatly and the deformation between them is severe, matching algorithms such as SIFT fail. Analysis shows two main reasons. First, as the deformation between images deepens, the anisotropy of scale on the image becomes more and more severe, so scale selection with the isotropic DoG operator inevitably deteriorates sharply, which directly makes the image range covered by the description inconsistent across the images. Second, as the deformation deepens, describing features through a circular window cannot adapt to the anisotropy of scale on the image, and the gradient distributions in the neighborhoods of corresponding features differ between the images to be matched; in other words, without first performing a local correction of the image, computing a SIFT description is meaningless.
Based on the above findings, the embodiment of the present invention preferably uses the anisotropy of scale itself to perform scale selection. The anisotropy of scale can be expressed by the parameters of the ellipse fitted to the foreground region shape described above — mathematically, the ellipse's principal rotation direction θ and the ratio r of its minor axis to its major axis. The principal rotation direction reflects the direction of maximum scale deformation, and the minor-to-major axis ratio reflects the degree of scale anisotropy; the absolute lengths of the major and minor axes only reflect the scaling of the shape and do not affect the feature's distortion.
When the principal rotation directions of two ellipses and their minor-to-major axis ratios are both close, the two shapes can be considered essentially similar. Therefore, by fitting an ellipse to the foreground region shape obtained by region growing in each scale window and comparing the similarity of these ellipse parameters, scale selection can be carried out. When the ellipse parameters at several neighboring scales are similar, one of them can be retained as the representative of that characteristic scale, and the shape information at that scale is used to correct the local distortion of the image. In the embodiment of the present invention these selected scales are called representative scales (Representative Scales).
In summary, in the embodiment of the present invention, scale selection proceeds as follows. For each extracted initial feature point, seeded region growing is carried out in several scale windows of gradually increasing size. An ellipse is fitted to each resulting foreground region shape and its parameters are calculated. In order of scale-window size, the principal rotation direction and the minor-to-major axis ratio of the ellipses belonging to the same feature point are compared: if, over several adjacent scales, the principal rotation directions differ by no more than 20 degrees and the minor-to-major axis ratios differ by no more than 0.1, the middle one of those scales is selected as a representative scale of the feature. When a subsequent scale's ellipse parameters differ from those of this group beyond the thresholds, a new representative scale is considered to have appeared, so a feature may retain several representative scales. For example, suppose a feature point has 6 scale windows ordered from small to large. Region growing at scales 1, 2 and 3 produces foreground shapes whose fitted ellipses have principal rotation directions differing by no more than 20 degrees and minor-to-major axis ratios differing by no more than 0.1, i.e. similar ellipse parameters. When the ellipse parameters are similar, the affine corrections produced from the corresponding foreground shapes will also be similar and therefore redundant, so representative scales are sought and the unnecessary scales removed: scale 2 becomes one representative scale. Suppose the remaining three scale windows (scales 4, 5, 6) also have mutually similar ellipse parameters, but all of them differ beyond the thresholds from the ellipse of every earlier scale; then scale 5 becomes another representative scale. As a result, this feature point has two representative scales, and the other scales are no longer considered. Each other feature point undergoes the same procedure to select its representative scales. This method applies to both corner-type and blob-type features, because the algorithm's frame of reference is the shape parameters of the feature.
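The grouping idea can be sketched as follows. This is a simplified greedy version that compares each scale to the first scale of the current run; the embodiment itself compares pairwise-adjacent windows and re-checks later scales against each representative, so the helper names and this exact control flow are assumptions for illustration:

```python
import math

ANGLE_TOL = math.radians(20)  # embodiment thresholds: 20 degrees on direction ...
RATIO_TOL = 0.1               # ... and 0.1 on the minor/major axis ratio

def similar(p, q):
    """Ellipses (theta, b_over_a) count as similar when both differences are in tolerance."""
    return abs(p[0] - q[0]) <= ANGLE_TOL and abs(p[1] - q[1]) <= RATIO_TOL

def representative_scales(params):
    """params: per-scale (theta, b/a) pairs ordered from the smallest window to
    the largest. Groups consecutive scales similar to the start of the current
    run and keeps the middle scale of each run as its representative."""
    reps, start = [], 0
    for i in range(1, len(params) + 1):
        if i == len(params) or not similar(params[i], params[start]):
            reps.append((start + i - 1) // 2)   # middle index of the finished run
            start = i
    return reps
```

On the six-scale example in the text — three similar ellipses followed by three more that are mutually similar but differ from the first group — this returns indices 1 and 4, i.e. the 2nd and 5th scales.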
Step S204: apply affine deformation correction to the image.
In the embodiment of the present invention, after the foreground shape covariance matrices have been calculated and the features' representative scales selected, the covariance matrix corresponding to each representative scale is used to apply a local affine deformation correction to the image. Because the covariance matrix is positive definite, a Cholesky decomposition can be performed, and the shape coordinates are transformed by the inverse of the resulting factor matrix; only a rotational deformation then remains between the two transformed results, thereby accomplishing the affine deformation correction of the image.
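A minimal sketch of this normalization, assuming the standard lower-triangular Cholesky convention C = L Lᵀ (the function name is hypothetical): transforming local coordinates by L⁻¹ turns the shape's covariance into the identity, so two corrected patches of the same scene element can differ by a rotation only.

```python
import numpy as np

def normalization_transform(C):
    """Return the coordinate transform A = L^{-1} with C = L L^T (Cholesky).
    Applying A to the local shape coordinates makes the shape's covariance
    the identity matrix, leaving only a residual rotation between patches."""
    L = np.linalg.cholesky(C)   # valid because C is symmetric positive definite
    return np.linalg.inv(L)
```

The defining property A C Aᵀ = I follows directly from A = L⁻¹ and C = L Lᵀ.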
Step S205: extract feature points on the corrected image and describe them.
After affine deformation correction, the similarity of the images near corresponding features improves greatly, which makes completing the subsequent image matching work convenient. Because the coordinate transformation between the corrected image and the corresponding raw image is determined, it suffices to extract features on the corrected image and describe them there. Many existing algorithms can perform this step; the embodiment of the present invention preferably uses the SIFT algorithm, whose details can be found in the available literature and are not described here. What must be introduced here is a small but very important issue in feature description: how to choose a proper size for the feature description window (also called the match window). Whether correlation coefficients are used or feature vectors formed by a SIFT-like description method, the choice of description window size is involved. Too small a window contains too little image information, describes the feature too weakly and easily causes more mismatches; too large a window is easily affected by image deformation, lowering the similarity between corresponding features and likewise reducing the matching rate. Selecting a suitable description window size is therefore very important for raising the matching success rate.
The size of the feature description window is usually related to the scale of the feature. For example, in the SIFT algorithm, if the scale of a feature point is s, the description window size w of that feature point can be chosen as w = (((int)ceil(s*6)) * 4 + 1), where (int) truncates a value to the nearest integer and ceil returns the smallest integer greater than or equal to its argument. Studying how the description window size affects matching therefore has important reference value for how the scale space should be built.
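The window formula can be written out directly; `sift_window_size` is a hypothetical helper name for the expression given in the text:

```python
import math

def sift_window_size(s):
    """Descriptor window size from the text: w = (((int)ceil(s*6)) * 4 + 1)."""
    return int(math.ceil(s * 6)) * 4 + 1
```

For instance, a characteristic scale of 2.5 gives a 61-pixel window, close to the roughly 60-pixel size the embodiment found appropriate for SIFT description.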
The embodiment of the present invention used several stereo pairs with accurate interior and exterior orientation elements and terrain models to study the effect of description window size on matching, and found that for the SIFT description method a window size near 60 pixels is most appropriate. From the relation between characteristic scale and description window size, a reasonable scale range when building the scale space is between 1.5 and 6.5. It was also found that when image deformation is small, enlarging the description window degrades match-point accuracy only slightly, whereas when image deformation is large the accuracy declines markedly faster.
Summary of the invention
The object of the present invention is to provide a feature extraction method and device, so as to remedy the unsatisfactory image matching results of the prior art when viewing angles differ greatly.

To achieve this goal, the technical solution adopted by the present invention is as follows:
In a first aspect, an embodiment of the present invention provides a feature extraction method, applied to a feature extraction device, the method comprising:

extracting feature points of a raw image;

assigning each feature point two or more scale windows of different sizes, a scale window being a region centered on that feature point; in each scale window, performing region growing with the feature point at the window's center as the seed point, obtaining two or more foreground regions for each feature point, where the number of foreground regions of each feature point equals the number of scale windows assigned to it;

calculating the covariance matrix of each foreground region;

applying affine deformation correction to the raw image according to the covariance matrices of the foreground regions;

extracting corrected feature points on the corrected image, and describing the corrected feature points.

Here, a corrected feature point is a feature point extracted on the corrected image. It is identical in nature to a feature point of the raw image; the only difference is the extraction target: corrected feature points are extracted from the corrected image, while feature points are extracted from the raw image.
With reference to the first aspect, in a first possible implementation of the first aspect, applying affine deformation correction to the raw image according to the covariance matrices of the foreground regions comprises:

performing ellipse fitting according to the covariance matrix of each foreground region, obtaining the ellipse parameters of each foreground region;

for each feature point in turn, among the two or more scale windows of different sizes centered on that feature point, comparing the ellipse parameters of the foreground regions corresponding to pairwise-adjacent scale windows and judging whether the difference of the ellipse parameters is greater than a predetermined threshold; if not, considering the ellipse parameters of the foreground regions corresponding to those pairwise-adjacent scale windows similar, and choosing one scale window from those with similar ellipse parameters as a first representative scale window;

comparing the ellipse parameters of all remaining scale windows having the same center as the first representative scale window with the ellipse parameters of the first representative scale window, and if an ellipse-parameter difference is greater than the predetermined threshold, taking the scale window whose ellipse parameters exceed the threshold as a second representative scale window;

applying affine deformation correction to the raw image according to the covariance matrices of the foreground regions corresponding to the first representative scale window and the second representative scale window.

In the above, a feature point may have one or more representative scales. For example, a feature point may have only a first representative scale window, or only second representative scale windows; the latter refers to the case where no two of the feature point's scale windows are similar.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the ellipse parameters comprise a major axis, a minor axis and a principal rotation direction, and comparing the ellipse parameters of the foreground regions corresponding to pairwise-adjacent scale windows of each feature point and judging whether the difference of the ellipse parameters is greater than a predetermined threshold comprises:

comparing, over the two or more scale windows with the same center taken in size order (from large to small, or from small to large), the difference of the principal rotation directions and the difference of the minor-to-major axis ratios in the ellipse parameters corresponding to pairwise-adjacent scale windows;

judging whether the difference of the principal rotation directions of the ellipse parameters corresponding to pairwise-adjacent scale windows is greater than the predetermined threshold, and whether the difference of the minor-to-major axis ratios is greater than the predetermined threshold.

Comparing the ellipse parameters of all remaining scale windows having the same center as the first representative scale window with the ellipse parameters of the first representative scale window comprises:

judging whether, between the ellipse parameters of each remaining scale window with the same center as the first representative scale window and the ellipse parameters of the first representative scale window, the difference of the principal rotation directions is greater than the predetermined threshold and the difference of the minor-to-major axis ratios is greater than the predetermined threshold.

With reference to the second possible implementation of the first aspect, in a third possible implementation, the predetermined threshold is 20 degrees for the difference of the principal rotation directions and 0.1 for the difference of the minor-to-major axis ratios; the ellipse-parameter difference being greater than the predetermined threshold means the difference of the principal rotation directions is greater than 20 degrees, or the difference of the minor-to-major axis ratios is greater than 0.1.
With reference to the first aspect, in a second possible implementation of the first aspect, the growth threshold used when performing region growing with the feature point at the center of the scale window as the seed point is obtained by the maximum between-class variance algorithm (Otsu thresholding).
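A sketch of this combination — an Otsu threshold driving seeded, window-limited region growing — is given below. The patent does not specify exactly how the Otsu value parameterizes the growth criterion; using it as the allowed gray-value difference from the seed is an assumption for illustration, as are the function names:

```python
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Otsu's maximum between-class variance threshold on an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0        # class means
        m1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2              # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def grow_region(gray, seed, window, thresh):
    """4-connected growth from `seed`, limited to a square window of half-size
    `window`, accepting pixels whose gray value is within `thresh` of the seed."""
    h, w = gray.shape
    sy, sx = seed
    seed_val = int(gray[sy, sx])
    mask = np.zeros(gray.shape, dtype=bool)
    mask[sy, sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and abs(ny - sy) <= window and abs(nx - sx) <= window
                    and not mask[ny, nx]
                    and abs(int(gray[ny, nx]) - seed_val) < thresh):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

On a synthetic patch with a dark 3×3 square on a bright background, growing from the square's center recovers exactly the 9 foreground pixels.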
With reference to the first aspect, in a third possible implementation of the first aspect, extracting feature points on the corrected image and describing them comprises: using the scale-invariant feature transform (SIFT) algorithm to extract feature points on the corrected image and perform feature description.

With reference to the third possible implementation of the first aspect, in a further possible implementation, in the step of using the SIFT algorithm to extract feature points on the corrected image and perform feature description, the feature description window size is 55 to 65 pixels.

With reference to the first aspect, in a fourth possible implementation of the first aspect, extracting the feature points of the raw image comprises:
calculating the gradients of the image in the x and y directions with first-order Gaussian derivatives of standard deviation σd; calculating the components Ix², IxIy and Iy² of the image's grayscale second moment matrix (Second Moment Matrix) with a Gaussian function of standard deviation σi; and using the Harris feature point extraction algorithm to extract the feature points of the raw image;

where the standard deviation σd is less than the standard deviation σi, and both σd and σi take values between 1 and 2.
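The two-scale Harris computation described here can be sketched with SciPy's Gaussian filtering; the function name and the corner-response constant k = 0.04 are conventional assumptions, not values given in the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_i=1.5, k=0.04):
    """Harris corner response with a derivative scale sigma_d and an
    integration scale sigma_i (sigma_d < sigma_i, both between 1 and 2)."""
    img = img.astype(float)
    # gradients from first-order Gaussian derivatives of std sigma_d
    ix = gaussian_filter(img, sigma_d, order=(0, 1))   # d/dx (along columns)
    iy = gaussian_filter(img, sigma_d, order=(1, 0))   # d/dy (along rows)
    # second-moment-matrix components smoothed with a Gaussian of std sigma_i
    sxx = gaussian_filter(ix * ix, sigma_i)
    syy = gaussian_filter(iy * iy, sigma_i)
    sxy = gaussian_filter(ix * iy, sigma_i)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2
```

On a synthetic bright square, the response is strongly positive at the square's corner and near zero inside the flat region, which is the behavior feature point extraction relies on.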
The scale window is a circular region centered on each feature point.
In a second aspect, an embodiment of the present invention provides a feature extraction device, comprising:

a first extraction unit, for extracting the feature points of a raw image;

a region growing unit, for assigning each feature point two or more scale windows of different sizes, a scale window being a region centered on that feature point, and, in each scale window, performing region growing with the feature point at the window's center as the seed point, obtaining two or more foreground regions for each feature point, where the number of foreground regions of each feature point equals the number of scale windows assigned to it;

a matrix calculation unit, for calculating the covariance matrix of each foreground region;

a correction processing unit, for applying affine deformation correction to the raw image according to the covariance matrices of the foreground regions;

a second extraction unit, for extracting corrected feature points on the corrected image;

a description unit, for describing the corrected feature points extracted by the second extraction unit.
The technical effects achieved by the present invention:

The embodiment of the present invention chooses point features as the basic element of feature extraction. Point features are the most elementary kind of image feature: they place the fewest demands on the scene type of the image, are widely applicable, and allow the number of features, the distribution of their positions, their locality and their repeatability to be ensured fairly well.

The embodiment of the present invention uses region growing from the feature point to find the feature point's foreground region (support neighborhood). This ensures, as far as possible, that the support neighborhood of the feature point belongs to the same object and lies in the same plane, reduces the influence of other textures on solving the affine deformation parameters, and acts as a window that adapts to the image content. Choosing this small-range region growing, and abandoning the large-scale region segmentation of region-based feature extraction such as the Maximally Stable Extremal Regions (MSER) algorithm, lowers the demands placed on the segmentation algorithm and better avoids the adverse effect caused by inconsistent region extents when regions are extracted from multiple images, giving a better result.

In the embodiment of the present invention, an ellipse is fitted to the shape parameters of the foreground region obtained by region growing from the feature point, and whether the difference between the ellipse parameters corresponding to the covariance matrix of this shape exceeds a predetermined threshold (i.e., their similarity) is used to select the feature's representative scale windows. This very effectively counters the adverse effect that scale anisotropy on the image has on characteristic-scale selection when image deformation is large, so that characteristic-scale selection under large deformation is accomplished in a simple way, effectively solving the problems of the prior art.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from the specification, or be understood by implementing the embodiments of the present invention. The objects and other advantages of the embodiments of the present invention are realized and obtained through the written specification, the claims and the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort. From the drawings, the above and other objects, features and advantages of the present invention will become clearer.
Fig. 1 is a first schematic flowchart of Embodiment 1 of the present invention;
Fig. 2 is a second schematic flowchart of Embodiment 1 of the present invention;
Fig. 3 is a first structural diagram of Embodiment 2 of the present invention;
Fig. 4 is a second structural diagram of Embodiment 2 of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Embodiment 1
The goal of image matching is to locate, in the images to be matched, similarities satisfying certain objective constraints; for many tasks that need to obtain information from images, image matching is a key and fundamental step — for example, recovering three-dimensional scenes from stereo pairs, image retrieval, change detection, and target identification in security facilities. Because different sensors image differently, the required image matching algorithms also differ. The technical solution in the embodiment of the present invention mainly applies to the optical imagery common to photogrammetry and computer vision. In this field, image matching can roughly be divided into two classes: one has no or only a small amount of prior information and relatively few match points, and is generally used to establish the initial relative relationship between images; the other knows prior information (such as the imaging model and parameters of the image) and has very dense match points, generally used to build fine three-dimensional models of scenes. The industry usually calls the former class aerotriangulation matching and the latter dense matching. The technical content of the embodiment of the present invention belongs to the former class.
The input of an image matching algorithm is imagery, generally represented as a digitized two-dimensional array whose units are called pixels. Each pixel of a grayscale image has a single value; each pixel of a color image has three values, the quantized values of the R, G and B bands. Besides the gray value of a pixel, the adjacency relations between pixels reflect, to a certain extent, the adjacency relations of the scene, although the final image position of scene objects is also affected by the shooting orientation. In sum, the only information available to the algorithm is the gray values of pixels and the adjacency relations between pixels. Algorithms concerned with pixel gray values are the image's tonal processing algorithms; algorithms concerned with pixel adjacency are the image's geometric processing algorithms. An image matching algorithm works exactly in this direction: geometrically making corresponding pixels of the same scene, imaged differently, as consistent as possible in position, and tonally making corresponding pixels as consistent as possible in tone. Comparison then becomes much easier.
The technical emphasis of the embodiment of the present invention is the geometric processing of imagery: to achieve robust and reliable image matching, the matching algorithm must cope with the geometric deformation existing between images. Geometric deformation is currently divided into three major classes: translation and rotation, isotropic scaling, and anisotropic scaling. These correspond respectively to geometric transformations of gradually increasing complexity: the plane Euclidean transform, the plane similarity transform and the plane affine transform. Although in practice higher-level projective deformations often exist between images, matching usually uses local features, so describing the local geometric deformation by an affine transform is sufficient.
Feature is a key concept in image matching; the changes in gray value and shape of the scene define its features. On the image, all these changes are embodied in the gray-value changes of neighboring pixels. The existence of features is the precondition of image matching: if an entire image has no gray-value change, it carries no information and cannot be matched, and if some region of the image has no gray-value change, the pixels in that region are very hard to match accurately. A key part of image matching is therefore feature extraction. In mathematical form, image features come in three kinds — points, lines and regions — and different applications prefer different types: road extraction prefers line features, remote-sensing image classification prefers region features. Point features are the most elementary of all features and therefore have the widest range of application. Unlike road extraction, the features in image matching need not correspond strictly to practically meaningful scene features; they only need to help the matching. Usually, a feature meeting the following requirements serves image matching well:
Repeatability: in the overlap area between different images of the same scene, the feature can be extracted by the same algorithm as often as possible. That a feature is "repeatable" is the prerequisite for it being "matchable".

Salience: the neighborhood of the feature contains more gray-value change and more information, so it is relatively easy to distinguish from other features.

Localizability: the image-plane position of the feature can be marked accurately and reliably, without ambiguity.

Sufficient quantity: the number of features must meet the needs of the application, and it should be fairly easy to adjust the extracted quantity through parameters; increasing the number of extracted features also improves their repeatability.

Good distribution: when the scene permits, the image features should be distributed fairly evenly across the image.

Locality: the image range spanned by a feature should not be too large, which eases the geometric deformation processing in the matching algorithm.

In addition, if features can be extracted with a fairly simple and fast algorithm, the practicality of the matching algorithm increases greatly.
Feature description is also an indispensable part of an image matching algorithm. After features are extracted, some method must be used to describe them quantitatively, forming the feature vector representing each feature, before they can be used in image matching. The most direct and simplest descriptive quantity is the pixel gray value itself. But tonal differences and noise exist between different images, so using gray values directly is very unreliable; even with tonal differences removed, the information of a single pixel is far from enough for feature description and does not meet the salience requirement above, so neighboring pixels within a certain range must be included for support.
Determining the neighborhood involves two aspects: its size and its shape. If the two images have the same resolution and similar shooting orientations — that is, the geometric deformation between them is at the level of a planar Euclidean transformation — then the shape of the neighborhood matters little and its size is the main consideration; the selection criterion is the balance between neighborhood support (feature distinctiveness) and computational cost. If the image resolutions differ, so that the deformation between images is at the level of a similarity transformation, then besides the neighborhood size, the resolution ratio between the images must be determined, so that the two support regions cover the same ground area. For example, matching a single leaf against a whole tree has a very low success rate; this is the scale problem in image matching. If the shooting orientations also differ considerably, the geometric deformation between images reaches the level of an affine deformation or even higher, and the deformation between the two images becomes obvious; if the scene is three-dimensional, occlusion also occurs, and determining the shape of the neighborhood becomes very important.

Based on this analysis, the inventors found that once suitable neighborhood pixels are found, the local affine deformation of the image can be corrected from those pixels, reducing the geometric deformation between images to the level of a similarity transformation; solving the scale problem then reduces it further to a Euclidean transformation, and solving the rotation between images completes the handling of geometric deformation. Geometric deformation handling and feature extraction are interdependent, so the former is usually regarded as part of the latter.
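The correction idea described above — using the covariance of a well-chosen pixel set to undo local affine deformation — corresponds to the standard shape-normalization trick of transforming the region by the inverse square root of its covariance matrix, which maps the fitted ellipse to a circle and reduces the residual deformation to a similarity. A minimal NumPy sketch (function names are illustrative, not the patent's):

```python
import numpy as np

def region_covariance(points):
    """2x2 covariance of foreground pixel coordinates (an N x 2 array)."""
    centered = points - points.mean(axis=0)
    return centered.T @ centered / len(points)

def whitening_transform(cov):
    """Inverse square root of the covariance matrix: it maps the fitted
    ellipse to a circle, removing the local anisotropy (the affine part)."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

# Synthetic foreground: an elongated, rotated point cloud (ellipse axes ~5:1).
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 2)) * np.array([5.0, 1.0])
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T

T = whitening_transform(region_covariance(pts))
normalized = pts @ T.T
cov_after = region_covariance(normalized)   # ~ identity: anisotropy removed
```

After whitening, only an unknown rotation and global scale remain, which is exactly the reduction to similarity (and then Euclidean) level described above.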
Embodiment 2
As shown in Figure 3, an embodiment of the invention discloses a feature extraction device, comprising: a first extraction unit 100, for extracting feature points of an original image; a region growing unit 101, for assigning each feature point two or more scaled windows of different sizes, each scaled window being a region centered on that feature point, and for performing region growing within each scaled window using the feature point at its center as the seed point, thereby obtaining two or more foreground regions for each feature point, the number of foreground regions of a feature point being equal to the number of scaled windows assigned to it; a matrix calculation unit 102, for calculating the covariance matrix of each foreground region; a correction processing unit 103, for correcting the affine deformation of the image according to the covariance matrices of the foreground regions; a second extraction unit 104, for extracting feature points from the corrected image; and a description unit 105, for describing the feature points extracted by the second extraction unit 104.
The matrix calculation unit 102 is specifically configured to perform ellipse fitting according to the covariance matrix of each foreground region, obtaining the ellipse parameters of that region. For each feature point, among the two or more scaled windows of different sizes centered on it, the ellipse parameters of the foreground regions corresponding to pairwise-adjacent scaled windows are compared, and it is judged whether the ellipse parameter difference is greater than a predetermined threshold. If not, the ellipse parameters of the foreground regions of the adjacent scaled windows are considered similar, and the scaled window corresponding to one set of the similar ellipse parameters is chosen as a representative scaled window. The ellipse parameters of the remaining scaled windows sharing the same center as the representative scaled window are then compared with those of the representative scaled window; if the ellipse parameter difference is greater than the predetermined threshold, the corresponding scaled window is taken as a new representative scaled window, while the original representative scaled window is retained. The correction processing unit 103 corrects the affine deformation of the image according to the covariance matrices of the foreground regions of the representative scaled windows.
In the above, the ellipse parameters comprise the major axis, the minor axis, and the principal rotation direction. The matrix calculation unit 102 is specifically configured to compare, for the two or more scaled windows sharing the same center, taken in size order (from largest to smallest or from smallest to largest), the difference in principal rotation direction and the difference in minor-to-major axis ratio between the ellipse parameters corresponding to pairwise-adjacent scaled windows; to judge whether the difference in principal rotation direction between adjacent windows is greater than the predetermined threshold and whether the difference in minor-to-major axis ratio is greater than the predetermined threshold; and to judge likewise whether, between the ellipse parameters of the remaining same-centered scaled windows and those of the first representative scaled window, the difference in principal rotation direction is greater than the predetermined threshold and whether the difference in minor-to-major axis ratio is greater than the predetermined threshold.
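The ellipse parameters named here (major axis, minor axis, principal rotation direction) follow from the eigendecomposition of the 2×2 covariance matrix, and the adjacency comparison can then be expressed directly. The sketch below is an illustration, not the patent's exact procedure; the 20-degree and 0.1 thresholds are the values given in claim 4:

```python
import numpy as np

def make_cov(major, minor, angle_deg):
    """Covariance whose fitted ellipse has the given axes and orientation."""
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ np.diag([major ** 2, minor ** 2]) @ R.T

def ellipse_params(cov):
    """(major, minor, angle in degrees) from a 2x2 covariance matrix:
    axis lengths are the square roots of the eigenvalues; the principal
    rotation direction is that of the major eigenvector (mod 180)."""
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    minor, major = np.sqrt(vals)
    vx, vy = vecs[:, 1]                       # eigenvector of the larger eigenvalue
    return major, minor, np.degrees(np.arctan2(vy, vx)) % 180.0

def params_similar(p1, p2, angle_thresh=20.0, ratio_thresh=0.1):
    """Adjacency test: two ellipses are similar if their principal directions
    differ by at most angle_thresh degrees AND their minor/major axis ratios
    differ by at most ratio_thresh (threshold values from claim 4)."""
    d_angle = abs(p1[2] - p2[2])
    d_angle = min(d_angle, 180.0 - d_angle)   # directions are modulo 180 degrees
    d_ratio = abs(p1[1] / p1[0] - p2[1] / p2[0])
    return d_angle <= angle_thresh and d_ratio <= ratio_thresh

p1 = ellipse_params(make_cov(5.0, 2.0, 30.0))
p2 = ellipse_params(make_cov(5.0, 2.1, 40.0))  # close direction and ratio
p3 = ellipse_params(make_cov(5.0, 2.0, 80.0))  # direction differs by 50 degrees
# params_similar(p1, p2) -> True; params_similar(p1, p3) -> False
```

Using the axis ratio rather than the raw axis lengths makes the comparison independent of window size, which is what allows windows of different scales to be compared at all.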
The region growing unit 101 obtains the growing threshold, used when performing region growing with the feature point at the center of the scaled window as the seed point, by the maximum between-class variance algorithm (Otsu thresholding). Feature points are extracted from the corrected image using the scale-invariant feature transform (SIFT) algorithm, and feature description is then carried out.
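Otsu's maximum between-class variance threshold and seeded region growing can both be written compactly. The sketch below is illustrative: it grows over a whole toy image rather than restricting growth to a scaled window as the patent does, and a pixel joins the region when it falls on the same side of the Otsu threshold as the seed:

```python
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Threshold maximizing the between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                        # pixel count below each threshold
    cum_mean = np.cumsum(hist * np.arange(256))  # cumulative gray-level mass
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

def grow_region(gray, seed, thresh):
    """4-connected region growing from a seed point: a neighbor joins the
    foreground if it lies on the same side of the threshold as the seed."""
    h, w = gray.shape
    side = gray[seed] >= thresh
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and (gray[ny, nx] >= thresh) == side:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy image: a bright 4x4 square on a dark background, seed at its center.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
foreground = grow_region(img, (5, 5), t)   # selects exactly the 16 bright pixels
```

In the patent's pipeline the mask produced this way (one per scaled window) is the foreground region whose covariance matrix feeds the affine correction.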
The description unit 105 describes the corrected feature points using a description window of 55 to 65 pixels in size.
The first extraction unit 100 is specifically configured to calculate the gradients of the image in the x and y directions using first-order Gaussian derivatives with standard deviation σd; to calculate the components Ix², IxIy and Iy² of the image gray-level second moment matrix using a Gaussian function with standard deviation σi; and to extract the feature points of the original image using the Harris feature point extraction algorithm. The standard deviation σd is smaller than the standard deviation σi, and both take values in the range 1 to 2.
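This is the scale-adapted form of the Harris detector: a differentiation scale σd for the Gaussian-derivative gradients and a larger integration scale σi for smoothing the second-moment-matrix entries. A NumPy-only sketch (the corner-response constant k = 0.06 and the 3σ kernel truncation are conventional choices, not from the patent):

```python
import numpy as np

def gauss_kernel(sigma):
    """1-D Gaussian and its sample positions, truncated at about 3 sigma."""
    r = int(3 * sigma + 0.5)
    x = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return x, g / g.sum()

def filt(img, k, axis):
    """Separable 1-D correlation along one axis with reflect padding."""
    pad = len(k) // 2
    widths = [(pad, pad) if a == axis else (0, 0) for a in (0, 1)]
    padded = np.pad(img, widths, mode='reflect')
    return np.apply_along_axis(lambda m: np.correlate(m, k, mode='valid'), axis, padded)

def harris_response(img, sigma_d=1.0, sigma_i=1.5, k=0.06):
    """Harris response with differentiation scale sigma_d (< sigma_i) for the
    first-order Gaussian-derivative gradients, and integration scale sigma_i
    for smoothing the second-moment-matrix entries Ix^2, IxIy, Iy^2."""
    x, g = gauss_kernel(sigma_d)
    dg = (x / sigma_d ** 2) * g          # first-order Gaussian derivative (correlation form)
    _, gi = gauss_kernel(sigma_i)
    Ix = filt(filt(img, dg, axis=1), g, axis=0)   # d/dx, then smooth in y
    Iy = filt(filt(img, g, axis=1), dg, axis=0)   # d/dy, then smooth in x
    A = filt(filt(Ix * Ix, gi, axis=1), gi, axis=0)
    B = filt(filt(Ix * Iy, gi, axis=1), gi, axis=0)
    C = filt(filt(Iy * Iy, gi, axis=1), gi, axis=0)
    return A * C - B * B - k * (A + C) ** 2

# Toy check: a bright square on black -- the response peaks near the corners.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

Feature points are then the local maxima of R above a threshold; the example uses σd = 1 and σi = 1.5, consistent with the patent's constraint σd < σi with both in 1 to 2.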
As shown in Figure 4, an embodiment of the present invention further provides a feature extraction device, comprising a processor 400, a memory 404, a bus 402, and a communication interface 403; the processor 400, the communication interface 403, and the memory 404 are connected via the bus 402.
The memory 404 is configured to store a program 401.
The processor 400 is configured to execute the program 401 in the memory 404, and receives a data stream through the communication interface 403.
In a specific implementation, the program 401 may comprise program code, which includes computer operation instructions, algorithms, and the like.
The processor 400 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention.
As shown in Figure 3, the program 401 may comprise: a first extraction unit 100, for extracting feature points of an original image; a region growing unit 101, for assigning each feature point two or more scaled windows of different sizes, each scaled window being a region centered on that feature point, and for performing region growing within each scaled window using the feature point at its center as the seed point, thereby obtaining two or more foreground regions for each feature point, the number of foreground regions of a feature point being equal to the number of scaled windows assigned to it; a matrix calculation unit 102, for calculating the covariance matrix of each foreground region; a correction processing unit 103, for correcting the affine deformation of the image according to the covariance matrices of the foreground regions; a second extraction unit 104, for extracting feature points from the corrected image; and a description unit 105, for describing the feature points extracted by the second extraction unit 104.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes and principles of the device described above may refer to the corresponding processes and principles in the foregoing method embodiments and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Moreover, the connections shown or discussed may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more units may be integrated into one unit.
If the functions are implemented as software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions that cause a device to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a mobile phone, a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only preferred embodiments of the present invention and do not limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (10)

1. A feature extraction method, characterized in that it is applied to a feature extraction device and comprises:
extracting feature points of an original image;
assigning each feature point two or more scaled windows of different sizes, each scaled window being a region centered on that feature point; performing region growing within each scaled window using the feature point at its center as the seed point, to obtain two or more foreground regions for each feature point, the number of foreground regions of each feature point being equal to the number of scaled windows assigned to that feature point;
calculating the covariance matrix of each foreground region;
correcting the affine deformation of the original image according to the covariance matrices of the foreground regions;
extracting corrected feature points from the corrected image, and describing the corrected feature points.
2. The feature extraction method according to claim 1, characterized in that correcting the affine deformation of the original image according to the covariance matrices of the foreground regions comprises:
performing ellipse fitting according to the covariance matrix of each foreground region, to obtain the ellipse parameters of each foreground region;
for each feature point, comparing the ellipse parameters of the foreground regions corresponding to pairwise-adjacent scaled windows among the two or more scaled windows of different sizes centered on that feature point, and judging whether the ellipse parameter difference is greater than a predetermined threshold; if not, considering the ellipse parameters of the foreground regions of the pairwise-adjacent scaled windows to be similar, and choosing from the similar ellipse parameters the scaled window corresponding to one set of ellipse parameters as a first representative scaled window;
comparing the ellipse parameters of the remaining scaled windows having the same center as the first representative scaled window with the ellipse parameters of the first representative scaled window; if the ellipse parameter difference is greater than the predetermined threshold, taking the scaled window whose ellipse parameter difference is greater than the predetermined threshold as a second representative scaled window;
correcting the affine deformation of the original image according to the covariance matrices of the foreground regions corresponding to the first representative scaled window and the second representative scaled window.
3. The feature extraction method according to claim 2, characterized in that the ellipse parameters comprise a major axis, a minor axis and a principal rotation direction, and that comparing the ellipse parameters of the foreground regions corresponding to the pairwise-adjacent scaled windows of each feature point and judging whether the ellipse parameter difference is greater than the predetermined threshold comprises:
comparing, for the two or more scaled windows sharing the same center, taken in size order (from largest to smallest or from smallest to largest), the difference in principal rotation direction and the difference in minor-to-major axis ratio between the ellipse parameters corresponding to pairwise-adjacent scaled windows;
judging whether the difference in principal rotation direction between the ellipse parameters corresponding to pairwise-adjacent scaled windows is greater than the predetermined threshold, and whether the difference in minor-to-major axis ratio is greater than the predetermined threshold;
and that comparing the ellipse parameters of the remaining scaled windows having the same center as the first representative scaled window with the ellipse parameters of the first representative scaled window comprises:
judging whether, between the ellipse parameters of the remaining scaled windows having the same center as the first representative scaled window and the ellipse parameters of the first representative scaled window, the difference in principal rotation direction is greater than the predetermined threshold and whether the difference in minor-to-major axis ratio is greater than the predetermined threshold.
4. The feature extraction method according to claim 3, characterized in that the predetermined threshold is 20 degrees for the difference in principal rotation direction and 0.1 for the difference in minor-to-major axis ratio, and that the ellipse parameter difference being greater than the predetermined threshold means that the difference in principal rotation direction is greater than 20 degrees or the difference in minor-to-major axis ratio is greater than 0.1.
5. The feature extraction method according to claim 1, characterized in that the growing threshold used when performing region growing with the feature point at the center of the scaled window as the seed point is obtained by the maximum between-class variance algorithm (Otsu thresholding).
6. The feature extraction method according to claim 1, characterized in that extracting feature points from the corrected image and describing them comprises: extracting feature points from the corrected image using the scale-invariant feature transform (SIFT) algorithm, and performing feature description.
7. The feature extraction method according to claim 6, characterized in that, in extracting the corrected feature points from the corrected image using the SIFT algorithm and describing them, the description window for the corrected feature points is 55 to 65 pixels in size.
8. The feature extraction method according to claim 1, characterized in that extracting the feature points of the original image comprises:
calculating the gradients of the image in the x and y directions using first-order Gaussian derivatives with standard deviation σd; calculating the components Ix², IxIy and Iy² of the image gray-level second moment matrix using a Gaussian function with standard deviation σi; and extracting the feature points of the original image using the Harris feature point extraction algorithm;
wherein the standard deviation σd is smaller than the standard deviation σi, and both σd and σi take values in the range 1 to 2.
9. The feature extraction method according to any one of claims 1 to 8, characterized in that the scaled window is a circular region centered on each feature point.
10. A feature extraction device, characterized by comprising:
a first extraction unit, for extracting feature points of an original image;
a region growing unit, for assigning each feature point two or more scaled windows of different sizes, each scaled window being a region centered on that feature point, and for performing region growing within each scaled window using the feature point at its center as the seed point, to obtain two or more foreground regions for each feature point, the number of foreground regions of each feature point being equal to the number of scaled windows assigned to that feature point;
a matrix calculation unit, for calculating the covariance matrix of each foreground region;
a correction processing unit, for correcting the affine deformation of the original image according to the covariance matrices of the foreground regions;
a second extraction unit, for extracting corrected feature points from the corrected image;
a description unit, for describing the corrected feature points extracted by the second extraction unit.
CN201410479118.7A 2014-09-18 2014-09-18 Feature extracting method and device Expired - Fee Related CN104268550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410479118.7A CN104268550B (en) 2014-09-18 2014-09-18 Feature extracting method and device


Publications (2)

Publication Number Publication Date
CN104268550A true CN104268550A (en) 2015-01-07
CN104268550B CN104268550B (en) 2017-08-25

Family

ID=52160070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410479118.7A Expired - Fee Related CN104268550B (en) 2014-09-18 2014-09-18 Feature extracting method and device

Country Status (1)

Country Link
CN (1) CN104268550B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101915913A (en) * 2010-07-30 2010-12-15 中交第二公路勘察设计研究院有限公司 Steady automatic matching method for high-resolution satellite image connecting points
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
N.A. Mat-Isa et al., "Seeded Region Growing Features Extraction Algorithm; Its Potential Use in Improving Screening for Cervical Cancer", International Journal of the Computer, the Internet and Management *
Yanwei Pang et al., "Fully affine invariant SURF for image matching", Neurocomputing *
Xie Ping, "Close-range image matching based on Harris corners and SIFT features", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021886A (en) * 2017-12-04 2018-05-11 西南交通大学 A kind of unmanned plane repeats texture image part remarkable characteristic matching process
CN108021886B (en) * 2017-12-04 2021-09-14 西南交通大学 Method for matching local significant feature points of repetitive texture image of unmanned aerial vehicle
CN108830842A (en) * 2018-06-04 2018-11-16 哈尔滨工程大学 A kind of medical image processing method based on Corner Detection
CN108830842B (en) * 2018-06-04 2022-01-07 哈尔滨工程大学 Medical image processing method based on angular point detection
CN111027544A (en) * 2019-11-29 2020-04-17 武汉虹信技术服务有限责任公司 MSER license plate positioning method and system based on visual saliency detection
CN111027544B (en) * 2019-11-29 2023-09-29 武汉虹信技术服务有限责任公司 MSER license plate positioning method and system based on visual saliency detection
CN112966633A (en) * 2021-03-19 2021-06-15 中国测绘科学研究院 Semantic and structural information double-constraint inclined image feature point filtering method
CN112966633B (en) * 2021-03-19 2021-10-01 中国测绘科学研究院 Semantic and structural information double-constraint inclined image feature point filtering method
CN113378865A (en) * 2021-08-16 2021-09-10 航天宏图信息技术股份有限公司 Image pyramid matching method and device

Also Published As

Publication number Publication date
CN104268550B (en) 2017-08-25


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170825

Termination date: 20210918