CN104268550B - Feature extracting method and device - Google Patents

Feature extracting method and device

Info

Publication number
CN104268550B
CN104268550B CN201410479118.7A CN201410479118A CN104268550A
Authority
CN
China
Prior art keywords
characteristic point
image
feature
window
scaled window
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410479118.7A
Other languages
Chinese (zh)
Other versions
CN104268550A (en)
Inventor
鲁路平
张勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201410479118.7A
Publication of CN104268550A
Application granted
Publication of CN104268550B
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06V  IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00  Arrangements for image or video recognition or understanding
    • G06V 10/40  Extraction of image or video features
    • G06V 10/46  Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462  Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/464  Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a feature extraction method and device, which improve the unsatisfactory image matching results obtained in the prior art when the difference in shooting angle between images is large. The method includes: extracting the feature points of the original image; assigning each feature point two or more scale windows of different sizes; within each scale window, performing region growing with the feature point at the center of the window as the seed point, so that each feature point obtains two or more corresponding foreground regions, the number of foreground regions corresponding to each feature point being equal to the number of scale windows assigned to it; computing the covariance matrix of each foreground region; rectifying the affine deformation of the original image according to the covariance matrices of the foreground regions; and extracting feature points on the rectified image and describing them. With this method, the matching quality of images can be significantly improved; the method is easy to implement and easy to popularize and apply.

Description

Feature extracting method and device
Technical field
The present invention relates to image matching technology, and in particular to an image feature extraction method and device.
Background technology
Imaging the same scene from different shooting orientations and performing fully automatic image matching on those images - that is, finding the corresponding points (the image positions of the same object-space point on different images) - is an extremely important basic step in the field of computer vision. The greater the difference in shooting orientation between images, the more severe the deformation between the corresponding images, and the more difficult image matching becomes.
Current image matching algorithms can match images with small deformations well, but for images whose shooting angles differ greatly - for example by more than 40 degrees - a good matching result cannot be guaranteed.
Summary of the invention
The object of the present invention is to provide a feature extraction method and device that improve the unsatisfactory image matching results obtained in the prior art when the difference in shooting angle is large.
To achieve this goal, the technical solution adopted by the present invention is as follows:
In a first aspect, an embodiment of the invention provides a feature extraction method, applied to a feature extraction device. The method includes:
extracting the feature points of the original image;
assigning each feature point two or more scale windows of different sizes, where a scale window is a region centered on the feature point; within each scale window, performing region growing with the feature point at the center of the window as the seed point, so that each feature point obtains two or more corresponding foreground regions, the number of foreground regions corresponding to each feature point being equal to the number of scale windows assigned to it;
computing the covariance matrix of each foreground region;
rectifying the affine deformation of the original image according to the covariance matrices of the foreground regions;
extracting rectified feature points on the rectified image, and describing the rectified feature points.
Here, a rectified feature point is a feature point extracted from the rectified image. Its properties are essentially identical to those of the feature points of the original image; the only difference is the extraction target: rectified feature points are extracted from the rectified image, whereas the original feature points are extracted from the original image.
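The rectification step named above - using a foreground region's covariance matrix to undo local affine deformation - is, in standard terms, a whitening transform C^(-1/2) that maps the region to an isotropic shape. The following is a minimal NumPy sketch under that reading, not the patented implementation; the point cloud and function names are hypothetical.

```python
import numpy as np

def whitening_transform(cov):
    """Return the 2x2 transform that maps a region with covariance `cov`
    to one with identity covariance, i.e. the matrix cov^(-1/2)."""
    vals, vecs = np.linalg.eigh(cov)          # cov is symmetric positive-definite
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# Hypothetical foreground-pixel coordinates: an elongated, rotated cloud.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 2)) * [5.0, 1.0]  # anisotropic spread
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T

A = whitening_transform(np.cov(pts.T))
rectified = pts @ A.T
print(np.allclose(np.cov(rectified.T), np.eye(2)))  # isotropic after rectification
```

Because the transform is computed from the sample covariance itself, the rectified cloud's covariance is the identity up to numerical precision, which is exactly the "affine deformation removed" state the claim aims at.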
With reference to the first aspect, in a first possible implementation of the first aspect, rectifying the affine deformation of the original image according to the covariance matrices of the foreground regions includes:
performing ellipse fitting according to the covariance matrix of each foreground region, to obtain the ellipse parameters corresponding to each foreground region;
for each feature point, among the two or more scale windows of different sizes centered on that feature point, comparing the ellipse parameters of the foreground regions corresponding to each pair of adjacent scale windows and judging whether the difference in ellipse parameters exceeds a preset threshold; if not, considering the ellipse parameters of the foreground regions corresponding to that pair of adjacent scale windows to be similar, and choosing the scale window corresponding to one of the similar ellipse parameters as a first representative scale window;
comparing the ellipse parameters corresponding to the remaining scale windows that share the same center as the first representative scale window with the ellipse parameters corresponding to the first representative scale window, and, if the difference in ellipse parameters exceeds the preset threshold, taking the scale window whose ellipse parameters differ by more than the preset threshold as a second representative scale window;
rectifying the affine deformation of the original image according to the covariance matrices of the foreground regions corresponding to the first representative scale window and the second representative scale window.
In the above, a feature point may have one or more representative scales. For example, a feature point may have only a first representative scale window, or only a second representative scale window; having only a second representative scale window corresponds to the case where none of the scale windows of the feature point are similar to one another.
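The ellipse parameters used in the selection above can be derived from a region's 2x2 covariance matrix by eigen-decomposition: the square roots of the eigenvalues give the minor and major axes (up to a common scale factor), and the eigenvector of the larger eigenvalue gives the principal rotation direction. A minimal sketch of this common construction; the function name and the sample covariance are hypothetical.

```python
import numpy as np

def ellipse_params(cov):
    """Fit an ellipse to a foreground region via its 2x2 covariance matrix.
    Returns (major axis, minor axis, principal direction in degrees)."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    minor, major = np.sqrt(vals)                # axis lengths ~ sqrt of eigenvalues
    direction = np.degrees(np.arctan2(vecs[1, 1], vecs[0, 1]))  # major-axis vector
    return major, minor, direction % 180.0      # direction defined modulo 180 deg

# Hypothetical covariance of a region elongated along ~45 degrees.
cov = np.array([[2.5, 1.5],
                [1.5, 2.5]])
major, minor, direction = ellipse_params(cov)
print(round(major, 3), round(minor, 3), round(direction, 3))  # 2.0 1.0 45.0
```

The modulo-180 step matters for the comparison in the claim: an ellipse rotated by 180 degrees is the same ellipse, so raw angle differences must be wrapped before thresholding.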
With reference to the first possible implementation of the first aspect, in a second possible implementation, the ellipse parameters include a major axis, a minor axis and a principal rotation direction. Comparing the ellipse parameters of the foreground regions corresponding to each pair of adjacent scale windows of a feature point, and judging whether the difference in ellipse parameters exceeds the preset threshold, includes:
ordering the two or more scale windows that share the same center by size, and, for each pair of adjacent windows in that order, computing the difference in the principal rotation direction and the difference in the ratio of the minor axis to the major axis of the corresponding ellipse parameters, where the size order means either descending or ascending order;
judging whether the difference in the principal rotation directions of the ellipse parameters corresponding to a pair of adjacent scale windows exceeds the preset threshold, and whether the difference in the minor-to-major-axis ratios exceeds the preset threshold.
Comparing the ellipse parameters corresponding to the remaining scale windows that share the same center as the first representative scale window with the ellipse parameters corresponding to the first representative scale window includes:
judging whether the difference in principal rotation direction between the ellipse parameters of each remaining scale window sharing the same center as the first representative scale window and the ellipse parameters of the first representative scale window exceeds the preset threshold, and whether the difference in the minor-to-major-axis ratios exceeds the preset threshold.
With reference to the second possible implementation of the first aspect, in a third possible implementation, the preset threshold is a difference of 20 degrees in the principal rotation direction of the ellipse parameters and a difference of 0.1 in the minor-to-major-axis ratio; the ellipse parameters differ by more than the preset threshold when the principal rotation directions differ by more than 20 degrees, or the minor-to-major-axis ratios differ by more than 0.1.
With reference to the first aspect, in another possible implementation of the first aspect, the growth threshold used when performing region growing with the feature point at the center of the scale window as the seed point is obtained by the maximum between-class variance (Otsu) thresholding algorithm.
With reference to the first aspect, in another possible implementation of the first aspect, extracting feature points on the rectified image and describing them includes: extracting feature points on the rectified image using the scale-invariant feature transform (SIFT) algorithm, and describing the features.
With reference to the preceding implementation, in a further possible implementation, when the SIFT algorithm is used to extract and describe feature points on the rectified image, the feature description window size is 55 to 65 pixels.
With reference to the first aspect, in a fourth possible implementation of the first aspect, extracting the feature points of the original image includes:
computing the gradients of the image in the x and y directions using a first-order Gaussian derivative with standard deviation σD; computing the components Ix², IxIy and Iy² of the second-moment matrix of the image grey values using a Gaussian function with standard deviation σI; and extracting the feature points of the original image using the Harris feature point extraction algorithm;
where the standard deviation σD is smaller than the standard deviation σI, and both σD and σI take values between 1 and 2.
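The two-scale Harris scheme just described - derivatives at a differentiation scale σD, second-moment matrix components smoothed at a larger integration scale σI - can be sketched in NumPy as follows. The kernel radius, the Harris constant k = 0.04 and the test image are illustrative choices not specified in the patent.

```python
import numpy as np

def gauss_kernels(sigma):
    """1-D Gaussian and first-order Gaussian-derivative kernels."""
    r = int(3 * sigma + 0.5)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g
    return g, dg

def sep_filter(img, kx, ky):
    """Separable 2-D filtering: kx along rows (x), ky along columns (y)."""
    out = np.apply_along_axis(lambda m: np.convolve(m, kx, mode="same"), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, ky, mode="same"), 0, out)

def harris_response(img, sigma_d=1.0, sigma_i=2.0, k=0.04):
    g_d, dg_d = gauss_kernels(sigma_d)
    g_i, _ = gauss_kernels(sigma_i)
    ix = sep_filter(img, dg_d, g_d)             # x gradient at scale sigma_d
    iy = sep_filter(img, g_d, dg_d)             # y gradient at scale sigma_d
    # second-moment matrix components Ix^2, IxIy, Iy^2, smoothed at sigma_i
    ixx = sep_filter(ix * ix, g_i, g_i)
    ixy = sep_filter(ix * iy, g_i, g_i)
    iyy = sep_filter(iy * iy, g_i, g_i)
    return ixx * iyy - ixy**2 - k * (ixx + iyy)**2

# A synthetic image with one bright square: corners score higher than edges.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
corner, edge = R[10, 10], R[10, 20]
print(corner > 0 and corner > edge)   # True
```

Note σD = 1.0 < σI = 2.0, matching the constraint above; feature points would then be taken as local maxima of R above a response threshold.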
The scale window is a circular region centered on the feature point.
In a second aspect, an embodiment of the invention provides a feature extraction device, including:
a first extraction unit, for extracting the feature points of the original image;
a region growing unit, for assigning each feature point two or more scale windows of different sizes, where a scale window is a region centered on the feature point, and, within each scale window, performing region growing with the feature point at the center of the window as the seed point, so that each feature point obtains two or more corresponding foreground regions, the number of foreground regions corresponding to each feature point being equal to the number of scale windows assigned to it;
a matrix calculation unit, for computing the covariance matrix of each foreground region;
a rectification processing unit, for rectifying the affine deformation of the original image according to the covariance matrices of the foreground regions;
a second extraction unit, for extracting rectified feature points on the rectified image;
a description unit, for describing the rectified feature points extracted by the second extraction unit.
The technical effects achieved by the present invention:
The embodiments of the invention choose point features as the basic element of feature extraction. Point features are the most basic kind of image feature: they place the fewest demands on the scene type of the image, are widely applicable, and can better guarantee feature quantity, point distribution, locality and repeatability.
The embodiments of the invention use feature-point-based region growing to find the foreground region (support neighborhood) of each feature point. This ensures, as far as possible, that the support neighborhood of a feature point belongs to the same object and lies in the same plane, reducing the influence of other textures on the computation of the affine deformation parameters and achieving the effect of a window that adapts to the image content. Choosing this small-range region growing, and abandoning the large-scale region segmentation of region feature extraction algorithms such as Maximally Stable Extremal Regions (MSER), lowers the demands placed on the region segmentation algorithm and better avoids the adverse effects caused by inconsistent region extents when regions are extracted from multiple images, so the method works better in practice.
In the embodiments of the invention, ellipse fitting is performed on the shape parameters of the foreground regions obtained by feature-point-based region growing, and representative scale windows are selected according to whether the differences between the ellipse parameters derived from the covariance matrices of the shapes exceed a preset threshold (a similarity test). This very effectively counters the adverse effect that the anisotropy of scale on heavily deformed images has on feature scale selection, and thus accomplishes, in a simple way, the selection of feature scale when image deformation is large, effectively solving the problems existing in the prior art.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the embodiments of the invention. The objects and other advantages of the embodiments of the invention can be realized and obtained through the written specification, claims and accompanying drawings.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. The drawings in the following description are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. Through the drawings, the above and other objects, features and advantages of the invention will become more apparent.
Fig. 1 is a first flow diagram of embodiment 1 of the invention;
Fig. 2 is a second flow diagram of embodiment 1 of the invention;
Fig. 3 is a first structural diagram of embodiment 2 of the invention;
Fig. 4 is a second structural diagram of embodiment 2 of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the invention without creative effort belong to the protection scope of the invention.
Embodiment 1
The goal of image matching is to find, in the images to be matched, correspondences that satisfy certain objective constraints. For many tasks that need to obtain information from images, image matching is a key and fundamental step - for example recovering a three-dimensional scene from a stereo pair, image retrieval, change detection, and target recognition in security facilities. Since different sensors have different imaging characteristics, the required image matching algorithms also differ. The technical solution of the embodiments of the invention mainly applies to the common optical images of photogrammetry and computer vision. In this field, image matching can roughly be divided into two classes: one with no or only a small amount of prior information and relatively few matched points, generally used to establish an initial relative relationship between images; and one with known prior information (such as the imaging model and parameters of the images) and very dense matched points, commonly used to build a fine three-dimensional model of the scene. The industry usually calls the former aerial triangulation matching and the latter dense matching. The technical content of the embodiments of the invention belongs to the former class.
The input to an image matching algorithm is an image, typically represented as a digitized two-dimensional array whose units are called pixels. Each pixel of a grayscale image has a single value; each pixel of a color image has three values, one quantized value for each of the R, G and B bands. Besides the grey value of a pixel, the adjacency relations of pixels reflect, to a certain extent, the adjacency relations of the scene, because the final imaged position of the scene is also affected by the shooting orientation. In summary, the only information an algorithm can use is the grey values of the pixels and the adjacency relations between them. Related to pixel grey values are the tone processing algorithms of the image; related to pixel adjacency are the geometric processing algorithms. Image matching algorithms strive in this direction: to make the corresponding pixels of the same scene in different images geometrically consistent as far as possible, and to make the tones of corresponding pixels consistent as far as possible. In this way, comparison becomes relatively easy.
The technical emphasis of the embodiments of the invention is the geometric processing of images. To achieve robust and reliable image matching, the matching algorithm must be able to cope with the geometric deformations that exist between images. These are currently divided into three main classes: translation and rotation, isotropic scaling, and anisotropic scaling. They correspond respectively to planar Euclidean transformations, planar similarity transformations and planar affine transformations - three geometric transformations of gradually increasing complexity. Although in practice the geometric deformation between images is often produced by a projective transformation of an even higher level, what is used for matching is usually local features, so an affine transformation is sufficient to describe the local geometric deformation.
A feature is a key concept in image matching: changes in the grey values and shapes of the scene form its features. On the image, all these changes manifest as grey-value changes between adjacent pixels, and the presence of features is a precondition for image matching. If the whole image has no grey-value change, it carries no information and cannot be matched; if some region of the image has no grey-value change, the pixels in that region are difficult to match accurately. A key part of image matching is therefore feature extraction. In mathematical form, image features come in three kinds - points, lines and regions - and different applications prefer different feature types: road extraction prefers line features, and remote-sensing image classification prefers region features. Point features are the lowest level of all features and therefore have the widest range of application. Unlike road extraction, the features used in image matching need not correspond strictly to meaningful scene features; they only need to be helpful to the matching. In general, features that satisfy the following requirements can be used well in image matching:
Repeatability: in the overlap area between different images of the same scene, the features can, as far as possible, be extracted by the same algorithm. That a feature is "repeatable" is the premise of its being "matchable".
Distinctiveness: the neighborhood of the feature contains enough grey-value change and information content that it can be distinguished from other features relatively easily.
Localizability: the image-plane position of the feature can be marked relatively accurately and reliably, without ambiguity.
Sufficient quantity: the number of features must meet the needs of the application, and the number extracted should be easy to adjust by parameters. Meanwhile, increasing the number of extracted features can improve their repeatability.
Good distribution: provided the scene itself permits, the image features should be distributed fairly uniformly over the whole image.
Locality: the image range spanned by a feature must not be too large, to facilitate the handling of geometric deformation in the matching algorithm.
In addition, if features can be extracted by a fairly simple and fast algorithm, the practicality of the matching algorithm is greatly increased.
Feature description is also an indispensable part of an image matching algorithm. After a feature has been extracted, some method must be used to describe it quantitatively, forming a feature vector that represents each feature, before it can be used in image matching. The most direct and simplest description value is the pixel grey value itself. But there are tone differences and noise between different images, so using grey values directly is very unreliable; even if tone differences are eliminated, describing a feature with the information of a single pixel is far from enough and does not satisfy the distinctiveness requirement above, so a certain range of neighborhood pixels must be brought in to strengthen the support.
Determining the neighborhood involves two aspects: its size and its shape. If two images have the same resolution and their shooting orientations differ little - that is, the geometric deformation between the images is at the level of a planar Euclidean transformation - then the shape of the neighborhood is hardly important and mainly its size must be considered; the selection criterion is to find a balance between the support strength of the neighborhood (the distinctiveness of the feature) and the computational cost. If the image resolutions differ and the geometric deformation between the images is at the level of a similarity transformation, then besides the size of the neighborhood, the resolution ratio between the images must also be determined in advance so that the two support regions correspond to the same extent of ground objects. For example, if one tries to match a whole tree against a single leaf, the success rate will be very low - this is the scale problem in image matching. If the shooting orientations of the images also differ greatly, the geometric deformation between the images reaches affine deformation or even a higher level, and the geometric deformation between the different images becomes obvious. If the scene photographed is three-dimensional, occlusion problems may also exist; at this point, determining the shape of the neighborhood becomes extremely important. Based on the above studies, the inventors found that, after finding suitable neighborhood pixels, the local affine deformation of the image can be rectified according to those pixels, reducing the geometric deformation between images to the level of a similarity transformation; solving the scale problem then reduces it further to the level of a Euclidean transformation, and solving the rotation between the images essentially completes the geometric deformation processing. Geometric deformation processing and feature extraction are interdependent, so the former is usually regarded as part of the latter.
The image matching algorithms of the prior art are capable of matching images whose deformation is at the level of a planar similarity transformation, and can even handle somewhat more severe deformations - for example, shooting angle differences within 40 degrees - but if the deformation is more severe, the matching result cannot be guaranteed. The main reasons for this limited matching capability are as follows:
If an algorithm without local affine deformation rectification is used, such as the scale-invariant feature transform (SIFT) algorithm, the deformation of the image brings three adverse effects to this kind of algorithm. First, when describing a feature, the size of the description window must be determined, and this window size is determined according to the scale of the feature; as the degree of deformation between images deepens, the anisotropy of scale on the image becomes more and more severe, and scale selection using the isotropic difference-of-Gaussians (DoG) operator deteriorates sharply, so that when the feature is described, the image range described is inconsistent across images. Second, for severely deformed images, the imaged shape of the same object-space neighborhood differs from image to image, and using a circular or square window cannot guarantee that the same feature neighborhood is included. Finally, the anisotropy of the imaging also causes large differences in the gradient distributions within corresponding feature neighborhoods. Together, these factors seriously reduce the similarity of the feature description vectors of corresponding features, causing matching to fail. So, without first performing local rectification of the image, directly applying a scale-invariant-style feature description is meaningless.
If local rectification of the image is performed first, then an affine-invariant image matching algorithm must be used. At present, Maximally Stable Extremal Regions (MSER) region feature extraction is an algorithm with a relatively good deformation rectification effect: it performs affine deformation rectification by finding the most stable salient regions in the image. But in practice this algorithm encounters several problems. First, since only the most stable salient regions in the image are used, the number of features is on the low side and their distribution cannot be guaranteed; the algorithm is picky about the scene type and favors scenes rich in region features. Second, the extracted region features may span too large a range, and this non-locality strengthens the influence of peripheral dissimilarity on the matching. Most troublesome of all, owing to the complexity of images, the MSER algorithm generally cannot guarantee good correspondence of the region extraction results, so the deformation rectification results are dissimilar.
For images that are difficult to match, the proportion of correct corresponding features among the nearest neighbor - or the first k (k > 0) nearest neighbors - of the features found according to the similarity of the feature vectors is very low, making it hard to select correct matches from them; this is the main cause of matching failure.
From the above analysis the inventors concluded that finding identical neighborhood ranges is the most essential thing in image matching. By locally rectifying, describing and matching the images according to the characteristics of the neighborhood range, assisted by a good gross-error elimination strategy, it is possible to successfully match images that are difficult to match. In view of this, after lateral thinking the inventors devised an affine-invariant feature extraction method based on feature-point region growing, to improve the matching of heavily deformed images. The main design ideas of the embodiments of the invention are as follows:
A very small number of local pixels is enough to reflect the local deformation information; it is not necessary to know the overall shape of the object, nor to extract a large range. In view of the characteristics of point features, the embodiments of the invention start from feature points and develop the subsequent operations from there.
Considering the need to find roughly identical neighborhood ranges, the inventors chose to perform local region growing with the extracted feature points as seed points, so that the resulting shape is not only similar in grey value but also spatially connected. The purpose is to eliminate, as far as possible, the points that do not belong to the shape and to exclude the interference of other textures, ensuring the purity of the computed affine deformation parameters and thereby improving the similarity of the local images after rectification.
Because neither the overall shape of the object nor the scale difference between the images is known, and scale on an image may also be anisotropic, using only a single window size for region growing gives a rather low probability of obtaining similar affine rectification results. A multi-scale region-growing method is therefore needed: a scale space is established and region growing is performed for each feature point at different scales, improving the completeness of the features.
While ensuring feature completeness as far as possible, efforts should also be made to reduce feature redundancy. Take an ideal corner as an example: to some extent it is a scale-independent feature, i.e., region growing at different scales yields similar affine rectification parameters. If ellipses are fitted to these foreground region shapes, the rotation directions and the minor-to-major axis ratios of the ellipses will be very close. For example, when region growing is performed on the same feature point with four windows of different sizes, the four ellipses fitted to the foreground shapes essentially coincide. To reduce feature redundancy, representative scales must therefore be selected and the redundant remaining scales removed.
To reduce the influence of the window on the region shape parameters, the embodiments of the present invention preferably use a circular window to delimit the neighborhood of a feature point, rather than windows of other shapes. For example, in an implementation the region growing may first be performed in a rectangular window, and the result then masked with a circular template, removing the foreground pixels that fall outside the circular template.
Based on the above design ideas, as shown in figure 1, an embodiment of the invention discloses a feature extraction method, applied to a feature extraction device, the method comprising:
Step S300: extract the feature points (initial feature points) of the original image. Step S301: assign each feature point two or more scale windows of different sizes, a scale window being a region centered on the feature point; in each scale window, perform region growing with the central feature point as the seed, obtaining for each feature point two or more foreground regions, the number of foreground regions per feature point being equal to the number of scale windows assigned to it. Step S302: compute the covariance matrix of each foreground region. Step S303: perform affine deformation rectification of the image according to the covariance matrices of the foreground regions. Step S304: extract rectified feature points on the rectified image and describe them.
Further, as shown in figure 2, the main technical scheme of the embodiments comprises: Step S200: extract the feature points of the original image with a feature point extraction algorithm. Step S201: perform region growing with the feature points extracted in step S200 as seeds, obtaining foreground regions (support regions). Step S202: compute the shape parameters of the foreground regions. Step S203: select representative scale windows and remove redundant scale windows. Step S204: perform affine deformation rectification of the image. Step S205: extract feature points on the rectified image and describe them. The details of each step are as follows:
Step S200: extract the feature points of the original image with a feature point extraction algorithm.
Point features are chosen as the basic element of the algorithm. A point feature is the most basic kind of feature; it is the least demanding on the scene type of the image and the most widely applicable, and it best guarantees the quantity, spatial distribution, locality and repeatability of the features.
At present, feature points fall mainly into corner points and blob points. A corner point is usually an intersection of edges, an endpoint, or a location of large curvature change. A blob is generally surrounded by a boundary formed by gray-level changes and is internally rather homogeneous; the blob lies inside this boundary, and the boundary embodies the size of the feature, i.e., the feature scale, so blobs inherently carry scale properties. The inventors found through research that corner points and blobs are complementary in location; if used in combination, they make the feature types on an image richer and their distribution more uniform. The embodiments introduce one typical extraction algorithm for each of these two feature categories, and, by analyzing the respective characteristics of the two typical algorithms, adjust the way the embodiments apply them.
The Harris feature point extraction algorithm is a typical algorithm for extracting corner points. Its design idea is that the distribution of gradients on an image reflects the type of local image structure: in a homogeneous region the gradients are small in all directions; at an edge the gradient is large in one direction and small in the others; at a feature point the gradients are fairly large in at least two directions. The gradient distribution can be represented by the covariance matrix of the x-direction and y-direction gradients; whether an image point is a feature point is judged from the eigenvalues of the covariance matrix formed by the gradients within the point's neighborhood. This matrix is also called the second moment matrix. The steps for extracting Harris feature points are as follows:
Compute the gradients of the image in the x and y directions, e.g. by convolving with first-order Gaussian derivative templates along the row and column directions; compute the three components Ix², IxIy, Iy² of the second moment matrix; apply a Gaussian weighted average over a neighborhood to the three components, the size of this neighborhood being determined by the standard deviation σ of the Gaussian function; compute the Harris response of each image point; extract the local maxima of the response as candidate feature points, then filter out the final feature points according to application requirements, e.g. by setting a grid size for feature extraction.
Since the Harris feature point extraction algorithm is prior art, it is not elaborated here. It should be noted, however, that the choice of the standard deviations in the Harris algorithm has not been studied specifically in the prior art. The inventors found through research that the algorithm involves two parameters: the standard deviation σ_D of the first-order Gaussian derivative used to compute the image gradients, and the standard deviation σ_I of the Gaussian weighted average applied to the three components of the second moment matrix. If the choice of these two parameters follows the rules below, the extracted corner locations will lie somewhat inside the corners, greatly facilitating the subsequent region growing:
Neither value may be too small: too small a value leaves the Harris response at feature points without a clear advantage and makes the algorithm very sensitive to noise. The values should be chosen according to the actual conditions of the image, including its noise level and sharpness; the larger the standard deviations, the stronger the resistance to noise and blur. As the two standard deviations increase, the extracted feature positions move toward the inside of the corners; when multiple features fall within the range corresponding to the standard deviation, the features interfere with each other and their information merges. It has been verified that values of σ_D and σ_I between 1 and 2 are appropriate, preferably 1.6. When the standard deviation σ_I of the Gaussian weighted average over the local range is smaller than the standard deviation σ_D used to compute the gradients, the gradient information in the local range is not well integrated and aggregated and cannot be used effectively to extract feature points; the relative sizes should therefore satisfy σ_D ≤ σ_I, and the embodiments preferably take σ_D = 0.8 σ_I.
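As an illustration of the steps and parameter rules above, the following is a minimal sketch of a Harris response computation, not the patented implementation; the use of `scipy.ndimage.gaussian_filter` and the constant k = 0.04 are assumptions of this sketch, while the defaults follow the preferred relation σ_D = 0.8 σ_I:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.6, sigma_i=2.0, k=0.04):
    """Harris response map; sigma_d = 0.8 * sigma_i per the preferred relation."""
    img = img.astype(float)
    # Gradients via first-order Gaussian derivative filters (standard deviation sigma_d)
    ix = gaussian_filter(img, sigma_d, order=(0, 1))
    iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Gaussian weighted average of the three second-moment-matrix components (sigma_i)
    ixx = gaussian_filter(ix * ix, sigma_i)
    ixy = gaussian_filter(ix * iy, sigma_i)
    iyy = gaussian_filter(iy * iy, sigma_i)
    # Response: det(M) - k * trace(M)^2; local maxima are candidate feature points
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2
```

Local maxima of the returned map, filtered by an application-dependent threshold or grid, give the candidate corners.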
The Laplacian of Gaussian (LoG) operator, formed from second-order Gaussian derivatives, and its approximations such as the DoG operator are well suited to extracting blobs. The standard deviation of the LoG operator determines the diameter of its central well; near the boundary of the well the weights of the LoG template change sign. If a feature on the image has gray-level transitions corresponding to this well size, the absolute value of the LoG response at that feature will be very large, so the feature can be extracted; thus LoG operators with different standard deviations detect features of different sizes. This is the basic principle of multi-scale detection and automatic feature scale selection. Because the LoG operator is computationally expensive, scholars have proposed several approximation operators that are morphologically close to LoG but cheaper to compute, such as the DoG (Difference of Gaussian) operator. Since the corresponding ways of extracting blobs are prior art, the embodiments introduce blob extraction and its characteristics only with the DoG operator as an example. The steps for extracting DoG feature points are as follows:
Filter the image with Gaussian functions G(σi) for a series of standard deviations σi increasing from small to large; subtract the result of filtering with G(σi) from the result of filtering with G(σi+1), obtaining multiple difference-of-Gaussian images; find on these images the local extrema in both the spatial and the scale domain as candidate feature points; remove the candidates lying on edges with the Harris response or the Hessian matrix method, obtaining the final feature points. It has been verified that as the standard deviation increases, the corner locations extracted by the DoG operator also move toward the inside of the corners, similarly to the Harris operator characteristics explained before. It can also be concluded that corners can be extracted at multiple scales, whereas blob-type feature points can be extracted only at the scale corresponding to their structure size. This again illustrates, to some extent, the scale independence of ideal corners and the scale selection property of the DoG operator. On this basis, the embodiments derive the following rules for selecting each scale of the scale space when extracting DoG feature points:
The starting scale must not be too small, typically around 1.5 to 1.6; its main function is noise suppression. Adjacent scales must not differ too much, otherwise the extraction of feature points and the effect of automatic scale selection are impaired. The maximum scale must not be too large: an overly large blob feature generally indicates a wide homogeneous gray region, which would require a rather large window for feature description in order to carry enough information. Specific sizes may be chosen flexibly according to the actual situation, and the embodiments are not limited in this respect.
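The DoG extraction steps above can be sketched as follows; this is a simplified single-level version (no pyramid and no edge-response filtering), and the threshold value is an assumption of the sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_stack(img, sigmas):
    """Difference-of-Gaussian images for an ascending series of standard deviations."""
    g = [gaussian_filter(img.astype(float), s) for s in sigmas]
    return np.stack([g[i + 1] - g[i] for i in range(len(g) - 1)])

def dog_extrema(dog, thresh=0.01):
    """Local extrema over both the spatial and the scale domain (3x3x3 neighbourhood)."""
    mx = maximum_filter(dog, size=3)
    mn = minimum_filter(dog, size=3)
    is_ext = ((dog == mx) | (dog == mn)) & (np.abs(dog) > thresh)
    return np.argwhere(is_ext)  # rows of (scale_index, row, col)
```

A bright blob produces a scale-space minimum of the DoG stack at its center, at the scale matching its size.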
Step S201: perform region growing with the feature points extracted in step S200 as seeds, obtaining foreground regions (support regions).
An initial feature point (a feature point of the original image) needs the support of a neighborhood to have real value. For images whose deformation exceeds a plane similarity transformation, circular windows generally cannot cover similar neighborhood ranges on different images. The embodiments determine the neighborhood range of each feature point by feature-point-based region growing, which reduces the influence of other textures and increases the possibility of finding similar neighborhoods. Feature-point-based region growing involves two elements: the size of the region-growing operation window and the region-growing rule. The window size delimits the range of the initial region of interest (Region Of Interest, ROI); the growing rule realizes the segmentation of texture within the ROI, and its quality directly affects the quality of the resulting foreground region shape. The embodiments introduce these two parts in turn.
The size of the region-growing operation window:
The starting point of the embodiments is the feature point: the overall shape of the object is unknown, as is the scale difference between the original images, and scale on an image may also be anisotropic. If only one size were used for the region-growing operation window, the probability of obtaining similar neighborhood ranges would be rather low. The embodiments therefore assign multiple scale windows to each feature point; regarding the size of the region-growing operation window, the key is to establish a scale space.
Establishing a scale space involves two operations: building the pyramid structure of the image and choosing n (n > 0) different scales on each pyramid level. The inventors note that how many levels the pyramid has, what the ratio between levels is, and how the scales on each level are chosen may follow this principle: the scale space established must be able to accommodate the scale differences between the images to be matched. For example, if the resolution everywhere differs by a factor of two between the images, the minimal scale space can be set as follows: build the pyramid with a factor of two between levels and choose only one scale per level; or build only one level, i.e., only the original image, and choose two scales on that level with a factor of two between them. The scale-space parameters must be adjusted according to the data being processed; if the scale space established cannot accommodate the scale differences between the matched images, the probability of successful matching drops sharply. This is like drawing water from a well with a bucket: if the water surface is 4 meters below the wellhead but the rope tied to the bucket is only 2 meters long, no water can be reached. If the image shapes are severely compressed because of the shooting angle, appropriately magnifying the images when building the pyramid may also be considered.
Assume the scale space has been built and the pyramid structure has k (k > 0) levels, each denoted P_i, i ∈ [0, k-1]; n scales are chosen on each pyramid level, denoted σ_j, j ∈ [0, n-1]. Initial feature points are then extracted on each pyramid image with the feature point extraction algorithms, and each feature point undergoes region growing on the pyramid image from which it was extracted. There are n region-growing operation windows, each centered on the feature point, with side lengths ((int)ceil(3σ_j)) * 2 + 1, where int rounds a value down to the nearest integer and ceil returns the smallest integer greater than or equal to the given expression. Meanwhile, to make the growing window circular in shape, a circular mask of corresponding size can be applied in each window to the region-growing result.
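A minimal sketch of the window-size formula and the circular mask described above; the mask construction is one possible implementation, not a detail fixed by the text:

```python
import math
import numpy as np

def growth_window_sizes(sigmas):
    """Side length of each region-growing window: ((int)ceil(3*sigma)) * 2 + 1."""
    return [int(math.ceil(3.0 * s)) * 2 + 1 for s in sigmas]

def circular_mask(size):
    """Boolean disc used to trim region-growing results to a circular window."""
    r = size // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return (yy * yy + xx * xx) <= r * r
```

The odd side length keeps the feature point exactly at the window center.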
It should be pointed out that for feature points extracted with the DoG operator, using n region-growing operation windows gives much better region-growing results than using only the scale window corresponding to the feature scale of the point, because as the projective distortion of the image deepens, the effect of the DoG operator's automatic feature scale selection deteriorates markedly.
For these reasons, in the embodiments each extracted feature point is preferably assigned multiple scale windows of different sizes. A scale window of a feature point is a region centered on that feature point, like circles of different radii drawn around it. The relation between the scale windows of different feature points is: the sizes of the scale windows of every feature point are established from the same ascending parameter series; only because the extracted feature positions differ do the positions of the scale windows (region-growing windows) on the image differ.
The region-growing rule:
The region-growing rule is extremely important because it determines the condition under which the growing stops, i.e., the threshold. Threshold selection must be adaptive: the threshold for region growing within the window (scale window) of each feature point must be determined from the actual conditions of that window, and a single global threshold must not be imposed dogmatically. Automatic threshold selection is a well-studied topic with abundant literature. Among the many algorithms, the maximum between-class variance algorithm (Otsu thresholding) gives fairly satisfactory results for the application of the embodiments, so Otsu thresholding is preferably used to determine the region-growing threshold.
The design idea of Otsu thresholding is: the quality of a threshold is evaluated by the separability between the classes it produces; this separability is described by the between-class variance, and the larger the between-class variance, the better the separation between the classes. Otsu thresholding is therefore realized by maximizing the between-class variance. The algorithm steps are as follows:
Compute the gray-level histogram of the image and normalize it. Suppose the image is to be segmented into k classes C_i, i ∈ [1, k]; then k-1 thresholds T_i, i ∈ [1, k-1], with T_{i-1} < T_i, must be determined. Compute the mean gray level μ_A of the whole image; compute the between-class variance σ_B² under each candidate threshold; the threshold that maximizes σ_B² is the threshold sought.
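For the two-class case (foreground versus background within one growing window), the steps above reduce to the following sketch; the bin count is an assumption of this sketch:

```python
import numpy as np

def otsu_threshold(pixels, bins=256):
    """Threshold maximizing the between-class variance of a normalized histogram."""
    hist, edges = np.histogram(pixels, bins=bins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    total_mean = (p * centers).sum()             # mean gray level of the window
    best_t, best_var = centers[0], -1.0
    w0, sum0 = 0.0, 0.0
    for i in range(bins - 1):
        w0 += p[i]; sum0 += p[i] * centers[i]    # class 0: bins up to i
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        mu0, mu1 = sum0 / w0, (total_mean - sum0) / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, centers[i]
    return best_t
```

Within each scale window the threshold is computed from that window's own histogram, keeping the selection adaptive rather than global.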
Finding the foreground region (support neighborhood) of a feature point by feature-point-based region growing as described above ensures, as far as possible, that the support neighborhood of the feature point belongs to the same object and approximately to the same plane, and reduces the influence of other textures on the computed affine deformation parameters; the method in effect acts as an adaptive window driven by image content. Choosing small-range (feature-point-based) region growing instead of large-scale region segmentation such as the MSER algorithm reduces the demands on the segmentation algorithm, and better avoids the adverse effects caused by inconsistent region extents extracted from multiple images.
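A hedged sketch of how the region growing itself might proceed inside one scale window; the 4-connectivity and the grow-from-the-seed's-class rule are assumptions of this sketch, not details fixed by the text:

```python
from collections import deque
import numpy as np

def grow_region(window, seed, thresh):
    """4-connected region growing from the seed: keep pixels on the seed's
    side of the threshold that are spatially connected to it."""
    h, w = window.shape
    side = window[seed] >= thresh              # which class the seed belongs to
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed]); mask[seed] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and (window[ny, nx] >= thresh) == side):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

Combining this with an adaptive threshold determined inside the window yields the foreground region (support region) of the seed feature point.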
Step S202: compute the shape parameters of the foreground regions.
Computing the shape parameters of a foreground region comprises two parts: computing the covariance matrix of the foreground region shape, and performing ellipse fitting according to the covariance matrix to obtain the major axis, minor axis and rotation direction of the ellipse.
The rotation direction of the ellipse and the ratio of its minor to major axis reflect the anisotropy of the local image. According to the degree of similarity between the ellipse-fitting parameters of the region-growing results of a feature point at each scale, some representative scales can be selected for the feature and the scales whose ellipse parameters are similar to those of a representative scale removed. Using the covariance matrix of the foreground shape obtained by region growing of the feature point at a representative scale, the image is locally rectified for affine deformation; this eliminates the anisotropy of the image and reduces the deformation between corresponding local feature images after rectification to the rank of a plane similarity transformation. The SIFT algorithm, which handles image matching up to plane similarity transformations well, can then complete the subsequent feature description on the rectified images. The foreground covariance matrix corresponding to a single feature point can only rectify the image locally, but because feature points are distributed all over the image, the whole image is rectified piece by piece.
In this step, the shape parameters of the foreground regions obtained by feature-point-based region growing, i.e., the rotation direction of the ellipse corresponding to the covariance matrix of the foreground shape and the ratio of its minor to major axis, are used to select representative feature scales. This very effectively counters the adverse effect that scale anisotropy on strongly deformed images has on feature scale selection, thereby completing feature scale selection in a simple way even when the image deformation is large.
Performing affine rectification of a region only requires the covariance matrix of the foreground shape; the ellipse-fitting parameters are computed for selecting representative feature scales and removing redundant scales. The statistics of the region covariance matrix, ellipse fitting and the computation of ellipse parameters are very mature prior art; the embodiments list one implementation and do not elaborate on the rest.
Computing the covariance matrix of the foreground region shape:
After the region growing of a feature point, the pixels belonging to the foreground shape P are marked 1 and the other pixels 0. The covariance matrix Σ of the foreground shape can then be computed as follows:

Σ = (1/|P|) · Σ_{(x,y)∈P} [ (x−x̄)², (x−x̄)(y−ȳ) ; (x−x̄)(y−ȳ), (y−ȳ)² ]

Wherein, (x, y) are the observed coordinates of the planar shape P, (x̄, ȳ) is its centroid, and |P| is the number of foreground pixels.
Computing the ellipse-fitting parameters:
A planar ellipse has the following parameters: the centroid coordinates, the semi-major axis a, the semi-minor axis b, and the rotation direction θ. The centroid of the foreground shape is the centroid of the ellipse; the major axis, minor axis and rotation direction can be computed from the eigenvalues λ1, λ2 (λ1 ≥ λ2) of the covariance matrix of the region shape and the eigenvector e1 of λ1, as follows:

a = 2√λ1, b = 2√λ2, θ = atan2(e1y, e1x)
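Under the standard convention that a solid ellipse with semi-axes a, b has second moments a²/4 and b²/4 (an assumption of this sketch), the two computations above can be written as:

```python
import numpy as np

def shape_covariance(mask):
    """Covariance matrix of the (x, y) coordinates of pixels marked 1."""
    ys, xs = np.nonzero(mask)
    return np.cov(np.stack([xs, ys]).astype(float))  # 2x2 matrix

def fitted_ellipse(cov):
    """Semi-axes and rotation direction of the ellipse fitted to the shape."""
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    lam2, lam1 = vals                           # lam1 >= lam2
    a, b = 2.0 * np.sqrt(lam1), 2.0 * np.sqrt(lam2)
    e1 = vecs[:, 1]                             # eigenvector of lam1
    theta = np.arctan2(e1[1], e1[0])            # rotation of the major axis
    return a, b, theta
```

For an elongated horizontal shape the fitted rotation direction is near 0 (or π, since the eigenvector sign is arbitrary) and the minor-to-major axis ratio is small.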
Step S203: select representative scale windows and remove redundant scale windows.
In the embodiments, to ensure the completeness of the features, the scale space established is redundant. Using the features at all scales is the so-called multi-scale method (Multi-Scale Method); choosing the features at some scales according to certain rules is the scale selection method (Scale Selection Method). The inventors found that when the viewing angles of the images differ greatly and the deformation between the images is severe, matching algorithms such as SIFT fail. Thorough analysis shows two main reasons. First, as the deformation between the images deepens, the anisotropy of scale on the images becomes more and more serious; performing scale selection with the isotropic DoG operator then inevitably deteriorates sharply, directly causing the image ranges captured for feature description to be inconsistent across images. Second, with the deepening deformation, describing features of the image with a circular window cannot adapt to the scale anisotropy on the images to be matched, and the gradient distributions within the corresponding neighborhoods differ between images; that is to say, without first locally rectifying the images, performing SIFT feature description is meaningless.
Based on the above findings, the embodiments preferably perform scale selection through the anisotropy of scale. The anisotropy of scale can be expressed mathematically with the parameters of the ellipse fitted to the foreground shape above; more specifically, with the rotation direction θ of the ellipse and the ratio r of its minor to major axis. The rotation direction reflects the direction of maximum scale deformation, and the minor-to-major axis ratio reflects the degree of scale anisotropy, whereas the specific sizes of the major and minor axes only reflect the scaling of the shape, not the deformation of the feature.
When both the rotation directions and the minor-to-major axis ratios of two ellipses are close, the two shapes can be considered essentially similar. Therefore, by fitting ellipses to the foreground shapes obtained by region growing of a feature point in each scale window, and comparing the similarity between these ellipse parameters, scale selection can be carried out. When the ellipse parameters at several neighboring scales are similar, one of them can be retained as the representative of the feature scale, and the shape information at this scale is used to perform local deformation rectification of the image. In the embodiments, these selected scales are called representative scales (Representative Scales).
In summary, in the embodiments the detailed procedure of scale selection is: perform seed-point region growing of each extracted initial feature point separately in its scale windows of increasing size; fit an ellipse to each foreground shape obtained and compute its parameters; compare, in order of window scale, the rotation directions and the minor-to-major axis ratios among the ellipse parameters corresponding to the same feature point. If, for several adjacent scales of the feature, the rotation directions differ by no more than 20 degrees and the minor-to-major axis ratios differ by no more than 0.1, the middle one of those scales is elected as a representative scale of the feature. When the ellipse parameters of a subsequent scale differ from those of these scales beyond the limits, a new representative scale is considered to have appeared, so a feature may retain multiple representative scales. For example, suppose a feature point has six scale windows from small to large. When ellipses are fitted to the foreground shapes produced by region growing at scales 1, 2 and 3, their rotation directions differ by no more than 20 degrees and their minor-to-major axis ratios by no more than 0.1, i.e., the ellipse parameters are similar. With similar ellipse parameters, the affine rectification results obtained with the foreground shapes of the corresponding scale windows will also be similar and redundancy would arise, so representative scales must be found and the unnecessary scales removed: scale 2 is taken as one representative scale. Suppose the ellipse parameters of the remaining three scale windows (scales 4, 5 and 6) are also similar among themselves, but differ beyond the limits from the ellipse parameters at every one of the preceding scales; then scale 5 is taken as another representative scale. The result is that this feature point has two representative scales, and the other scales are no longer considered. Correspondingly, the same operation is carried out for the other feature points, i.e., representative scale selection is performed for every feature point. This method applies to both corner-type and blob-type features, because the frame of reference of the algorithm is the shape parameters of the feature.
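One plausible reading of the selection procedure is sketched below: consecutive scales are grouped while their rotation directions stay within 20 degrees and their minor-to-major axis ratios within 0.1 of the group's first member, and the middle scale of each group is kept. The exact grouping reference (first member versus pairwise neighbours) is an assumption of this sketch:

```python
import numpy as np

def representative_scales(ellipses, d_theta=np.deg2rad(20.0), d_ratio=0.1):
    """Pick representative scale indices from per-scale ellipse fits.

    ellipses: list of (a, b, theta) in ascending scale-window order."""
    groups, current = [], [0]
    for i in range(1, len(ellipses)):
        a0, b0, t0 = ellipses[current[0]]
        a1, b1, t1 = ellipses[i]
        if abs(t1 - t0) <= d_theta and abs(b1 / a1 - b0 / a0) <= d_ratio:
            current.append(i)          # still similar: same group
        else:
            groups.append(current)     # parameters transgress the limits:
            current = [i]              # a new representative scale appears
    groups.append(current)
    return [g[len(g) // 2] for g in groups]   # middle scale of each group
```

On the six-scale example above (two runs of three similar scales), this keeps the second and the fifth scale.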
Step S204: perform affine deformation rectification of the image.
In the embodiments, after computing the foreground shape covariance matrices and selecting the representative feature scales, local affine deformation rectification of the image is performed with the covariance matrices corresponding to the representative scales. Since the covariance matrix is positive definite, it can be decomposed by Cholesky decomposition, and the shape coordinates transformed with the inverse of the resulting factor matrix; only a rotational deformation then remains between two transformation results, realizing affine rectification of the image.
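The Cholesky-based rectification can be sketched as follows: the inverse Cholesky factor of the shape covariance whitens the local coordinates, after which only a rotation remains between two rectified shapes. The synthetic point-cloud demonstration is an illustration of the principle, not the patented procedure:

```python
import numpy as np

def whitening_transform(cov):
    """Inverse Cholesky factor of the shape covariance; applying it to the
    local coordinates removes the affine distortion up to a residual rotation."""
    L = np.linalg.cholesky(cov)
    return np.linalg.inv(L)

# Example: an anisotropically distorted point cloud becomes isotropic again.
rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [0.0, 0.5]])        # an arbitrary affine distortion
pts = A @ rng.standard_normal((2, 2000))
W = whitening_transform(np.cov(pts))
white = W @ pts                               # rectified coordinates
```

Applying `W` to the pixel coordinates of the local patch realizes the local affine rectification.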
Step S205: extract feature points on the rectified image and describe them.
After affine rectification, the similarity of the images near corresponding features is greatly improved, which is convenient for the subsequent image matching work. Since the coordinate transformation between a rectified image and its corresponding original image is determined, features need only be extracted and described on the rectified image. Many algorithms can accomplish this part; the embodiments preferably use the SIFT algorithm, whose details may be found in the available literature and are not detailed here. What will be introduced here is a small but very important issue in feature description: how to choose an appropriate size for the feature description window (also called the match window). Whether the correlation coefficient or SIFT-like feature description vectors are used, the problem of choosing the description window size arises. Too small a window contains too little image information, describes the feature too weakly, and easily leads to high mismatch rates; too large a window is affected by image deformation, lowering the similarity between corresponding features and likewise reducing the matching rate. Choosing a suitable description window size is therefore very important for improving the matching success rate.
The size of the feature description window is generally related to the scale of the feature. For example, in the SIFT algorithm, if the scale of a feature point is s, the description window size w of that feature point may be chosen as follows: w = ((int)ceil(s*6))*4 + 1, where (int) rounds a value down to the nearest integer and ceil returns the smallest integer greater than or equal to the given expression. Studying the influence of the description window size on matching therefore has important reference value for how the scale space is constructed.
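The quoted relation is easy to transcribe directly (a sketch of the formula only, not the patent's implementation; the function name is illustrative):

```python
import math

def sift_window_size(s):
    """Feature description window size for a feature of scale s, per the
    formula w = ((int)ceil(s*6))*4 + 1 quoted above."""
    return int(math.ceil(s * 6)) * 4 + 1

# e.g. a scale near 2.4 yields a window of 61 pixels, close to the
# ~60-pixel size found appropriate in the experiments described next
for s in (1.5, 2.4, 2.5, 6.5):
    print(s, sift_window_size(s))
```

Note that the window size grows in steps of 4 pixels as the scale increases, since ceil(s*6) advances one integer at a time.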
The embodiment of the present invention used several stereo pairs with accurate interior and exterior orientation elements and terrain models to study the influence of the description window size on matching. The results show that, for the SIFT description method, a description window size of about 60 pixels is appropriate. From the relation between feature scale and description window size, it can be deduced that a good scale range when constructing the scale space is 1.5 to 6.5. It was also found that when the image deformation is small, enlarging the description window does not cause a clear decline in match point accuracy, whereas when the image deformation is large, the match point accuracy declines much faster.
Embodiment 2
As shown in Fig. 3, an embodiment of the present invention discloses a feature extraction apparatus, including: a first extraction unit 100, configured to extract feature points of the original image; a region growing unit 101, configured to assign each feature point two or more scale windows of different sizes, the scale windows of each feature point being regions centered on that feature point; in each scale window, region growing is performed with the feature point at the center of the scale window as the seed point, yielding two or more foreground regions for each feature point, the number of foreground regions for each feature point being equal to the number of scale windows assigned to it; a matrix calculation unit 102, configured to calculate the covariance matrix of each foreground region; a correction processing unit 103, configured to perform affine deformation correction on the image according to the covariance matrices of the foreground regions; a second extraction unit 104, configured to extract feature points on the corrected image; and a description unit 105, configured to describe the feature points extracted by the second extraction unit 104.
The matrix calculation unit 102 is specifically configured to perform ellipse fitting according to the covariance matrix of each foreground region to obtain the ellipse parameters of each foreground region; for the two or more scale windows of different sizes centered on each feature point, compare the ellipse parameters of the foreground regions of pairwise-adjacent scale windows of each feature point and determine whether the difference in ellipse parameters exceeds a preset threshold; if not, the ellipse parameters of the foreground regions of the pairwise-adjacent scale windows are considered similar, and the scale window corresponding to one of the similar ellipse parameters is chosen as the representative scale window; compare the ellipse parameters of the remaining scale windows sharing the same center as the representative scale window with the ellipse parameters of the representative scale window; if the difference in ellipse parameters exceeds the preset threshold, take the scale window whose ellipse parameters differ by more than the threshold as a new representative scale window, while retaining the original representative scale window. The correction processing unit 103 is configured to perform affine deformation correction on the image according to the foreground-region covariance matrices of the representative scale windows.
In the above, the ellipse parameters include a major axis, a minor axis, and a principal rotation direction. The matrix calculation unit 102 is specifically configured to compare, in size order of the two or more scale windows sharing the same center, the difference in principal rotation direction and the difference in the minor-to-major axis ratio between the ellipse parameters of pairwise-adjacent scale windows, where size order means descending or ascending order; to determine whether the difference in principal rotation direction between the ellipse parameters of pairwise-adjacent scale windows exceeds the preset threshold, and whether the difference in the minor-to-major axis ratio exceeds the preset threshold; and to determine whether, between the ellipse parameters of the remaining scale windows sharing the same center as the first representative scale window and the ellipse parameters of the first representative scale window, the difference in principal rotation direction exceeds the preset threshold and the difference in the minor-to-major axis ratio exceeds the preset threshold.
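A minimal eigendecomposition sketch of this ellipse-parameter comparison (the function names and the eigenvalue-to-axis mapping are illustrative assumptions; the 20-degree and 0.1 thresholds are the values given later in claim 4):

```python
import math
import numpy as np

def ellipse_params(cov):
    """Ellipse parameters (major axis, minor axis, principal rotation
    direction in radians) fitted to a 2x2 foreground-region covariance."""
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    minor, major = np.sqrt(evals)             # axis lengths from the eigenvalues
    vx, vy = evecs[:, 1]                      # eigenvector of the major axis
    theta = math.atan2(vy, vx) % math.pi      # direction folded into [0, pi)
    return major, minor, theta

def params_similar(p, q, angle_thr=math.radians(20), ratio_thr=0.1):
    """Adjacent-window test from the text: parameters are 'similar' when both
    the rotation difference and the minor/major-ratio difference stay under
    the thresholds (20 degrees and 0.1, per claim 4)."""
    d_angle = abs(p[2] - q[2])
    d_angle = min(d_angle, math.pi - d_angle)  # angular distance on [0, pi)
    d_ratio = abs(p[1] / p[0] - q[1] / q[0])
    return d_angle <= angle_thr and d_ratio <= ratio_thr

# A covariance elongated along 30 degrees with axis lengths 2 and 1
t = math.radians(30)
R = np.array([[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]])
cov = R @ np.diag([4.0, 1.0]) @ R.T
print(ellipse_params(cov))
```

Folding the direction into [0, π) avoids the sign ambiguity of the eigenvector when comparing rotation directions between windows.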
The region growing unit 101 is configured to obtain the growth threshold for region growing, performed with the feature point at the center of the scale window as the seed point, by the maximum between-class variance algorithm (Otsu thresholding); and to extract feature points on the corrected image and perform feature description by the scale-invariant feature transform (SIFT) algorithm.
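For the Otsu step, a self-contained histogram implementation might look as follows (a sketch; in practice one would typically call a library routine such as OpenCV's threshold with the THRESH_OTSU flag):

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum between-class variance (Otsu) threshold for an 8-bit image,
    used above to pick the growth threshold for seeded region growing."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean up to each level
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2[np.isnan(sigma_b2)] = 0     # levels where one class is empty
    return int(np.argmax(sigma_b2))      # level maximizing between-class variance

# Bimodal test image: two flat regions at gray levels 50 and 200
img = np.zeros((20, 20), dtype=np.uint8)
img[:, :10] = 50
img[:, 10:] = 200
print(otsu_threshold(img))
```

The returned level then serves as the cut-off deciding which neighbors are absorbed during the seeded region growing.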
The description unit 105 is configured to describe the corrected feature points using a description window of 55 to 65 pixels.
The first extraction unit 100 is specifically configured to calculate the gradients of the image in the x and y directions according to first-order Gaussian derivatives with standard deviation σD; calculate the components Ix2, IxIy, and Iy2 of the image grayscale second moment matrix according to a Gaussian function with standard deviation σI; and extract the feature points of the original image using the Harris feature point extraction algorithm; wherein the standard deviation σD is smaller than the standard deviation σI, and both σD and σI take values between 1 and 2.
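Sketched with SciPy's Gaussian-derivative filters (an assumption about tooling; the patent does not name an implementation), the two-scale Harris response reads:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma_d=1.0, sigma_i=2.0, k=0.04):
    """Harris corner response with derivative scale sigma_d smaller than the
    integration scale sigma_i, as in the first extraction unit above."""
    img = np.asarray(img, dtype=float)
    ix = gaussian_filter(img, sigma_d, order=(0, 1))   # d/dx Gaussian derivative
    iy = gaussian_filter(img, sigma_d, order=(1, 0))   # d/dy Gaussian derivative
    # second moment matrix components, smoothed at the integration scale
    ixx = gaussian_filter(ix * ix, sigma_i)
    ixy = gaussian_filter(ix * iy, sigma_i)
    iyy = gaussian_filter(iy * iy, sigma_i)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# White square on black: response peaks near its corners, stays ~0 on flat areas
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
r = harris_response(img)
```

Feature points would then be taken as local maxima of this response above some quality threshold; the k = 0.04 constant is the customary Harris value, not one stated in the patent.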
As shown in Fig. 4, an embodiment of the present invention further provides a feature extraction apparatus, including: a processor 400, a memory 404, a bus 402, and a communication interface 403; the processor 400, the communication interface 403, and the memory 404 are connected through the bus 402.
The memory 404 is configured to store a program 401;
the processor 400 is configured to execute the program 401 in the memory 404, and the processor 400 receives a data stream through the communication interface 403;
in specific implementation, the program 401 may include program code, and the program code includes computer operation instructions, algorithms, and the like;
the processor 400 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
As shown in Fig. 3, the program 401 may include: a first extraction unit 100, configured to extract feature points of the original image; a region growing unit 101, configured to assign each feature point two or more scale windows of different sizes, the scale windows of each feature point being regions centered on that feature point; in each scale window, region growing is performed with the feature point at the center of the scale window as the seed point, yielding two or more foreground regions for each feature point, the number of foreground regions for each feature point being equal to the number of scale windows assigned to it; a matrix calculation unit 102, configured to calculate the covariance matrix of each foreground region; a correction processing unit 103, configured to perform affine deformation correction on the image according to the covariance matrices of the foreground regions; a second extraction unit 104, configured to extract feature points on the corrected image; and a description unit 105, configured to describe the feature points extracted by the second extraction unit 104.
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes and principles of the apparatus described above, reference may be made to the corresponding processes and principles in the foregoing method embodiments, which are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual connections shown or discussed may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a relevant device to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a mobile phone, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing describes only the preferred embodiments of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A feature extraction method, applied to a feature extraction apparatus, the method comprising:
extracting feature points of an original image;
assigning each feature point two or more scale windows of different sizes, the scale windows being regions centered on each feature point; in each scale window, performing region growing with the feature point at the center of the scale window as a seed point to obtain two or more foreground regions corresponding to each feature point, the number of foreground regions corresponding to each feature point being equal to the number of scale windows assigned to each feature point;
calculating a covariance matrix of each foreground region;
performing affine deformation correction on the original image according to the covariance matrices of the foreground regions; and
extracting corrected feature points on the corrected image, and describing the corrected feature points.
2. The feature extraction method according to claim 1, wherein performing affine deformation correction on the original image according to the covariance matrices of the foreground regions comprises:
performing ellipse fitting according to the covariance matrix of each foreground region to obtain ellipse parameters corresponding to each foreground region;
for each feature point, comparing the ellipse parameters of the foreground regions corresponding to pairwise-adjacent scale windows among the two or more scale windows of different sizes centered on the feature point, and determining whether the difference in ellipse parameters exceeds a preset threshold; if not, considering the ellipse parameters of the foreground regions corresponding to the pairwise-adjacent scale windows to be similar, and choosing the scale window corresponding to one of the similar ellipse parameters as a first representative scale window;
comparing the ellipse parameters corresponding to the remaining scale windows sharing the same center as the first representative scale window with the ellipse parameters corresponding to the first representative scale window; if the difference in ellipse parameters exceeds the preset threshold, taking the scale window corresponding to the ellipse parameters whose difference exceeds the preset threshold as a second representative scale window; and
performing affine deformation correction on the original image according to the covariance matrices of the foreground regions corresponding to the first representative scale window and the second representative scale window.
3. The feature extraction method according to claim 2, wherein the ellipse parameters comprise a major axis, a minor axis, and a principal rotation direction, and comparing the ellipse parameters of the foreground regions corresponding to the pairwise-adjacent scale windows of each feature point and determining whether the difference in ellipse parameters exceeds the preset threshold comprises:
comparing, in size order of the two or more scale windows sharing the same center, the difference in principal rotation direction and the difference in the minor-to-major axis ratio between the ellipse parameters corresponding to pairwise-adjacent scale windows, the size order being descending or ascending order; and
determining whether the difference in principal rotation direction between the ellipse parameters corresponding to the pairwise-adjacent scale windows exceeds the preset threshold, and whether the difference in the minor-to-major axis ratio exceeds the preset threshold;
and comparing the ellipse parameters corresponding to the remaining scale windows sharing the same center as the first representative scale window with the ellipse parameters corresponding to the first representative scale window comprises:
determining whether, between the ellipse parameters corresponding to the remaining scale windows sharing the same center as the first representative scale window and the ellipse parameters corresponding to the first representative scale window, the difference in principal rotation direction exceeds the preset threshold and the difference in the minor-to-major axis ratio exceeds the preset threshold.
4. The feature extraction method according to claim 3, wherein the preset threshold is a difference of 20 degrees in the principal rotation direction of the ellipse parameters and a difference of 0.1 in the minor-to-major axis ratio; the difference in ellipse parameters exceeding the preset threshold means that the difference in principal rotation direction exceeds 20 degrees or the difference in the minor-to-major axis ratio exceeds 0.1.
5. The feature extraction method according to claim 1, wherein the growth threshold for performing region growing with the feature point at the center of the scale window as the seed point is obtained by the maximum between-class variance algorithm (Otsu thresholding).
6. The feature extraction method according to claim 1, wherein extracting corrected feature points on the corrected image and describing the corrected feature points comprises: extracting feature points on the corrected image using the scale-invariant feature transform (SIFT) algorithm, and performing feature description.
7. The feature extraction method according to claim 6, wherein in the step of extracting corrected feature points on the corrected image using the scale-invariant feature transform (SIFT) algorithm and describing the corrected feature points, the description window size for the corrected feature points is 55 to 65 pixels.
8. The feature extraction method according to claim 1, wherein extracting the feature points of the original image comprises:
calculating gradients of the image in the x and y directions according to first-order Gaussian derivatives with standard deviation σD; calculating the components Ix2, IxIy, and Iy2 of the image grayscale second moment matrix according to a Gaussian function with standard deviation σI; and extracting the feature points of the original image using the Harris feature point extraction algorithm;
wherein the standard deviation σD is smaller than the standard deviation σI, and both the standard deviation σD and the standard deviation σI take values between 1 and 2.
9. The feature extraction method according to any one of claims 1 to 8, wherein the scale windows are circular regions centered on each feature point.
10. A feature extraction apparatus, comprising:
a first extraction unit, configured to extract feature points of an original image;
a region growing unit, configured to assign each feature point two or more scale windows of different sizes, the scale windows being regions centered on each feature point; in each scale window, region growing is performed with the feature point at the center of the scale window as a seed point to obtain two or more foreground regions corresponding to each feature point, the number of foreground regions corresponding to each feature point being equal to the number of scale windows assigned to each feature point;
a matrix calculation unit, configured to calculate a covariance matrix of each foreground region;
a correction processing unit, configured to perform affine deformation correction on the original image according to the covariance matrices of the foreground regions;
a second extraction unit, configured to extract corrected feature points on the corrected image; and
a description unit, configured to describe the corrected feature points extracted by the second extraction unit.
CN201410479118.7A 2014-09-18 2014-09-18 Feature extracting method and device Expired - Fee Related CN104268550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410479118.7A CN104268550B (en) 2014-09-18 2014-09-18 Feature extracting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410479118.7A CN104268550B (en) 2014-09-18 2014-09-18 Feature extracting method and device

Publications (2)

Publication Number Publication Date
CN104268550A CN104268550A (en) 2015-01-07
CN104268550B true CN104268550B (en) 2017-08-25

Family

ID=52160070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410479118.7A Expired - Fee Related CN104268550B (en) 2014-09-18 2014-09-18 Feature extracting method and device

Country Status (1)

Country Link
CN (1) CN104268550B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021886B (en) * 2017-12-04 2021-09-14 西南交通大学 Method for matching local significant feature points of repetitive texture image of unmanned aerial vehicle
CN108830842B (en) * 2018-06-04 2022-01-07 哈尔滨工程大学 Medical image processing method based on angular point detection
CN111027544B (en) * 2019-11-29 2023-09-29 武汉虹信技术服务有限责任公司 MSER license plate positioning method and system based on visual saliency detection
CN112966633B (en) * 2021-03-19 2021-10-01 中国测绘科学研究院 Semantic and structural information double-constraint inclined image feature point filtering method
CN113378865B (en) * 2021-08-16 2021-11-05 航天宏图信息技术股份有限公司 Image pyramid matching method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN101915913A (en) * 2010-07-30 2010-12-15 中交第二公路勘察设计研究院有限公司 Steady automatic matching method for high-resolution satellite image connecting points
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN101915913A (en) * 2010-07-30 2010-12-15 中交第二公路勘察设计研究院有限公司 Steady automatic matching method for high-resolution satellite image connecting points
CN102693542A (en) * 2012-05-18 2012-09-26 中国人民解放军信息工程大学 Image characteristic matching method

Non-Patent Citations (4)

Title
Yanwei Pang et al., "Fully affine invariant SURF for image matching", Neurocomputing, vol. 85, pp. 1-6, 30 Dec 2012. *
Xie Ping, "Close-range image matching based on Harris corners and SIFT features", China Master's Theses Full-text Database, Information Science and Technology, no. 12, I138-897, 15 Dec 2011. *
N.A. Mat-Isa et al., "Seeded Region Growing Features Extraction Algorithm; Its Potential Use in Improving Screening for Cervical Cancer", International Journal of The Computer, the Internet and Management, vol. 13, no. 1, pp. 61-70, 2005. *

Also Published As

Publication number Publication date
CN104268550A (en) 2015-01-07

Similar Documents

Publication Publication Date Title
Goshtasby 2-D and 3-D image registration: for medical, remote sensing, and industrial applications
US8160366B2 (en) Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
CN102834845B (en) The method and apparatus calibrated for many camera heads
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN104268550B (en) Feature extracting method and device
CN107301402B (en) Method, device, medium and equipment for determining key frame of real scene
CN106981077B (en) Infrared image and visible light image registration method based on DCE and LSS
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN111369605B (en) Infrared and visible light image registration method and system based on edge features
CN109711268B (en) Face image screening method and device
CN108875504B (en) Image detection method and image detection device based on neural network
CN112712518B (en) Fish counting method and device, electronic equipment and storage medium
CN111738045B (en) Image detection method and device, electronic equipment and storage medium
CN104298995A (en) Three-dimensional face identification device and method based on three-dimensional point cloud
CN111192194A (en) Panoramic image splicing method for curtain wall building vertical face
CN115375842A (en) Plant three-dimensional reconstruction method, terminal and storage medium
CN115601574A (en) Unmanned aerial vehicle image matching method for improving AKAZE characteristics
CN114119437A (en) GMS-based image stitching method for improving moving object distortion
CN105139013A (en) Object recognition method integrating shape features and interest points
CN110516731A (en) A kind of visual odometry feature point detecting method and system based on deep learning
Yao et al. Registrating oblique SAR images based on complementary integrated filtering and multilevel matching
CN115311691B (en) Joint identification method based on wrist vein and wrist texture
CN108960285B (en) Classification model generation method, tongue image classification method and tongue image classification device
CN115035281B (en) Rapid infrared panoramic image stitching method
CN117079272A (en) Bullet bottom socket mark feature identification method combining manual features and learning features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170825

Termination date: 20210918

CF01 Termination of patent right due to non-payment of annual fee