CN106649624A - Local feature point verification method based on global relation consistency constraint - Google Patents

Local feature point verification method based on global relation consistency constraint Download PDF

Info

Publication number
CN106649624A
CN106649624A (Application CN201611109737.2A; granted publication CN106649624B)
Authority
CN
China
Prior art keywords
characteristic point
image
point
local feature
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611109737.2A
Other languages
Chinese (zh)
Other versions
CN106649624B (en)
Inventor
姚金良
杨醒龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201611109737.2A priority Critical patent/CN106649624B/en
Publication of CN106649624A publication Critical patent/CN106649624A/en
Application granted granted Critical
Publication of CN106649624B publication Critical patent/CN106649624B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a local feature point verification method based on a global relation consistency constraint. The method comprises three parts: offline learning, local feature point quantization, and feature point voting verification. Offline learning builds the visual vocabulary dictionary. Local feature point quantization comprises three steps: (1) extraction of local feature points; (2) quantization of the feature descriptors; and (3) quantization of the principal direction, scale, and position. The visual vocabulary verification part uses two methods: weak relation consistency verification and strong geometric verification. Both verify candidate feature points with a voting mechanism and share two steps: (1) obtaining candidate images and candidate feature points; and (2) verifying the candidate feature points by voting. The disclosed method is robust to changes such as image cropping, rotation, and scaling, and can be used in visual-vocabulary-based applications such as image retrieval and classification.

Description

Local feature point verification method based on global relation consistency constraint
Technical field
The invention belongs to the fields of computer image processing and machine vision, and relates to two visual-vocabulary-based local feature point verification methods.
Background art
Image retrieval based on local features is currently the mainstream approach to copy image retrieval: local feature descriptors are quantized into visual vocabularies, and images are represented with a bag-of-words model. This is an important class of methods in present-day image retrieval. Unlike words in natural language, however, the visual vocabulary obtained by quantizing local feature descriptors has no explicit meaning and is easily affected by noise. To guarantee the discriminative power of the visual vocabulary, the more vocabularies in the dictionary the better; but a larger vocabulary weakens noise resistance and increases the computation needed to quantize each local feature into a visual vocabulary. Some researchers have studied the ambiguity problem caused by quantizing local features into visual vocabularies and have proposed partial solutions.
To increase the accuracy of local feature matching and reduce visual vocabulary ambiguity, some researchers have worked on increasing the information carried by each visual vocabulary so as to improve its discriminative power. Using multiple dictionaries to quantize local features can raise the recall rate of image retrieval, but the quantizations of the multiple dictionaries overlap, and these overlaps are redundant data. Large amounts of redundancy not only fail to improve retrieval effectiveness but also hurt retrieval efficiency. To solve this problem, Zheng proposed a Bayesian merging method for multiple dictionaries that reduces the correlation between dictionaries and thus the redundancy. Mortensen added a global texture descriptor to each local feature point so that local features carry global properties; this improves the discriminative power of visual vocabularies, but its robustness to image scale changes is not good enough.
Other researchers have sought to improve the descriptive power of visual vocabularies by modeling the spatial dependencies between visual vocabularies (local features). Yao proposed a context descriptor generation method for visual vocabularies that considers the relations between a feature point and its neighboring feature points; at detection time, the contextual similarity between visual vocabularies is computed to judge whether two visual vocabularies match correctly. Liu proposed two methods based on local features that encode the feature points with one-to-one and one-to-many relations between local feature points, respectively. Both belong to the verify-first family of methods, establishing the relations between feature points before detection, which gives a clear advantage in retrieval speed. Such methods based on neighboring local features, however, often ignore the global relations between feature points.
Taking globality into account, some scholars have in recent years considered global constraints on local features. Zheng found that in image retrieval, the correctly matched visual phrases of the query image and a candidate image have relatively stable position relations; for example, when the coordinates of matched visual phrases are subtracted, the correctly matched points fall into a relatively concentrated region. Zhou used a compact spatial coding method to describe the mutual position relations of visual vocabularies, but that method's support for image rotation is not ideal, requiring position relations to be built in multiple directions to improve robustness to rotation.
To address the low matching accuracy caused by the ambiguity of quantizing local features into visual vocabularies, the two methods of the present invention use global relations between local features to strengthen the discriminative power of visual vocabularies. The methods satisfy the requirements of both compactness and robustness, cope with various image edits and transformations, and achieve good results.
Summary of the invention
The purpose of the present invention is to address the deficiencies of the prior art by providing two local feature point verification methods based on a global relation consistency constraint.
The technical solution adopted by the present invention to solve the technical problem is as follows:
1. A local feature point verification method based on a global relation consistency constraint, characterized by comprising the following three parts: (1) an offline learning part, (2) a feature point quantization part, and (3) a feature point voting verification part. The offline learning part builds the visual vocabulary dictionary; the feature point quantization part quantizes local features according to the visual vocabulary dictionary obtained by offline learning; the feature point voting verification part verifies the feature points in candidate images. The concrete implementation is as follows:
Step (1): offline learning — group and cluster a large number of samples to obtain the visual vocabulary dictionary.
Step (2): quantize the feature points of the query image with the visual vocabulary dictionary to obtain visual vocabularies.
Step (3): match the visual vocabularies of the query image in the index database to obtain candidate feature points, and establish relations by the unique identifiers of the images the candidate feature points belong to, yielding candidate images.
Step (4): verify the feature points with the weak global consistency relation or the strong geometric relation constraint, finally achieving verification of the candidate images.
(1) Offline learning part. The key of this part is the construction of the visual vocabulary dictionary for local features. The concrete steps are as follows:
1-1. Choose a large number of images to build an image library, extract the local feature points and their feature descriptors from the images in the library, and build the extracted feature descriptors into a sample library.
1-2. Obtain the visual vocabulary dictionary from the sample library. Specifically, the feature vectors of the descriptors in the sample library are divided into groups; within each feature group, K class centers are obtained by K-means clustering. Each class center is a feature vector that represents one root of the visual vocabulary, and the K class centers form the root set of that feature group. The root sets built from all the feature groups are combined to obtain the visual vocabulary dictionary.
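The grouped clustering of step 1-2 can be sketched as follows. This is a minimal Python/NumPy illustration under stated assumptions, not the patent's implementation: the K-means loop, the random initialization, and the iteration count are all assumptions.

```python
import numpy as np

def build_dictionary(descriptors, n_groups=4, k=64, n_iter=20, seed=0):
    """Build a grouped visual-vocabulary dictionary: run K-means independently
    on each sub-vector group of the sample descriptors; each group's k class
    centers are its "root set", and the list of root sets is the dictionary."""
    rng = np.random.default_rng(seed)
    n, d = descriptors.shape
    assert d % n_groups == 0
    sub = d // n_groups
    dictionary = []  # one (k, sub) array of class centers ("roots") per group
    for g in range(n_groups):
        data = descriptors[:, g * sub:(g + 1) * sub].astype(float)
        centers = data[rng.choice(n, size=k, replace=False)]
        for _ in range(n_iter):
            # assign each sample to its nearest center (Euclidean distance)
            dist = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dist.argmin(axis=1)
            for c in range(k):
                members = data[labels == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        dictionary.append(centers)
    return dictionary
```

With the embodiment's parameters (32-dimensional descriptors, 4 groups of 8 values, 64 centers per group), each of the 4 arrays would be stored to file and loaded into memory at quantization time.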
(2) Feature point quantization part. The quantization of a feature point includes two parts: quantization of the local feature descriptor, and quantization of the principal direction, scale, and coordinates.
2-1. Quantization of the local feature descriptor: extract the local feature point set S = {P_i, i ∈ [0, Q]} from the input image, where Q is the number of local feature points in the input image and P_i is the i-th local feature point; then, according to the visual vocabulary dictionary, quantize the feature descriptor of each P_i into a visual vocabulary VW_i by the grouped quantization method. The concrete steps are as follows:
2-1-1. Extract from the input image each local feature point P_i's feature descriptor F_i, position (Px_i, Py_i), scale σ_i, and principal direction θ_i; that is, represent the local feature point P_i as [F_i, θ_i, σ_i, Px_i, Py_i].
2-1-2. For the feature descriptor F_i of each local feature point P_i, obtain a visual vocabulary by the grouped quantization method according to the visual vocabulary dictionary. Grouped quantization divides F_i into M groups of D/M features each, where D is the dimension of the feature vector F_i; each group's feature vector is then quantized separately to V_j against the visual vocabulary dictionary trained in step 1-2, and the visual vocabulary VW_i of the feature descriptor F_i is obtained by grouped quantization as:

VW_i = Σ_{j=1}^{M} V_j · L^(j−1)

where L is the number of roots of the corresponding group in the visual vocabulary dictionary. A local feature point P_i is thus represented as [VW_i, θ_i, σ_i, Px_i, Py_i]. Each group's feature vector is quantized by searching the root set of that group for the nearest class center under Euclidean distance and taking the index of that center as the quantization result.
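Step 2-1-2 can be sketched in a few lines. The positional combination of the per-group indices into a single integer code is an assumed reading of the patent's VW_i formula (the original formula did not survive extraction); the nearest-center search under Euclidean distance follows the text.

```python
import numpy as np

def quantize_descriptor(f, dictionary):
    """Grouped quantization: split a D-dim descriptor into M groups, snap each
    group to the nearest class center (root) of that group's root set, and
    combine the M indices into one integer code VW = sum_j V_j * L**j."""
    m = len(dictionary)            # number of groups M
    sub = len(f) // m              # D/M features per group
    L = dictionary[0].shape[0]     # roots per group
    code = 0
    for g in range(m):
        part = np.asarray(f[g * sub:(g + 1) * sub], dtype=float)
        dist = ((dictionary[g] - part) ** 2).sum(axis=1)
        v = int(dist.argmin())     # index of nearest root in this group
        code += v * L ** g         # positional combination of group indices
    return code
```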
2-2. Quantization of the principal direction, scale, and coordinates: the principal direction θ_i mentioned above is a floating-point radian value; it is quantized here into an integer angle value: θ_i = θ_i · 180/π.
Likewise, the position information (Px_i, Py_i) and the scale σ_i are quantized to integers. When quantizing the scale, σ_i is multiplied by 100 and then rounded, preserving a certain precision.
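A minimal sketch of the attribute quantization in step 2-2; the exact rounding behavior (round vs. truncate) is an assumption:

```python
import math

def quantize_attributes(theta, sigma, px, py):
    """Quantize floating-point feature attributes to integers: the principal
    direction from radians to whole degrees, the scale multiplied by 100 and
    rounded (keeping two decimal places of precision), coordinates rounded."""
    return (int(round(theta * 180.0 / math.pi)),  # radians -> integer degrees
            int(round(sigma * 100.0)),            # scale * 100, rounded
            int(round(px)), int(round(py)))       # integer position
```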
(3) Feature point voting verification part. This part is realized in two ways: the first is a local feature point verification method based on a weak global relation consistency constraint; the second is a local feature point verification method based on a strong global geometric relation consistency constraint. Both use a voting mechanism: in a candidate image, if a candidate feature point satisfies the constraint relation simultaneously with a certain proportion of the other feature points, that point is considered a correctly matched feature point.
During feature point verification, the position, principal direction, and scale features of the local feature points are used to verify whether feature points match. The two ways are described below:
3-1. Weak relation consistency verification: this method relies on the principle that the principal directions and relative scales of correctly matched feature points are consistent. The concrete steps are as follows:
3-1-1. Match the visual vocabularies obtained by quantizing the feature points of the query image against the feature points in the index database to obtain a large number of candidate feature points. Build a hash table keyed on the image ID (imgId) the feature points belong to, yielding the candidate images.
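The hash table of step 3-1-1 can be sketched as below; the layout of a match tuple (image ID, query point index, candidate point) is an assumption for illustration.

```python
from collections import defaultdict

def group_candidates(matches):
    """Group candidate feature-point matches by the image ID they belong to,
    i.e. build the hash table keyed on imgId whose keys are the candidate
    images. `matches` is an iterable of (img_id, query_idx, cand_point)."""
    table = defaultdict(list)
    for img_id, q_idx, cand in matches:
        table[img_id].append((q_idx, cand))
    return dict(table)
```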
3-1-2. In the query image, if the scale of one feature point is smaller than that of some contextual feature point, then in the candidate image the corresponding candidate feature points should satisfy the same ordering; the principal directions should simultaneously satisfy the same condition. The formula is as follows:

M_{i,j} = 1 if sgn(Scl_ai − Scl_aj) = sgn(Scl_bi − Scl_bj) and sgn(Ori_ai − Ori_aj) = sgn(Ori_bi − Ori_bj); otherwise M_{i,j} = 0
where Scl denotes scale, Ori denotes principal direction, and subscripts ai, aj denote the i-th and j-th feature points of image a (the corresponding points of candidate image b carry subscripts bi, bj). M_{i,j} = 1 indicates that the two feature points i, j satisfy the consistency relation constraint, and feature point j adds 1 to the poll of feature point i. If a feature point satisfies the constraint with a certain proportion of its contextual feature points, that candidate feature point is considered a correct match. During image retrieval, the voting sum S obtained from the correctly matched feature points is computed:

S = Σ_{V_i > Th} V_i
where V_i = Σ_j M_{i,j} is the poll obtained by feature point i and Th is the voting threshold. The voting sum S of each candidate image is used to measure the similarity of that candidate image.
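The weak consistency constraint and the vote sum can be sketched as below. This is a hedged reading: the original inequalities and the exact form of S did not survive extraction, so the ordering test and the thresholded sum are reconstructions from the surrounding text.

```python
def weak_consistent(scl_ai, scl_aj, ori_ai, ori_aj,
                    scl_bi, scl_bj, ori_bi, ori_bj):
    """M_ij = 1 when the scale ordering and the principal-direction ordering
    of points i, j agree between query image a and candidate image b."""
    same_scale = (scl_ai < scl_aj) == (scl_bi < scl_bj)
    same_ori = (ori_ai < ori_aj) == (ori_bi < ori_bj)
    return int(same_scale and same_ori)

def vote_sum(polls, th):
    """S: total votes of points whose poll V_i exceeds the threshold Th."""
    return sum(v for v in polls if v > th)
```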
3-2. Strong geometric verification: strong geometric verification computes the relations of two angles between feature points, the principal direction difference and the angular displacement. Of the two angles used by the method, the principal direction difference is the difference between the principal directions of two feature points. It can be computed by the following formula:
β=| Orii-Orij|
where Ori_i is the principal direction of the point i to be verified, and Ori_j is the principal direction of a contextual feature point j of point i.
The angular displacement can be computed by the following formula:

α = |Ori_i − arctan2(P_i, P_j)|
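The two angles can be sketched as below. Interpreting arctan2(P_i, P_j) as the quadrant-aware angle of the line from P_i to P_j against the horizontal is a hedged reading of the lost formula; all angles are in degrees here to match the quantized principal direction.

```python
import math

def direction_difference(ori_i, ori_j):
    """beta = |Ori_i - Ori_j|: principal-direction difference (degrees)."""
    return abs(ori_i - ori_j)

def angular_displacement(ori_i, p_i, p_j):
    """alpha: angle between point i's principal direction and the line
    connecting p_i to p_j, whose angle to the horizontal comes from atan2."""
    line = math.degrees(math.atan2(p_j[1] - p_i[1], p_j[0] - p_i[0]))
    return abs(ori_i - line)
```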
where arctan2(P_i, P_j) is the angle between the line connecting the two feature points P_i, P_j and the horizontal direction. The strong geometric relation verification proceeds as follows:
3-2-1. Match the visual vocabularies obtained by quantizing the feature points of the query image against the feature points in the index database to obtain a large number of matched feature points. Build a hash table keyed on the image ID (imgId) the feature points belong to, yielding the candidate images.
3-2-2. Between correctly matched feature points, the principal direction difference of two feature points of the query image and the principal direction difference of the corresponding two feature points of the candidate image should approach equality; likewise, the angular displacements should approach equality. The difference of the principal direction differences between corresponding feature points:

Δβ = |β_a^{i,j} − β_b^{i,j}|
where β_a^{i,j} is the principal direction difference of feature points i and j in image a, and β_b^{i,j} that of the corresponding feature points in image b;
and the difference of the angular displacements:

Δα = |α_a^{i,j} − α_b^{i,j}|

where α_a^{i,j} is the angular displacement of feature points i and j in image a, and α_b^{i,j} that of the corresponding feature points in image b. Both differences should approach 0.
If the two points i, j satisfy the constraint relations, M_{i,j} equals 1. Voting is again used to judge whether a candidate feature point is a correctly matched feature point: if a feature point satisfies this relation with a certain proportion of its contextual feature points in the image, it is considered a correctly matched feature point. The correct match points in each candidate image are counted, and the voting sum S of the correctly matched feature points is computed:

S = Σ_{V_i > Th} V_i
where V_i is the poll obtained by feature point i. The voting sum S of each candidate image is used to measure the similarity of that candidate image.
Relative to the prior art, the present invention has the following beneficial effects:
The present invention can be used for large-scale image retrieval, improving retrieval effectiveness and accuracy. Because the method verifies candidate images through the global relations of local feature points, when a candidate image has too many candidate feature points, only its larger-scale feature points need be used; experiments show this gives better results and higher speed than using all candidate feature points. The method is also robust to image transformations such as scaling, rotation, and cropping.
Description of the drawings
Fig. 1 shows the flow chart of the invention;
Fig. 2 is a schematic diagram of weak relation consistency verification;
Fig. 3 is a schematic diagram of strong geometric verification.
Specific embodiment
In the embodiment, the algorithm adopts the SIFT operator; in the following description, the descriptor of a local feature point always refers to SIFT and this is not pointed out again. The embodiment mainly introduces two methods: the first is a local feature point verification method based on a weak global relation consistency constraint; the second is a local feature point verification method based on a strong global geometric relation consistency constraint. They can be used in image retrieval and in image recognition and detection methods based on local feature points.
Embodiments of the invention are further described below with reference to the accompanying drawings.
Fig. 1 is the flow block diagram of the invention, illustrating the relations among the parts and the overall flow. The two methods comprise the following parts: the offline learning part, the feature point quantization part, and the feature point voting verification part. The offline learning part builds the visual vocabulary dictionary, including obtaining sample points, clustering the sample points, and generating the dictionary. Feature point quantization comprises three parts: local feature point extraction; quantization of the principal direction, position, and scale; and descriptor quantization. The third part, feature point voting verification, comprises two parts: finding candidate images and candidate feature points in the index database, and verifying the candidate feature points by voting.
(1) In Fig. 1, the offline learning part mainly comprises the construction of the visual vocabulary dictionary.
A large number of images is chosen and their feature points are extracted as the sample learning library. The feature vectors of the descriptors in the sample library are then grouped; within each feature group, K class centers are obtained by K-means clustering, each class center being a feature vector that represents one root of the visual vocabulary, and the K class centers form the root set of that feature group. The root sets built from all the feature groups are combined to obtain the visual vocabulary dictionary. In this embodiment, the feature descriptor of a local feature point is divided into 4 groups of 8 feature values each; 64 class centers are built per group by K-means clustering, each class center being a root of that group, and the 4 groups of class centers constitute the visual dictionary of this method. Each group's class centers are stored in an array and saved to a file; the method loads these arrays into memory when quantizing local feature descriptors.
Finally, the local feature descriptors in the sample library are quantized into visual vocabularies by the grouped quantization method according to the visual vocabulary dictionary. In this embodiment, the extracted SIFT feature descriptor is a 32-dimensional feature vector.
(2) In Fig. 1, the concrete steps of the feature point quantization part are as follows:
First, the local feature point set S = {P_i, i ∈ [0, Q]} is extracted from the image, where Q is the number of local feature points in the image; then the feature descriptor of each local feature point P_i is quantized into a visual vocabulary VW_i by the grouped quantization method according to the visual vocabulary dictionary. The concrete steps are as follows:
For the local feature point extraction in Fig. 1, in this embodiment the detected local feature points are described with SIFT descriptors. Described by SIFT, a local feature point P_i is represented as [F_i, θ_i, σ_i, Px_i, Py_i], where F_i is the feature descriptor vector, represented by a gradient histogram; θ_i is the principal direction; σ_i is the scale of the local feature point; and (Px_i, Py_i) is the spatial position of P_i in the image. In this embodiment, F_i is set to a 32-dimensional feature vector. Through local feature point extraction and description, the image is represented as a set of SIFT descriptors.
The feature descriptor quantization in Fig. 1 obtains a visual vocabulary for the feature descriptor F_i of each local feature point by the grouped quantization method according to the visual vocabulary dictionary. Grouped quantization divides F_i (a D-dimensional feature vector) into M groups of D/M features each; each group's feature vector is then quantized separately to V_j against the trained dictionary, and the visual vocabulary VW of the feature descriptor is obtained by grouped quantization as VW_i = Σ_{j=1}^{M} V_j · L^(j−1), where L is the number of roots of the corresponding group in the visual vocabulary dictionary. Through descriptor quantization, a local feature point P_i is represented as [VW_i, θ_i, σ_i, Px_i, Py_i].
Block 22 in Fig. 1 quantizes the other SIFT feature attributes; the attributes to be quantized include the principal direction θ_i, the scale σ_i, and the position (Px_i, Py_i). The principal direction θ_i is a floating-point radian value and is quantized here into an integer angle value: θ_i = θ_i · 180/π.
Likewise, the position information (Px_i, Py_i) and the scale are quantized to integers. When quantizing the scale, it is multiplied by 100 and then rounded, preserving a certain precision.
(3) In Fig. 1, the feature point voting verification part mainly comprises block 31, obtaining candidate images and candidate feature points, and block 32, verifying the candidate feature points by voting. In image retrieval applications, a large number of candidate feature points is obtained from the index database according to the visual vocabularies of the query image's feature points, and these candidate feature points yield candidate images according to the image ID (imgId). The two methods herein verify the candidate feature points in the candidate images through global consistency constraints to achieve image copy detection. Feature point verification in this invention includes two methods: one verifies candidate feature points through the relation consistency of the principal directions and relative scales of correctly matched feature points, called weak relation consistency verification; the other verifies feature points through the two strong geometric relations of principal direction difference and angular displacement between feature points, called strong geometric verification.
3-1. Weak consistency relation verification: according to the principle that the principal directions and scales of correctly matched feature points are consistent, voting is used to verify whether a feature point is a correct match. The concrete steps are as follows:
3-1-1. Match the visual vocabularies obtained by quantizing the feature points of the query image against the feature points in the index database to obtain a large number of matched feature points. Build a hash table keyed on the image ID (imgId) the feature points belong to, yielding the candidate images.
3-1-2. As shown in Fig. 2(a) and 2(b), in the query image, if the scale of one feature point is smaller than that of another feature point, then in the candidate image the corresponding visual vocabularies should satisfy the same constraint; the principal directions should simultaneously satisfy the same condition. If a feature point satisfies this relation with a certain proportion of the feature points in the candidate image, it is considered a correctly matched feature point, as shown in the following two formulas:

Scl_ai < Scl_aj ⇔ Scl_bi < Scl_bj
Ori_ai < Ori_aj ⇔ Ori_bi < Ori_bj
where Scl denotes scale, Ori denotes principal direction, and subscripts ai, aj denote the i-th and j-th feature points of image a. M_{i,j} = 1 indicates that the two feature points i, j satisfy the consistency relation constraint, and feature point j adds 1 to the poll of feature point i. If a feature point satisfies the constraint with a certain proportion of its contextual feature points, it is considered a correct point. During image retrieval, the voting sum S obtained from the correctly matched feature points is computed:

S = Σ_{V_i > Th} V_i
where V_i is the poll obtained by feature point i and Th is the voting threshold. The voting sum of each candidate image is used to measure the similarity of that candidate image.
The pseudocode of the concrete implementation procedure is as follows:
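The pseudocode listing did not survive extraction; a minimal Python sketch of the weak-consistency voting over one candidate image is given below. The acceptance `ratio` and the data layout (id → (scale, orientation)) are illustrative assumptions, not values from the patent.

```python
def weak_verify(query_pts, cand_pts, pairs, ratio=0.5):
    """Vote-based weak relation consistency verification for one candidate
    image. query_pts / cand_pts map point id -> (scale, orientation);
    `pairs` lists the matched (query_id, cand_id) pairs. A match is accepted
    when at least `ratio` of the other matches agree with it in both scale
    ordering and orientation ordering."""
    accepted = []
    for qi, ci in pairs:
        votes = 0
        others = [(qj, cj) for qj, cj in pairs if (qj, cj) != (qi, ci)]
        for qj, cj in others:
            s_ok = ((query_pts[qi][0] < query_pts[qj][0]) ==
                    (cand_pts[ci][0] < cand_pts[cj][0]))
            o_ok = ((query_pts[qi][1] < query_pts[qj][1]) ==
                    (cand_pts[ci][1] < cand_pts[cj][1]))
            votes += int(s_ok and o_ok)
        if others and votes / len(others) >= ratio:
            accepted.append((qi, ci, votes))
    return accepted
```

Summing the vote counts of the accepted points over the whole candidate image then gives the similarity score S of that image.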
3-2. Strong geometric verification: strong geometric verification computes the relations of two angles between feature points; the feature point attributes used include the principal direction and the position. Of the two angles used by the method, the principal direction difference is the difference between the principal directions of two feature points, and the angular displacement is the angle between the principal direction and the line connecting the two feature points. The principal direction difference can be computed by the following formula:
β=| Orii-Orij|
The angular displacement can be computed by the following formula:

α = |Ori_i − arctan2(P_i, P_j)|

where arctan2(P_i, P_j) is the angle between the line connecting the two feature points and the horizontal direction.
3-2-1. Similarly to step 3-1-1, the visual vocabularies obtained by quantizing the feature points of the query image are matched against the feature points in the index database to obtain a large number of matched feature points. A hash table is built keyed on the image ID (imgId) the feature points belong to, yielding the candidate images.
3-2-2. Between correctly matched feature points, the principal direction difference of two feature points of the query image and the principal direction difference of the corresponding two feature points of the candidate image should approach equality; likewise, the angular displacements should approach equality. The difference of the principal direction differences between corresponding feature points:

Δβ = |β_a^{i,j} − β_b^{i,j}|

and the difference of the angular displacements:

Δα = |α_a^{i,j} − α_b^{i,j}|

should both approach 0.
If the two points i and j satisfy the constraint relation, then M_i,j equals 1. Voting is again used to judge whether a candidate feature point is a correctly matched feature point. If a certain feature point satisfies this relation with a certain proportion of the other candidate feature points in the image, this feature point is regarded as a correctly matched feature point. The correct match points in each candidate image are counted, and the vote sum S of the correctly matched feature points is computed.
Here, the summed quantity is the vote count obtained by feature point i. The vote sum of each candidate image is used to measure the similarity of that candidate image.
The pseudocode of the concrete implementation procedure is as follows:
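A Python sketch of the strong geometric voting described in steps 3-2-1 to 3-2-2; the match representation, the angle tolerance `eps`, and the context ratio `th_ratio` are illustrative assumptions, not part of the original method:

```python
import math

def ang_diff(a, b):
    """Smallest absolute difference between two angles in radians."""
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def strong_verify(matches, eps=0.1, th_ratio=0.5):
    """Strong geometric verification by principal direction difference
    and angular displacement.

    `matches`: list of ((qx, qy, q_ori), (cx, cy, c_ori)) pairs for one
    candidate image (query-side and candidate-side position and
    principal direction, radians).  Returns the vote sum S.
    """
    n = len(matches)
    votes = [0] * n
    for i in range(n):
        (qxi, qyi, qoi), (cxi, cyi, coi) = matches[i]
        for j in range(n):
            if i == j:
                continue
            (qxj, qyj, qoj), (cxj, cyj, coj) = matches[j]
            # beta: principal direction difference on each side
            beta_a = abs(qoi - qoj)
            beta_b = abs(coi - coj)
            # alpha: angle between the line i->j and i's principal direction
            alpha_a = abs(math.atan2(qyj - qyi, qxj - qxi) - qoi)
            alpha_b = abs(math.atan2(cyj - cyi, cxj - cxi) - coi)
            # M_{i,j} = 1 when Dif_Ori and Dif_Dir both tend to 0
            if ang_diff(beta_a, beta_b) < eps and ang_diff(alpha_a, alpha_b) < eps:
                votes[i] += 1  # context point j votes for point i
    th = th_ratio * (n - 1)
    return sum(v for v in votes if v >= th)
```

For an untransformed copy (identical positions and orientations on both sides) every pair satisfies both constraints, so every vote counts toward S.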
The embodiments of the present invention have been specifically described above. It will be appreciated that one of ordinary skill in the art may, without departing from the scope of the present invention, make changes and adjustments within the scope of the above description and of the invention as particularly set out in the claims, and equally achieve the purpose of the present invention.

Claims (5)

1. A local feature point verification method based on global relation consistency constraint, characterized by comprising the following three parts: (1) an off-line learning part, (2) a feature point quantization part, (3) a feature point voting verification part; the off-line learning part is used to build a visual vocabulary dictionary; the feature point quantization part quantizes local features according to the visual vocabulary dictionary obtained by off-line learning; the feature point voting verification part verifies the feature points in candidate images; the method is implemented as follows:
Step (1): off-line learning; a visual vocabulary dictionary is obtained by grouped clustering of a large number of samples.
Step (2): the feature points of the query image are quantized by the visual vocabulary dictionary to obtain visual vocabularies.
Step (3): the visual vocabularies of the query image are matched in the index database to obtain candidate feature points; relations are established using the unique identifiers of the images to which the candidate feature points belong, yielding several candidate images.
Step (4): the feature points are verified by the global consistency weak relation constraint or the strong geometric relation constraint, thereby verifying the candidate images.
2. The local feature point verification method based on global relation consistency constraint according to claim 1, characterized in that: step (4) verifies the feature points using the global consistency weak relation constraint; according to the principle that the relative order of the principal directions and scales of correctly matched feature points is consistent, voting is adopted to verify whether a feature point is a correctly matched feature point. In the query image, if the scale of a certain feature point is smaller than that of another feature point, then in a correct candidate image the corresponding feature points should satisfy the same relation; likewise, the principal directions should also satisfy this condition. If a certain feature point and a certain proportion of the other feature points in the candidate image simultaneously satisfy this relation, this feature point is regarded as a correctly matched feature point. The concrete steps are as follows:
2-1. The visual vocabularies obtained by quantizing the feature points of the query image are matched with the visual vocabularies in the index database to obtain a large number of candidate feature points; a hash table is built with the image ID to which each feature point belongs as the key, and several candidate images are found.
2-2. The feature points are verified by the constraint condition: in the verification process, for feature point i, if a contextual feature point j and i satisfy the constraint relation, the vote count of feature point i is increased by 1. Correct match points are obtained by the voting verification method, and the vote sum of the correctly matched feature points is computed:
Here, Th is the voting threshold applied to the vote count obtained by each feature point i. When selecting results, several images are chosen in descending order of the vote sums of the candidate images as copy images.
3. The local feature point verification method based on global relation consistency constraint according to claim 1, characterized in that: step (4) verifies the feature points using the strong geometric relation constraint; the feature points are verified according to the principle that strong geometric relations such as the principal direction difference and the angular displacement between correctly matched feature points satisfy the consistency constraint. The principal direction difference is β = |Ori_i - Ori_j|, where Ori_i is the principal direction of the point i to be verified and Ori_j is the principal direction of the contextual feature point j of point i; the angular displacement is the angle between the line connecting the feature point to be verified with its contextual feature point and the principal direction of the feature point to be verified, calculated by the following formula: α = |arctan2(P_i, P_j) - Ori_Pi|, where arctan2(P_i, P_j) computes the angle between the line through the two points P_i and P_j and the horizontal direction. The concrete steps are as follows:
3-1. The visual vocabularies obtained by quantizing the feature points of the query image are matched with the indexed feature points in the index database, yielding a large number of matched feature points. A hash table is built with the image ID to which these feature points belong as the key, and several candidate images are found.
3-2. Since SIFT feature points are robust to rotation, the principal direction difference between two feature points of the query image and the principal direction difference between the corresponding two feature points of the candidate image should tend to be equal; likewise, the angular displacements should tend to be equal. The difference of the principal direction differences between corresponding feature points, Dif_Ori_i,j = |β_aij - β_bij|, and the difference of the angular displacements, Dif_Dir = |α_aij - α_bij|, should tend to 0.
If feature points i and j satisfy the constraint relation, then M_i,j equals 1.
Voting is likewise adopted to judge whether a candidate feature point is a correctly matched feature point, and the vote sum S of the correctly matched feature points is computed.
Here, the summed quantity is the vote count obtained by feature point i. When selecting results, several images are chosen in descending order of the vote sums of the candidate images as copy images.
4. The local feature point verification method based on global relation consistency constraint according to claim 1, characterized in that the off-line learning part is implemented by the following steps:
1-1. A large number of images are selected to build an image library, the local feature points of the images in the library and their feature descriptors are extracted, and the extracted feature descriptors are built into a sample library;
1-2. The visual vocabulary dictionary is obtained from the sample library; specifically, the feature vectors of the feature descriptors in the sample library are grouped, K class centers are obtained in each feature group by K-means clustering, each class center being a feature vector that represents one word of the visual vocabulary, and the K class centers form the word set of that feature group; the word sets built in all feature groups are combined to obtain the visual vocabulary dictionary.
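Steps 1-1 and 1-2 can be sketched as follows; the group count, dictionary size, and the plain Lloyd's K-means below are illustrative stand-ins for whatever clustering configuration an implementation would choose:

```python
import numpy as np

def build_dictionary(descriptors, m_groups=4, k=8, iters=10, seed=0):
    """Build a grouped visual-vocabulary dictionary.

    `descriptors`: (N, D) array of feature descriptors from the sample
    library.  Each descriptor is split into `m_groups` groups of D/M
    dimensions; plain Lloyd's K-means run per group yields K class
    centers, the word set of that group.  The combined per-group word
    sets form the visual vocabulary dictionary.
    """
    rng = np.random.default_rng(seed)
    n, d = descriptors.shape
    assert d % m_groups == 0
    sub = d // m_groups
    dictionary = []
    for g in range(m_groups):
        x = descriptors[:, g * sub:(g + 1) * sub]
        # initialize centers from k distinct sample sub-vectors
        centers = x[rng.choice(n, size=k, replace=False)].copy()
        for _ in range(iters):
            # assign each sub-vector to its nearest center (Euclidean)
            dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = dist.argmin(1)
            for c in range(k):
                pts = x[labels == c]
                if len(pts):
                    centers[c] = pts.mean(0)
        dictionary.append(centers)  # word set of group g
    return dictionary  # list of (K, D/M) center arrays
```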
5. The local feature point verification method based on global relation consistency constraint according to claim 1, characterized in that the quantization of feature points in the feature point quantization part includes two parts: the quantization of the local feature descriptor, and the quantization of the principal direction, scale and coordinates.
2-1. Local feature descriptor quantization: a local feature point set S = {P_i, i ∈ [0, Q]} is extracted from the input image, where Q is the number of local feature points in the input image and P_i denotes the i-th local feature point; the feature descriptor of each local feature point P_i is quantized into a visual vocabulary VW_i by the grouped quantization method according to the visual vocabulary dictionary; the concrete steps are as follows:
2-1-1. The feature descriptor F_i, position (Px_i, Py_i), scale σ_i and principal direction θ_i of each local feature point P_i are extracted from the input image, i.e. local feature point P_i is expressed as [F_i, θ_i, σ_i, Px_i, Py_i];
2-1-2. For the feature descriptor F_i of each local feature point P_i, the visual vocabulary is obtained by the grouped quantization method according to the visual vocabulary dictionary. The grouped quantization divides the feature descriptor F_i into M groups of D/M features each, where D is the dimension of the feature vector of the feature descriptor F_i; each group of the feature vector is then individually quantized into V_j according to the visual vocabulary dictionary trained in step 1-2, and the visual vocabulary VW_i of the feature descriptor F_i is obtained from the group quantization results, where L is the number of words of the corresponding group in the visual vocabulary dictionary; thus a local feature point P_i is represented as [VW_i, θ_i, σ_i, Px_i, Py_i]. The quantization of each group of the feature vector searches for the nearest class center in the word set of that group based on Euclidean distance, and takes the subscript of that center as its quantization result;
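A sketch of step 2-1-2. The original formula combining the per-group indices V_j into a single VW_i is not reproduced in the text; the base-L positional encoding below is one plausible reading and is an assumption:

```python
import numpy as np

def quantize_descriptor(f, dictionary):
    """Grouped quantization of one descriptor F_i.

    `dictionary`: list of M arrays of shape (L, D/M) from off-line
    learning.  Each group of F_i is mapped to the index of its nearest
    class center (Euclidean distance); the indices are then packed into
    a single visual-vocabulary id VW_i by base-L positional encoding
    (an assumed encoding, since the original formula is not given).
    """
    m = len(dictionary)
    sub = len(f) // m
    vw = 0
    for g, centers in enumerate(dictionary):
        x = f[g * sub:(g + 1) * sub]
        dist = ((centers - x) ** 2).sum(1)  # Euclidean distance to each word
        v_j = int(dist.argmin())            # nearest word index in this group
        vw = vw * len(centers) + v_j        # pack indices base L
    return vw
```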
2-2. Quantization of the principal direction, scale and coordinates: the principal direction θ_i mentioned above is a floating-point radian value; here it is quantized into an integer angle value: θ_i = θ_i * 180/π.
Likewise, the position information (Px_i, Py_i) and the scale σ_i are quantized to integers. In scale quantization, σ_i is multiplied by 100 and then rounded, retaining a certain precision.
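The geometric quantization of step 2-2 can be sketched directly:

```python
import math

def quantize_geometry(theta, sigma, px, py):
    """Quantize a feature point's geometry per step 2-2:
    principal direction from radians to integer degrees,
    scale multiplied by 100 then rounded (keeping two decimal digits),
    and coordinates rounded to integers."""
    theta_deg = int(round(theta * 180.0 / math.pi))
    scale_q = int(round(sigma * 100))
    return theta_deg, scale_q, int(round(px)), int(round(py))
```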
CN201611109737.2A 2016-12-06 2016-12-06 Local feature point verification method based on global relationship consistency constraint Active CN106649624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611109737.2A CN106649624B (en) 2016-12-06 2016-12-06 Local feature point verification method based on global relationship consistency constraint


Publications (2)

Publication Number Publication Date
CN106649624A true CN106649624A (en) 2017-05-10
CN106649624B CN106649624B (en) 2020-03-03

Family

ID=58818347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611109737.2A Active CN106649624B (en) 2016-12-06 2016-12-06 Local feature point verification method based on global relationship consistency constraint

Country Status (1)

Country Link
CN (1) CN106649624B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036012A (en) * 2014-06-24 2014-09-10 中国科学院计算技术研究所 Dictionary learning method, visual word bag characteristic extracting method and retrieval system
CN104573681A (en) * 2015-02-11 2015-04-29 成都果豆数字娱乐有限公司 Face recognition method
CN105678349A (en) * 2016-01-04 2016-06-15 杭州电子科技大学 Method for generating context descriptors of visual vocabulary


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAO J L 等: "Near-duplicate image retrieval based on contextual descriptor", 《IEEE SIGNAL PROCESSING LETTERS》 *
周文罡: "基于局部特征的视觉上下文分析及其应用", 《中国科学技术大学:信号与信息处理》 *

Also Published As

Publication number Publication date
CN106649624B (en) 2020-03-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20170510

Assignee: Hangzhou Zihong Technology Co., Ltd

Assignor: Hangzhou University of Electronic Science and technology

Contract record no.: X2021330000654

Denomination of invention: Local feature point verification method based on global relationship consistency constraint

Granted publication date: 20200303

License type: Common License

Record date: 20211104
