CN105551022B - An image mismatch detection method based on the shape interaction matrix - Google Patents
- Publication number: CN105551022B (application CN201510888480.4A)
- Authority: CN (China)
- Legal status: Active (an assumption by Google, not a legal conclusion)
Classifications
- G06T3/02
Abstract
The invention discloses an image mismatch detection method based on the shape interaction matrix (SIM). From the feature points matched between two images, the two shape interaction matrices of the images over standardized homogeneous coordinates are computed, and the column-wise difference between the two matrices is measured by the Euclidean distance method or the cosine similarity method to obtain the erroneous matches between the two images. After the erroneous matches are removed, the remaining correct matches can be processed further according to the application background. The method provided by the invention can be applied to fields such as image retrieval, 3-D point-cloud registration, and object recognition in images and video, which broadens its applicability; the model is simple and theoretically well founded, it is strongly robust to affine geometric distortion, and its real-time performance is notable, making it suitable for applications with strict real-time requirements.
Description
Technical field
The present invention relates to image matching and image retrieval, and in particular to an image mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM). The method is invariant under affine transformations and is used to detect erroneous matches among the local feature points matched between a pair of images.
Background technology
Removing erroneous matches from two given sets of matched points is a basic problem in computer vision, pattern recognition, and multimedia, with wide applications such as structure from motion, image registration, stereo matching, tracking, object recognition, and partial-duplicate image retrieval. In all of these application scenarios, the first step is to obtain the pairwise correspondence between point sets. The point sets here may be local feature points extracted from two-dimensional images, or three-dimensional key points extracted from three-dimensional point-cloud data. The candidate correspondences between points are usually determined under two assumed conditions: one is the similarity assumption, the other the spatial-consistency assumption. The similarity assumption requires that the descriptors of the two points in a match be as similar to each other as possible; the spatial-consistency assumption requires that all correct matches satisfy one common geometric transformation. In general, the similarity assumption is used as a sufficient condition to select the pool of candidate matches, and the spatial-consistency assumption is used as a necessary condition to filter the erroneous matches out of the candidates. Erroneous matches arise mainly for the following three reasons:
(1) An unknown geometric transformation generally exists between the two point sets; it may include translation, rotation, scaling, shear, and projective transformation.
(2) Because of objective conditions in the two-dimensional image or three-dimensional scene, such as object tone, illumination, material, and texture, the positions of the features found by the corresponding two- or three-dimensional feature detectors usually carry localization error.
(3) Content in the two-dimensional image or three-dimensional scene may be mutually occluded, which introduces many interfering feature points that have no true counterpart.
Because of the presence of erroneous matches, matching precision declines, and this has become a bottleneck limiting performance in many applications. A mismatch-removal method therefore has to cope with the heavy noise caused by the factors above, with large numbers of outliers, and with strong geometric transformations, in order to handle complex application scenarios. To solve this problem, many methods that detect and remove erroneous matches under the spatial-consistency assumption have been proposed; we classify these methods into two categories: iterative fitting methods and non-iterative filtering methods.
Methods of the first class estimate, through alternating iteration, both a geometric transformation model and the set of correct matches at the same time. Because the estimated transformation model is strongly affected by the proportion of outliers, the main difficulty these methods face is how to remove erroneous matches robustly when the outlier ratio is high. The classical random sample consensus method (RANSAC, see reference [1]) and its variant, the maximum-likelihood estimation sample consensus method (MLESAC, see reference [2]), can estimate an affine or projective transformation model and remove erroneous matches when the outlier ratio is not too high. The correspondence-function method (ICF, see reference [3]) and the three methods based on vector-field consensus, VFC, Fast-VFC, and Sparse-VFC (see reference [4]), can even estimate a non-rigid transformation model and remove erroneous matches when the outlier ratio is high. Methods of this class can estimate complex geometric transformation models by iteration, but for the same reason they are computationally time-consuming.
Methods of the second class instead use geometric priors directly, in a non-iterative way, to filter out mismatched points without accurately estimating a geometric transformation model. Because such methods are cheap, they have long been widely used in large-scale image retrieval to improve retrieval precision, and researchers in that field have proposed many effective methods. Among them, the methods based on a similarity-transformation prior include the weak geometric consistency method (WGC, see reference [5]), the enhanced weak geometric consistency method (EWGC, see reference [6]), the strong geometric consistency method (SGC, see reference [7]), and the pairwise geometric matching method (PGM, see reference [8]). These methods use the dominant orientation and characteristic scale of the feature points to filter erroneous matches effectively, but they are all filters designed from the viewpoint of isolated matching pairs. Methods based on a stronger similarity-transformation model include the geometric coding method (GC, see reference [9]), the low-rank global geometric consistency method (LRGGC, see reference [10]), and the l1-norm global geometric consistency method (L1GGC, see reference [11]); these can effectively exploit the global spatial information among the feature points and use it to filter erroneous matches uniformly and quickly. However, when the geometric transformation between the point sets is more complex (affine or projective), they often fail to achieve good results.
References:
[1] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[2] P. H. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
[3] X. Li and Z. Hu, "Rejecting mismatches by correspondence function," International Journal of Computer Vision, vol. 89, no. 1, pp. 1–17, 2010.
[4] J. Ma, J. Zhao, J. Tian, A. Yuille, and Z. Tu, "Robust point matching via vector field consensus," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1706–1721, 2014.
[5] H. Jegou, M. Douze, and C. Schmid, "Hamming embedding and weak geometric consistency for large scale image search," in European Conference on Computer Vision, 2008, vol. 5302, pp. 304–317.
[6] W.-L. Zhao, X. Wu, and C.-W. Ngo, "On the annotation of web videos by efficient near-duplicate search," IEEE Transactions on Multimedia, vol. 12, no. 5, pp. 448–461, 2010.
[7] J. Wang, J. Tang, and Y.-G. Jiang, "Strong geometrical consistency in large scale partial-duplicate image search," in Proceedings of the 21st ACM International Conference on Multimedia, 2013, pp. 633–636.
[8] X. Li, M. Larson, and A. Hanjalic, "Pairwise geometric matching for large-scale object retrieval," in Computer Vision and Pattern Recognition, 2015.
[9] W. Zhou, H. Li, Y. Lu, and Q. Tian, "SIFT match verification by geometric coding for large scale partial-duplicate web image search," ACM Trans. on Multimedia Comput. Commun. Appl., vol. 9, no. 1, pp. 4:1–4:18, 2013.
[10] L. Yang, Y. Lin, Z. Lin, and H. Zha, "Low rank global geometric consistency for partial-duplicate image search," in International Conference on Pattern Recognition, 2014, pp. 3939–3944.
[11] Y. Lin, C. Xu, L. Yang, Z. Lin, and H. Zha, "l1-norm global geometric consistency for partial-duplicate image retrieval," in International Conference on Image Processing, 2014, pp. 3033–3037.
[12] D. Nister and H. Stewenius, "Scalable recognition with a vocabulary tree," in Computer Vision and Pattern Recognition, vol. 2, 2006, pp. 2161–2168.
[13] G. Yu and J.-M. Morel, "ASIFT: An algorithm for fully affine invariant comparison," Image Processing On Line, vol. 1, 2011, http://dx.doi.org/10.5201/ipol.2011.my-asift.
[14] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
Summary of the invention
To overcome the deficiencies of the prior art described above, the present invention provides a mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM) that is invariant under affine transformations: using the shape interaction matrix, the erroneous matches are obtained by comparing the position information of the feature points. The model of the invention is simple, theoretically well founded, and notable in real-time performance.
The technical scheme provided by the invention is as follows:
An image mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM): from the feature points matched between two images, compute the two shape interaction matrices of the two images over homogeneous coordinates, then compare the two shape interaction matrices column by column to obtain the erroneous matches between the two images. The method comprises the following steps:
1) Input two images as the images to be compared, of which one is the query picture and the other the reference picture. Use the affine scale-invariant feature transform (Affine-SIFT, see reference [13]) to extract the local feature points insensitive to affine transformation in the two images to be compared, and compute the 128-dimensional SIFT local feature descriptor of each feature point. Following the feature-matching method of SIFT (see reference [14]), find the pairs of feature points whose descriptors are close to each other in Euclidean distance while being far from the descriptors of all other feature points; these serve as the putative matched feature pairs between the two images. Then collect the coordinates of the matched feature points in the two images, represented by formula 1 and formula 2 respectively:
$$X_1 = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ y_{11} & y_{12} & \cdots & y_{1n} \end{pmatrix}$$ (formula 1)

$$X_2 = \begin{pmatrix} x_{21} & x_{22} & \cdots & x_{2n} \\ y_{21} & y_{22} & \cdots & y_{2n} \end{pmatrix}$$ (formula 2)

In formula 1 and formula 2, $x_{ij}$ and $y_{ij}$ are the coordinate values of the feature points in the two images; corresponding columns of $X_1$ and $X_2$ give the coordinates of one matched pair of feature points (for example, the point with coordinates $(x_{11}, y_{11})$ in one image and the point with coordinates $(x_{21}, y_{21})$ in the other image are matched to each other, and so on); $n$ is the total number of matches.
Next, standardize the two coordinate sets $X_1$ and $X_2$ to obtain the standardized homogeneous coordinates $\bar{X}_1$ and $\bar{X}_2$:
$$\bar{X}_1 = \begin{pmatrix} \bar{x}_{11} & \cdots & \bar{x}_{1n} \\ \bar{y}_{11} & \cdots & \bar{y}_{1n} \\ 1 & \cdots & 1 \end{pmatrix}$$ (formula 3)

$$\bar{X}_2 = \begin{pmatrix} \bar{x}_{21} & \cdots & \bar{x}_{2n} \\ \bar{y}_{21} & \cdots & \bar{y}_{2n} \\ 1 & \cdots & 1 \end{pmatrix}$$ (formula 4)

In formula 3 and formula 4, the third row of the homogeneous coordinates is identically 1, i.e. $[\bar{X}]_{3,c} \equiv 1$; for $i \in \{1,2\}$ and $j \in [1,n]$, $\bar{x}_{ij}$ and $\bar{y}_{ij}$ are the standardized coordinate values, computed as:

$$\bar{x}_{ij} = \frac{x_{ij} - \mu_{x_i}}{\sigma_{x_i}}$$ (formula 5)

$$\bar{y}_{ij} = \frac{y_{ij} - \mu_{y_i}}{\sigma_{y_i}}$$ (formula 6)

In formula 5 and formula 6, $\mu_{x_i}$, $\mu_{y_i}$ and $\sigma_{x_i}$, $\sigma_{y_i}$ are respectively the means and standard deviations of $x_{ij}$ and $y_{ij}$, computed as:

$$\mu_{x_i} = \frac{1}{n} \sum_{j=1}^{n} x_{ij}$$ (formula 7)

$$\mu_{y_i} = \frac{1}{n} \sum_{j=1}^{n} y_{ij}$$ (formula 8)

$$\sigma_{x_i} = \sqrt{\frac{1}{n} \sum_{j=1}^{n} (x_{ij} - \mu_{x_i})^2}$$ (formula 9)

$$\sigma_{y_i} = \sqrt{\frac{1}{n} \sum_{j=1}^{n} (y_{ij} - \mu_{y_i})^2}$$ (formula 10)
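The standardization step simply normalizes each coordinate row to zero mean and unit variance and appends the row of ones. A minimal NumPy sketch of this step (the function name and array layout are illustrative assumptions, not from the patent):

```python
import numpy as np

def normalize_homogeneous(P):
    """Standardize 2 x n coordinates and append a row of ones.

    P: 2 x n array of feature-point coordinates, one column per point.
    Returns the 3 x n standardized homogeneous coordinate matrix.
    """
    mu = P.mean(axis=1, keepdims=True)     # per-row means
    sigma = P.std(axis=1, keepdims=True)   # per-row standard deviations (1/n convention)
    P_bar = (P - mu) / sigma               # standardized coordinates
    return np.vstack([P_bar, np.ones(P.shape[1])])  # third row fixed at 1
```

Note that `np.std` uses the same $1/n$ convention as formulas 9 and 10.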
2) Compute the two shape interaction matrices of the two images over the homogeneous coordinates; the two shape interaction matrices are obtained from formula 11 and formula 12 respectively:

$$Z_1 = \bar{X}_1^{T} (\bar{X}_1 \bar{X}_1^{T})^{-1} \bar{X}_1$$ (formula 11)

$$Z_2 = \bar{X}_2^{T} (\bar{X}_2 \bar{X}_2^{T})^{-1} \bar{X}_2$$ (formula 12)

In formula 11 and formula 12, each column of $\bar{X}_1$ and $\bar{X}_2$ holds the homogeneous coordinates of one matched pair of feature points; $Z_1$ and $Z_2$ are the two shape interaction matrices, defined by formula 13 and formula 14 respectively:

$$Z_1 = \bar{X}_1^{\dagger} \bar{X}_1$$ (formula 13)

$$Z_2 = \bar{X}_2^{\dagger} \bar{X}_2$$ (formula 14)

In formula 13 and formula 14, $\dagger$ denotes the Moore–Penrose generalized inverse of a matrix; when the rows of $X$ are linearly independent, i.e. when $X X^{T}$ is invertible, $X^{\dagger}$ can be expressed as formula 15:

$$X^{\dagger} = X^{T} (X X^{T})^{-1}$$ (formula 15)
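When the rows of $\bar{X}$ are linearly independent, the closed-form pseudo-inverse makes the shape interaction matrix an $n \times n$ orthogonal projection onto the row space of $\bar{X}$, so it satisfies $Z = Z^{T}$ and $Z^{2} = Z$. A hedged sketch under that full-row-rank assumption:

```python
import numpy as np

def shape_interaction_matrix(X_bar):
    """Z = X^T (X X^T)^{-1} X: the n x n projection onto the row space
    of the 3 x n homogeneous coordinate matrix X_bar (full row rank assumed)."""
    return X_bar.T @ np.linalg.inv(X_bar @ X_bar.T) @ X_bar
```

Since $Z$ depends only on the row space of $\bar{X}$, this single formula serves for both images.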
3) Compute the column-wise difference between the two shape interaction matrices by the Euclidean distance method or the cosine similarity method, obtaining the erroneous matches between the two images.
In the above image mismatch detection method based on the shape interaction matrix, further, the reference picture is a picture retrieved from an image database.
In the above image mismatch detection method based on the shape interaction matrix, further, the extraction of the local feature points in the two images to be compared in step 1) is performed specifically by the Affine-SIFT method, described in "G. Yu and J.-M. Morel, 'ASIFT: An algorithm for fully affine invariant comparison,' Image Processing On Line, vol. 1, 2011"; the matching between the local feature points in step 1) specifically follows the method described in "D. G. Lowe, 'Distinctive image features from scale-invariant keypoints,' International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004."
In the above image mismatch detection method based on the shape interaction matrix, further, the Euclidean distance method of step 3) computes the Euclidean distances between the shape interaction matrices $Z_1$ and $Z_2$, sets a threshold cut-off point, and compares against it to obtain the erroneous matches between the two images, specifically as follows:

11) Compute the Euclidean distance $d_i$ between the $i$-th column vector of $Z_1$ and the $i$-th column vector of $Z_2$:

$$d_i = \left\| [Z_1]_{:,i} - [Z_2]_{:,i} \right\|_2^2$$ (formula 16)

In formula 16, $Z_1$ and $Z_2$ are the two shape interaction matrices; $[\cdot]_{:,i}$ denotes the $i$-th column vector of a matrix and $\|\cdot\|_2^2$ the squared two-norm of a vector; $i \in [1,n]$, i.e. $n$ Euclidean distance values are computed in total, where $n$ is the total number of matches.

12) Sort the Euclidean distances $d_i$ between the column vectors in descending order, as in formula 17:

$$d\_sort = \mathrm{SORT}(d)$$ (formula 17)

In formula 17, $\mathrm{SORT}(\cdot)$ sorts a set of values in descending order; $d\_sort$ is the sorted sequence, satisfying $d\_sort_i > d\_sort_{i+1}$.

13) Find the position of the point of the sorted distance curve closest to the coordinate origin and take that position as the threshold cut-off point; the threshold cut-off point is determined by formula 18 (as shown in Fig. 3):

$$i_t = \arg\min_{i \in [1,n]} \sqrt{\left(\frac{i}{n}\right)^2 + \left(\frac{d\_sort_i}{\max_{j \in [1,n]} d\_sort_j}\right)^2}$$ (formula 18)

In formula 18, $i_t$ indicates that the threshold cut-off point lies at the $i_t$-th entry of $d\_sort$; $d\_sort_i$ is the sorted Euclidean distance between the $i$-th column vectors of $Z_1$ and $Z_2$; $\max_{j \in [1,n]} d\_sort_j$ is the largest of the $n$ values of $d\_sort$.

14) Having obtained the threshold cut-off point $i_t$, all matches whose distance values in $d\_sort$ are greater than $d\_sort_{i_t}$ — the first $k$ distance values — are declared erroneous matches, which yields the erroneous matches between the two images.
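Steps 11)–14) can be sketched as follows. The exact scaling inside the cut-off rule is not fully specified in the text, so this sketch normalizes both axes of the sorted-distance curve to [0, 1] before finding the point closest to the origin — an assumption, not the patent's definitive rule:

```python
import numpy as np

def mismatches_by_euclidean(Z1, Z2):
    """Flag mismatches by column-wise Euclidean comparison of two SIMs.
    Returns the indices of matches declared erroneous."""
    d = np.sum((Z1 - Z2) ** 2, axis=0)       # squared 2-norm per column
    order = np.argsort(-d)                   # descending order
    d_sort = d[order]
    n = d.size
    if d_sort[0] == 0:                       # identical matrices: nothing to reject
        return np.array([], dtype=int)
    # point of the sorted curve closest to the origin,
    # with index and distance both scaled to [0, 1] (assumed normalization)
    curve = np.sqrt((np.arange(1, n + 1) / n) ** 2 + (d_sort / d_sort[0]) ** 2)
    i_t = int(np.argmin(curve))
    return order[:i_t]                       # columns above the cut-off distance
```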
In the above image mismatch detection method based on the shape interaction matrix, further, the cosine similarity method of step 3) computes the cosine similarity between the shape interaction matrices $Z_1$ and $Z_2$, sets a fixed threshold, and compares against it to obtain the erroneous matches between the two images, specifically as follows:

21) Compute the cosine similarity $s_i$ between the $i$-th column vector of $Z_1$ and the $i$-th column vector of $Z_2$:

$$s_i = \frac{[Z_1]_{:,i}^{T} \, [Z_2]_{:,i}}{\left\| [Z_1]_{:,i} \right\|_2 \left\| [Z_2]_{:,i} \right\|_2}$$ (formula 19)

In formula 19, $s_i$ is the cosine similarity between the $i$-th column vector of $Z_1$ and the $i$-th column vector of $Z_2$; $[\cdot]_{:,i}$ denotes the $i$-th column vector of a matrix and $\|\cdot\|_2$ the two-norm of a vector; $i \in [1,n]$, i.e. $n$ cosine similarity values are computed in total, where $n$ is the total number of matches.

22) Set a fixed threshold $\tau$ no greater than 1; the matches corresponding to all cosine similarity values in $s_i$ below $\tau$ are declared erroneous matches, which yields the erroneous matches between the two images.
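Steps 21)–22) are a direct column-wise cosine comparison; a minimal sketch (the default threshold follows the range suggested in the embodiments):

```python
import numpy as np

def mismatches_by_cosine(Z1, Z2, tau=0.6):
    """Flag matches whose columns in Z1 and Z2 have cosine similarity
    below the fixed threshold tau (tau <= 1)."""
    num = np.sum(Z1 * Z2, axis=0)                              # column dot products
    den = np.linalg.norm(Z1, axis=0) * np.linalg.norm(Z2, axis=0)
    s = num / den                                              # cosine similarity per column
    return np.flatnonzero(s < tau)                             # indices declared erroneous
```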
In the above cosine similarity method, further, the fixed threshold $\tau$ is set according to the proportion of erroneous matches among all matches: the larger that proportion, the closer to 1 the threshold $\tau$ should be set. In the embodiments of the present invention, $\tau$ lies in the range [0.5, 0.75].
The above image mismatch detection method based on the shape interaction matrix can be applied to fields such as image retrieval, three-dimensional point-cloud registration, and object recognition in images and video.
After the erroneous matches are removed, the remaining correct matches can be processed further for the particular application background. In image retrieval, for example, the number of remaining matches can be used as the sorting criterion to re-rank the retrieval results.
The application of the image mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM) provided by the present invention to image retrieval comprises the following steps:
A1) Detect and describe the feature points of all pictures in the image database;
A2) Train a visual-word dictionary and quantize all detected feature points against it; filter out the 5% most frequent and the 10% least frequent visual words in the database; and build an inverted index (Inverted Index) that is convenient for retrieval, with the visual words as nodes and the feature points occurring in the images as elements;
A3) Add distractor pictures at scales of 1,000, 10,000, 100,000, and 1,000,000 to the standard image database, and use the inverted index to determine the candidate matching relations between the query picture and the retrieved pictures;
A4) Input two images as the images to be compared; for the feature points matched between the two images, obtain the homogeneous coordinates of each feature point in the two images;
A5) Compute the two shape interaction matrices of the two images over the homogeneous coordinates;
A6) Compute the column-wise difference between the two shape interaction matrices by the Euclidean distance method or the cosine similarity method to obtain the erroneous matches between the two images;
A7) Remove the erroneous matches, obtaining the number of correct matches after filtering;
A8) Repeat steps A4) to A7), count the number of correct matches between the current query picture and every retrieved picture in the database, and re-rank by that number in descending order to obtain the image retrieval result.
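Step A8's re-ranking reduces to sorting the candidate pictures by their surviving match count; a schematic sketch in which the `count_correct_matches` callback, standing in for steps A4)–A7), and the image ids are hypothetical:

```python
def rerank(candidates, count_correct_matches):
    """Order database-image ids by the number of correct matches that
    survive the SIM-based mismatch filter, descending (step A8)."""
    return sorted(candidates, key=count_correct_matches, reverse=True)

# toy usage with hypothetical surviving-match counts
counts = {"img_a": 3, "img_b": 12, "img_c": 7}
print(rerank(["img_a", "img_b", "img_c"], counts.get))  # ['img_b', 'img_c', 'img_a']
```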
Compared with the prior art, the beneficial effects of the invention are as follows:
The present invention provides a mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM) that is invariant under affine transformations: from the feature points matched between two images, the two shape interaction matrices of the images over homogeneous coordinates are computed, and the column-wise difference between the two matrices yields the erroneous matches between the two images. The model of the invention is simple, theoretically well founded, and notable in real-time performance. The main advantages of the method provided by the invention are:
(1) The model is simple. The invention uses only the position information of the feature points as input to filter mismatches, without any other geometric prior information (such as the dominant orientation or characteristic scale of the feature points); it is completely independent of the feature detection and description methods, which greatly broadens its applicability.
(2) The validity of the method is verified theoretically. The shape interaction matrix was originally used for cluster analysis; it captures and represents the geometric relations among the points. Moreover, when an affine transformation (including translation, rotation, scaling, and shear) exists between two point sets, their shape interaction matrices remain unchanged; the method provided by the invention is therefore strongly robust to affine geometric distortion.
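The affine-invariance claim can be checked numerically: if $Y = AX$ for any invertible $3 \times 3$ affine matrix $A$, then $Y^{T}(YY^{T})^{-1}Y = X^{T}(XX^{T})^{-1}X$. A small sketch (the particular point set and transform are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.standard_normal((2, 12))        # 12 arbitrary 2-D points
X = np.vstack([pts, np.ones(12)])         # homogeneous coordinates

# an arbitrary invertible affine map: linear part plus translation
A = np.array([[1.3,  0.4,  2.0],
              [-0.2, 0.9, -1.5],
              [0.0,  0.0,  1.0]])
Y = A @ X                                 # the affinely transformed point set

Z_X = X.T @ np.linalg.inv(X @ X.T) @ X    # SIM of the original points
Z_Y = Y.T @ np.linalg.inv(Y @ Y.T) @ Y    # SIM of the transformed points
print(np.allclose(Z_X, Z_Y))              # True: the SIM is unchanged
```

The invariance holds because the linear factor $A$ cancels inside the projection: $X^{T}A^{T}(AXX^{T}A^{T})^{-1}AX = X^{T}(XX^{T})^{-1}X$.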
(3) The invention is highly significant in computing speed. The invention computes in a non-iterative way, avoiding the time-consuming iterative fitting and estimation procedure; the shape interaction matrices of the two point sets are each obtained by a simple closed-form formula, and the subsequent comparison of the shape interaction matrices is highly efficient, making the method suitable for applications with strict real-time requirements.
Brief description of the drawings
Fig. 1 is the flow chart of the image mismatch detection method based on the shape interaction matrix used in the embodiment of the present invention.
Fig. 2 is a schematic diagram of the steps of the image mismatch detection method based on the shape interaction matrix used in the embodiment of the present invention.
The numbers 1–9 in each panel mark 9 different candidate feature-point matches. (a) shows the candidate matches: the upper and lower example pictures show where the feature points detected in the two images lie; an obvious affine transformation (Affine Transformation) exists between the points in the lower picture and those in the upper picture, while the position of feature match No. 5 appears not to satisfy this common affine transformation, so it may be an erroneous match. (b) shows the SIM matrices: $Z_1$ and $Z_2$ are the two shape interaction matrices computed from the homogeneous coordinates. (c) shows the differences: the upper picture shows the element-wise difference between $Z_1$ and $Z_2$ (closer to white means more significant, closer to black less significant), and the lower picture shows the column-wise difference; the column-wise difference of feature match No. 5 is clearly more significant than that of the other feature points. (d) shows the detected erroneous match: the match detected by the method provided by the invention, namely match No. 5, is marked with a dotted line.
Fig. 3 is a schematic diagram of the threshold cut-off point selection in the embodiment of the present invention: 10 marks the place on the descending Euclidean-distance curve closest to the coordinate origin; 11 marks that closest point, taken as the threshold cut-off point; 12 marks the Euclidean distance values in the shape interaction matrices corresponding to erroneous matches; and 13 marks those corresponding to correct matches.
Fig. 4 is a schematic comparison of the average retrieval time of the existing methods and of the method provided by the present invention on three databases; [0] is the present invention; [1]–[11] are RANSAC, MLESAC, ICF, VFC (VFC, FastVFC, and SparseVFC, distinguished as [4-1], [4-2], and [4-3] since the three methods come from the same reference [4]), WGC, EWGC, SGC, PGM, GC, LRGGC, L1GGC, and BOF; the methods [1]–[11] correspond to references [1]–[11].
Fig. 5 is a schematic comparison of the retrieval results and precision–recall curves of the method provided by the present invention ([0]) and the BoF method ([12]) on three databases: (a) the GCDup database; (b) the Holiday database; (c) the Oxford5k database.
Embodiment
The present invention is further described below by way of embodiments in conjunction with the drawings, without limiting the scope of the invention in any way.
The present invention provides an image mismatch detection method based on the shape interaction matrix that is invariant under affine transformations. Fig. 1 is the flow chart of the method provided by the invention, which comprises the following steps:
Step 1:Input two images, wherein a width for inquiry picture, another width be database in be retrieved picture it
One.Existing affine-Scale invariant features transform method (the Affine-Scale Invariant Feature of industry are used first
Transformation, abbreviated Affine-SIFT; see citation [13]): each image is first subjected to affine transformations of varying degrees, yielding several affine-transformed sample images; on these, following the method proposed in citation [14], scale changes of varying degrees are further applied to obtain several scale-change sample images. The Difference of Gaussians (DoG; see citation [14]) is then used to detect extrema across multiple scale spaces as local feature points; these points are invariant, to a certain degree, to translation, rotation, and scale change. The coordinates of each scale-invariant local feature point in the affine/scale-change sample images are then mapped back into the original image space, yielding local feature points that retain a certain invariance under affine transformation. Following citation [14], the principal orientation and principal scale of each feature point are determined from the distribution of pixel gradients; then, taking the principal orientation as the reference and the principal scale as the region size, gradient histograms over 8 directions are computed in each cell of a 4 × 4 neighborhood of the feature point, giving a 128-dimensional histogram vector as the point's "feature descriptor". Next, as described in citation [14], matches are found by comparing, under Euclidean distance, the nearest and second-nearest feature descriptors to that of a given feature point: a pair of points is accepted as a match if the nearest Euclidean distance divided by the second-nearest Euclidean distance is below a ratio threshold (the empirical value given in citation [14] is 0.5; it may be set to 0.4 when higher matching precision is required, or to 0.6 when a larger number of matched pairs is needed). This yields the candidate feature matches between the two images (Fig. 2(a)). Let the coordinates of the mutually matched feature points in the two images be:
X_1 = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ y_{11} & y_{12} & \cdots & y_{1n} \end{bmatrix}  (formula 1)

X_2 = \begin{bmatrix} x_{21} & x_{22} & \cdots & x_{2n} \\ y_{21} & y_{22} & \cdots & y_{2n} \end{bmatrix}  (formula 2)

In formula 1 and formula 2, each column of X_1 and X_2 corresponds to the coordinates of one pair of matched feature points (for example, the feature point with coordinates (x_{11}, y_{11}) in one image is matched to the feature point with coordinates (x_{21}, y_{21}) in the other image, and so on); n is the total number of matched pairs.
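For illustration only (not part of the claimed method), the construction of the coordinate matrices of formulas 1 and 2 can be sketched in Python/NumPy; the helper name `coord_matrices` is ours:

```python
import numpy as np

def coord_matrices(matches):
    """Illustrative sketch of formulas 1 and 2: given n matched pairs
    ((x1, y1), (x2, y2)), stack the coordinates from each image into a
    2 x n matrix whose columns correspond to matched feature points."""
    pts1 = np.array([m[0] for m in matches], dtype=float)  # shape (n, 2)
    pts2 = np.array([m[1] for m in matches], dtype=float)
    return pts1.T, pts2.T  # X1, X2, each of shape (2, n)
```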
Next, the two coordinate matrices X_1 and X_2 are each standardized to obtain the homogeneous coordinates \hat{X}_1 and \hat{X}_2:

\hat{X}_1 = \begin{bmatrix} \hat{x}_{11} & \cdots & \hat{x}_{1n} \\ \hat{y}_{11} & \cdots & \hat{y}_{1n} \\ 1 & \cdots & 1 \end{bmatrix}  (formula 3)

\hat{X}_2 = \begin{bmatrix} \hat{x}_{21} & \cdots & \hat{x}_{2n} \\ \hat{y}_{21} & \cdots & \hat{y}_{2n} \\ 1 & \cdots & 1 \end{bmatrix}  (formula 4)

where the third row of the homogeneous coordinates is identically 1, namely [\hat{X}_i]_{3,c} \equiv 1; i \in \{1, 2\}, j \in [1, n]; \hat{x}_{ij} and \hat{y}_{ij} are the standardized coordinate values, computed as:

\hat{x}_{ij} = (x_{ij} - \bar{x}_i) / \sigma_{x_i}  (formula 5)

\hat{y}_{ij} = (y_{ij} - \bar{y}_i) / \sigma_{y_i}  (formula 6)
where \bar{x}_i, \bar{y}_i and \sigma_{x_i}, \sigma_{y_i} are respectively the means and standard deviations of x_{ij} and y_{ij}, computed as:

\bar{x}_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij}  (formula 7)

\bar{y}_i = \frac{1}{n} \sum_{j=1}^{n} y_{ij}  (formula 8)

\sigma_{x_i} = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} (x_{ij} - \bar{x}_i)^2 }  (formula 9)

\sigma_{y_i} = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} (y_{ij} - \bar{y}_i)^2 }  (formula 10)
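The standardization of formulas 3 to 10 can be sketched as follows (illustrative Python/NumPy only; the helper name is ours, and we read formulas 9 and 10 as the population standard deviation):

```python
import numpy as np

def standardized_homogeneous(X):
    """Sketch of formulas 3-10: X is a 2 x n coordinate matrix; each row
    is shifted by its mean (formulas 7-8) and divided by its population
    standard deviation (formulas 9-10), and a row of ones is appended to
    form the 3 x n standardized homogeneous matrix (formulas 3-4)."""
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)  # population std, ddof=0
    Xs = (X - mu) / sigma                 # formulas 5-6
    return np.vstack([Xs, np.ones(X.shape[1])])
```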
Step 2: compute the shape interaction matrix of each of the two images (Fig. 2(b)):

Z_1 = \hat{X}_1^{+} \hat{X}_1  (formula 11)

Z_2 = \hat{X}_2^{+} \hat{X}_2  (formula 12)

where (\cdot)^{+} denotes the Moore-Penrose pseudo-inverse (Moore-Penrose generalized inverse) of a matrix. When the rows of X are linearly independent, i.e. when X X^{T} is invertible:

X^{+} = X^{T} (X X^{T})^{-1}  (formula 13)

The shape interaction matrices can therefore be obtained as:

Z_1 = \hat{X}_1^{T} (\hat{X}_1 \hat{X}_1^{T})^{-1} \hat{X}_1  (formula 14)

Z_2 = \hat{X}_2^{T} (\hat{X}_2 \hat{X}_2^{T})^{-1} \hat{X}_2  (formula 15)
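Formulas 11 to 15 reduce to a single pseudo-inverse call; a minimal illustrative sketch in Python/NumPy (helper name ours):

```python
import numpy as np

def shape_interaction_matrix(Xh):
    """Formulas 11-15: Z = Xh^+ Xh, with ^+ the Moore-Penrose
    pseudo-inverse; when the rows of Xh are linearly independent this
    equals Xh^T (Xh Xh^T)^{-1} Xh. Z is then the orthogonal projection
    onto the row space of Xh, hence symmetric and idempotent."""
    return np.linalg.pinv(Xh) @ Xh
```

Because Z projects onto the row space of the coordinate matrix, it is unchanged when the rows are recombined by any invertible linear map, which is what makes the column-wise comparison of Z_1 and Z_2 in step 3 meaningful.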
Step 3: by the properties of the shape interaction matrix, the representation vectors of correctly matched pairs in the two shape interaction matrices should be similar, while the representation vectors of mismatched pairs should differ markedly. We can therefore judge which pairs are mismatches by computing the column-wise difference between the two shape interaction matrices Z_1 and Z_2 (Fig. 2(c)). This step can be implemented in either of two ways:
Method 1, the Euclidean distance method:

11) Compute the Euclidean distance d_i between the i-th column vector of Z_1 and the i-th column vector of Z_2:

d_i = \| [Z_1]_{:,i} - [Z_2]_{:,i} \|_2^2  (formula 16)

In formula 16, Z_1 and Z_2 are the shape interaction matrices obtained in step 2; [\cdot]_{:,i} denotes the i-th column vector of a matrix; \|\cdot\|_2^2 denotes the squared 2-norm of a vector; i \in [1, n], i.e. n Euclidean distance values are computed in total, where n is the total number of matched pairs.
12) As shown in Fig. 3, sort the Euclidean distances between the column vectors in descending order:

d_sort = SORT(d)  (formula 17)

In formula 17, SORT(\cdot) sorts a set of values in descending order; d_sort holds the sorted distance values, satisfying d_sort_i \ge d_sort_{i+1}.
13) As shown in Fig. 3, find the point on the sorted-distance curve closest to the coordinate origin and take it as the threshold cut-off point, determined as follows:

i_t = \arg\min_{i \in [1, n]} \sqrt{ (i / n)^2 + ( d\_sort_i / \max_{j \in [1, n]} d\_sort_j )^2 }  (formula 18)

In formula 18, i_t indicates that the threshold cut-off point is the i_t-th entry of d_sort; d_sort_i denotes the i-th value of the descending-sorted Euclidean distances between the column vectors of Z_1 and Z_2; \max_{j \in [1, n]} d\_sort_j denotes the maximum of the n values.

14) After the threshold cut-off point i_t is obtained, the pairs corresponding to the leading k distance values in d_sort, namely all values greater than d_sort_{i_t}, are judged to be mismatches.
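Steps 11) to 14) can be sketched as follows (illustrative Python/NumPy only). Note that the closed form we give for formula 18 is our reading of "the point on the normalized sorted-distance curve closest to the origin", so treat it as an assumption rather than the patented formula:

```python
import numpy as np

def mismatches_euclidean(Z1, Z2):
    """Sketch of the Euclidean distance method (steps 11-14).
    Returns the indices of the pairs judged to be mismatches."""
    d = np.sum((Z1 - Z2) ** 2, axis=0)   # formula 16: squared 2-norm per column
    order = np.argsort(-d)               # formula 17: descending sort
    d_sort = d[order]
    n = d.size
    # Formula 18 (our reconstruction): index of the point on the
    # normalized sorted curve closest to the coordinate origin.
    i = np.arange(1, n + 1)
    closeness = np.sqrt((i / n) ** 2 + (d_sort / d_sort.max()) ** 2)
    i_t = int(np.argmin(closeness))
    # Step 14: the leading values greater than the cut-off are mismatches.
    return order[:i_t]
```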
Method 2, the cosine similarity method:

21) Compute the cosine similarity s_i between the i-th column vector of Z_1 and the i-th column vector of Z_2:

s_i = \frac{ \langle [Z_1]_{:,i}, [Z_2]_{:,i} \rangle }{ \| [Z_1]_{:,i} \|_2 \, \| [Z_2]_{:,i} \|_2 }  (formula 19)

In formula 19, Z_1 and Z_2 are the shape interaction matrices obtained in step 2; [\cdot]_{:,i} denotes the i-th column vector of a matrix; \|\cdot\|_2 denotes the 2-norm of a vector; i \in [1, n], i.e. n cosine similarity values are computed in total, where n is the total number of matched pairs.
22) Since s_i does not exceed 1, a fixed threshold \tau can easily be set: the pairs corresponding to all cosine similarity values below \tau are judged to be mismatches. The choice of \tau can be adjusted according to the proportion of mismatches among all matched pairs; the larger that proportion, the closer to 1 \tau should be chosen. For the application scenario of image retrieval, we give a reference range for \tau of [0.5, 0.75].
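Steps 21) and 22) can be sketched as follows (illustrative Python/NumPy only; the helper name and default \tau are ours, with \tau chosen inside the reference range [0.5, 0.75] given above):

```python
import numpy as np

def mismatches_cosine(Z1, Z2, tau=0.6):
    """Sketch of the cosine similarity method (steps 21-22): formula 19
    per column, then flag columns whose similarity falls below tau."""
    num = np.sum(Z1 * Z2, axis=0)
    den = np.linalg.norm(Z1, axis=0) * np.linalg.norm(Z2, axis=0)
    s = num / den
    return np.flatnonzero(s < tau)  # indices judged mismatches
```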
After the mismatches have been removed, the remaining correct matches (Fig. 2(d)) can be processed further according to the application at hand. In image retrieval, for example, the number of remaining matches can be used as a sorting criterion to re-rank the retrieval results.

The following embodiment describes in detail how the present invention is applied to the image retrieval problem. The invention can also be applied in fields such as image and 3D point cloud registration and object recognition; such applications can follow the relevant parts of the embodiment below and are not elaborated here.
In this embodiment, three widely used databases serve as the retrieved databases: the GCDup database, the Holiday database, and the Oxford5k database. GCDup contains 1104 pictures in 33 groups of similar pictures; Holiday contains 1491 pictures in 500 groups of similar pictures; Oxford5k contains 5062 pictures in 55 groups of similar pictures. In addition, to bring the experiment closer to practice, we also employ a distractor picture database, MIRFlickr-1M, containing 1,000,000 pictures collected from the web. In the implementation, each picture in a similar-picture group is used in turn as the query picture; the other pictures in the same group are shuffled together with the distractor pictures to form the target image database, from which the method provided by the present invention then retrieves the pictures related to the query.
For the performance evaluation of the method provided by the present invention, the evaluation indices used are mean average precision (mAP) and average retrieval time; performance is assessed by comparing the method provided by the present invention against other leading methods in the field.
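Mean average precision can be computed per query as follows; a minimal sketch with our own helper name, assuming binary relevance labels listed in rank order (mAP is then the mean of AP over all queries):

```python
def average_precision(ranked_relevant):
    """AP for one query: ranked_relevant[k] is True when the retrieved
    item at rank k+1 belongs to the query's similar-picture group."""
    hits = 0
    precisions = []
    for rank, rel in enumerate(ranked_relevant, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0
```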
The implementation steps of this embodiment are as follows:

a) Using the affine-invariant transform method (Affine-SIFT; see citation [13]), feature points are detected and described for all pictures (including the three retrieved databases described above and the distractor picture database). Specifically: each image is first subjected to affine transformations of varying degrees to obtain several affine-transformed sample images; on these, following the method proposed in citation [14], scale changes of varying degrees are further applied to obtain several scale-change sample images. The Difference of Gaussians (DoG; see citation [14]) is then used to detect extrema across multiple scale spaces as local feature points; these points are invariant, to a certain degree, to translation, rotation, and scale change. The coordinates of each scale-invariant local feature point in the affine/scale-change sample images are mapped back into the original image space to obtain local feature points with a certain invariance under affine transformation. Following citation [14], the principal orientation and principal scale of each feature point are determined from the distribution of pixel gradients; then, taking the principal orientation as the reference and the principal scale as the region size, gradient histograms over 8 directions are computed in each cell of a 4 × 4 neighborhood of the feature point, giving a 128-dimensional histogram vector as the point's "SIFT feature descriptor";
b) Based on the bag-of-features model (Bag of Features, BoF; see citation [12]), the 128-dimensional SIFT descriptors of all feature points detected in the images of all retrieved databases are taken as input to the hierarchical K-means clustering algorithm (Hierarchical K-Means; see citation [12]). First, 10 descriptors are randomly selected from all input SIFT descriptors as initial cluster centers; iteratively, each feature point is assigned to the cluster center nearest to its SIFT descriptor, and once all feature points have been assigned to the 10 classes, the mean of the SIFT descriptors of all feature points belonging to each class is taken as the 10 new cluster centers; this is repeated until the class of every feature point no longer changes. Next, within each cluster, 10 cluster centers are again randomly selected and the above steps repeated, giving 10 clustering results at the second level. Here we run this clustering over 6 levels in total; upon convergence of the algorithm we obtain 10^6 cluster centers, thereby forming a visual-word dictionary of about one million words (optionally, the 5% most frequent and the 10% least frequent visual words in the database may be filtered out to reduce noise). Now, for each input feature point, the cluster center nearest to its 128-dimensional SIFT descriptor is found and the point is assigned to that cluster; once all feature points falling into the same cluster are encoded with the same number, the quantization of the feature points is complete, and all feature points in an image can be represented simply by visual words. Dividing the total number of images by the total number of times each visual word (cluster center) occurs in all images, and then taking the logarithm of the quotient, gives the inverse document frequency of each visual word (Inverse Document Frequency, IDF; see citation [12]); counting the number of occurrences of each visual word in each picture gives the histogram vector of the term-frequency distribution (Term Frequency, TF); multiplying each picture's TF by the corresponding IDF yields the term frequency-inverse document frequency vector (TF-IDF) commonly used in information retrieval;
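The TF-IDF weighting of step b) can be sketched as follows (illustrative Python only). Note one assumption: we use the common document-frequency form of IDF (log of the total number of images over the number of images containing the word), whereas the text's phrasing could also be read as the word's total occurrence count:

```python
import math
from collections import Counter

def tf_idf(images_words):
    """images_words: image id -> list of visual-word ids of its
    quantized feature points. Returns image id -> {word: TF * IDF}."""
    n_images = len(images_words)
    df = Counter()                      # in how many images each word occurs
    for words in images_words.values():
        df.update(set(words))
    idf = {w: math.log(n_images / c) for w, c in df.items()}
    return {img: {w: tf * idf[w] for w, tf in Counter(words).items()}
            for img, words in images_words.items()}
```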
c) With the distinct visual words as nodes, all feature points occurring in the images as the units within each node, and information such as the image ID, the coordinates of the feature point in the image, the SIFT principal scale and principal orientation of the feature point, the term frequency (TF) of the feature point's visual word, and the inverse document frequency (IDF) of that visual word as the content stored in each unit, distractor pictures at scales of 1,000, 10,000, 100,000, and 1,000,000 are separately added to the three standard image databases, and an inverted index table convenient for retrieval (Inverted Index; see citation [6]) is built for each;
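The skeleton of the inverted index of step c) can be sketched as follows (illustrative Python only; a real entry would also carry the per-feature coordinates, principal scale and orientation, TF, and IDF listed above, which we omit here):

```python
from collections import defaultdict

def build_inverted_index(images_words):
    """Sketch of step c): map each visual word to the sorted list of
    image ids that contain it, enabling fast lookup of all retrieved
    pictures sharing a word with a query."""
    index = defaultdict(list)
    for img, words in images_words.items():
        for w in sorted(set(words)):
            index[w].append(img)
    return index
```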
d) For each query picture, the inverted index table is used to quickly find all retrieved pictures that share at least one visual word with the query picture, delimiting a retrieval subset A; by the conventional method, the cosine distance (see citation [12]) between the term frequency-inverse document frequency (TF-IDF) vectors of the query picture and of each retrieved picture in subset A is computed and sorted in ascending order, giving a rough retrieval result; the retrieval time is recorded;
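The rough-retrieval ranking of step d) can be sketched as follows (illustrative Python only; the sparse-dict representation of the TF-IDF vectors is our own choice):

```python
import math

def cosine_distance(u, v):
    """Cosine distance between two sparse TF-IDF vectors given as dicts
    word -> weight; smaller means more similar, so the rough result is
    sorted in ascending order of this value."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return 1.0 - dot / (nu * nv)
```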
e) From the rough retrieval result, the top 1000 to 5000 retrieved pictures are further chosen as a smaller retrieval subset B, on which the subsequent geometric verification and re-ranking procedure is run: first, by looking up the inverted index, the candidate matching relations between the feature points of the current query picture and of each retrieved picture in subset B, together with the coordinate information of the matched feature points, are obtained;
f) Following step 1 of the method provided by the present invention, for the current query picture and each retrieved picture, the homogeneous coordinate matrices of the two sets of matched feature points are constructed according to the feature-point matching order between the two pictures;

g) Following step 2 of the method provided by the present invention, the two shape interaction matrices are computed from the homogeneous coordinate matrices of the candidate matched feature points between the current query picture and each retrieved picture;

h) Following step 3 of the method provided by the present invention, the column-wise difference between the two shape interaction matrices is computed by the Euclidean distance method or the cosine similarity method to judge which pairs are mismatches and which are correct matches, thereby obtaining the mismatched pairs and the correctly matched pairs;

i) Steps (f) to (h) are repeated; the number of correctly matched pairs between the current query picture and every retrieved picture in the database is counted, and the results are re-ranked in descending order of that number, giving the refined retrieval result; the retrieval accuracy is computed and the retrieval time recorded;

j) Steps (e) to (i) are repeated; the retrieval accuracies and retrieval times before and after the processing of the present invention are averaged over all queries, giving the mean retrieval accuracy and the average retrieval time;

k) Steps (d) to (j) are repeated, and the mean retrieval accuracy and average retrieval time of the method provided by the present invention on the three standard image databases under the four different distractor-picture scales are recorded;

l) By comparing the present invention with other leading methods in the field in terms of mAP and average retrieval time, the performance of our invention and of the existing leading methods is analyzed.
Performance analysis of the above embodiment:

The performance comparison results of the above embodiment are shown in Table 1. For all standard databases and distractor databases of all scales, the method provided by the present invention achieves a mean retrieval accuracy superior to that of the other existing leading methods.

Table 1: Retrieval precision (mAP) of existing leading methods and the present invention on the three standard databases at different distractor scales.
As shown in Fig. 4, while guaranteeing retrieval accuracy, our invention is also significantly better than most existing leading methods in average retrieval time. Fig. 5 shows one group of retrieval results chosen from each of the three datasets, together with the corresponding precision-recall curves; the contrast in the figure between the original bag-of-features method (BoF) and the method provided by the present invention shows that the improvement of the present invention over the original method in retrieval precision is highly significant.
It should be noted that the purpose of disclosing the embodiment is to help further the understanding of the present invention; those skilled in the art will appreciate, however, that various substitutions and modifications are possible without departing from the spirit of the invention and the scope of the appended claims. Therefore, the present invention should not be limited to what is disclosed in the embodiment; the scope of protection of the present invention is defined by the claims.
Claims (10)
1. An image mismatch detection method based on the shape interaction matrix (Shape Interaction Matrix, SIM), in which the two shape interaction matrices of two images over their homogeneous coordinates are computed from the feature points matched between the two images, and the mismatched pairs of the two images are then obtained by comparing the column-wise difference between the two shape interaction matrices, comprising the following steps:

1) two images are input as the images to be compared, one being the query picture and the other the reference picture; the local feature points in the two images to be compared are extracted, the pairs of feature points matched between the two images are derived from the feature descriptors, and the homogeneous coordinates of each feature point in the two images are then obtained; the coordinates of the mutually matched feature points in the two images are expressed by formula 1 and formula 2 respectively:

X_1 = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ y_{11} & \cdots & y_{1n} \end{bmatrix}  (formula 1)

X_2 = \begin{bmatrix} x_{21} & \cdots & x_{2n} \\ y_{21} & \cdots & y_{2n} \end{bmatrix}  (formula 2)

in formula 1 and formula 2, x_{ij} and y_{ij} are the coordinate values of the feature points in the two images; each column of X_1 and X_2 corresponds to the coordinates of one pair of matched feature points; n is the total number of matched pairs;

the two coordinate matrices X_1 and X_2 are standardized respectively, and the standardized homogeneous coordinates \hat{X}_1 and \hat{X}_2 are expressed by formula 3 and formula 4:

\hat{X}_1 = \begin{bmatrix} \hat{x}_{11} & \cdots & \hat{x}_{1n} \\ \hat{y}_{11} & \cdots & \hat{y}_{1n} \\ 1 & \cdots & 1 \end{bmatrix}  (formula 3)

\hat{X}_2 = \begin{bmatrix} \hat{x}_{21} & \cdots & \hat{x}_{2n} \\ \hat{y}_{21} & \cdots & \hat{y}_{2n} \\ 1 & \cdots & 1 \end{bmatrix}  (formula 4)

in formula 3 and formula 4, the third row of the homogeneous coordinates is 1, namely [\hat{X}_i]_{3,c} \equiv 1; i \in \{1, 2\}; j \in [1, n]; \hat{x}_{ij} and \hat{y}_{ij} are respectively the coordinate values of x_{ij} and y_{ij} after standardization;

2) the two shape interaction matrices of the two images over the homogeneous coordinates are computed respectively, the two shape interaction matrices being obtained by formula 11 and formula 12:

Z_1 = \hat{X}_1^{T} (\hat{X}_1 \hat{X}_1^{T})^{-1} \hat{X}_1  (formula 11)

Z_2 = \hat{X}_2^{T} (\hat{X}_2 \hat{X}_2^{T})^{-1} \hat{X}_2  (formula 12)

in formula 11 and formula 12, each column of \hat{X}_1 and \hat{X}_2 corresponds to the homogeneous coordinates of one pair of matched feature points; Z_1 and Z_2 are the two shape interaction matrices, expressed by formula 13 and formula 14 respectively:

Z_1 = \hat{X}_1^{+} \hat{X}_1  (formula 13)

Z_2 = \hat{X}_2^{+} \hat{X}_2  (formula 14)

in formula 13 and formula 14, (\cdot)^{+} denotes the Moore-Penrose generalized inverse of a matrix; when the rows of X are linearly independent, i.e. when X X^{T} is invertible, X^{+} is expressed as formula 15:

X^{+} = X^{T} (X X^{T})^{-1}  (formula 15)

3) the column-wise difference between the two shape interaction matrices is computed by the Euclidean distance method or the cosine similarity method, yielding the mismatched pairs of the two images.
2. The image mismatch detection method based on the shape interaction matrix as claimed in claim 1, characterized in that the reference picture is a retrieved picture in an image database.
3. The image mismatch detection method based on the shape interaction matrix as claimed in claim 1, characterized in that the local feature points in the two images to be compared in step 1) are extracted specifically by the ASIFT method; the ASIFT method is the method described in the document "G. Yu and J.-M. Morel, 'ASIFT: An algorithm for fully affine invariant comparison,' Image Processing On Line, vol. 1, 2011, pp. 11-38, http://dx.doi.org/10.5201/ipol.2011.my-asift"; the matching process between the local feature points in step 1) is specifically the method described in the document "D. G. Lowe, 'Distinctive image features from scale-invariant keypoints,' International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004."
4. The image mismatch detection method based on the shape interaction matrix as claimed in claim 1, characterized in that \hat{x}_{ij} and \hat{y}_{ij} in step 1) are respectively the coordinate values of x_{ij} and y_{ij} after standardization; the standardization is computed by formula 5 and formula 6 respectively:

\hat{x}_{ij} = (x_{ij} - \bar{x}_i) / \sigma_{x_i}  (formula 5)

\hat{y}_{ij} = (y_{ij} - \bar{y}_i) / \sigma_{y_i}  (formula 6)

in formula 5 and formula 6, \bar{x}_i, \bar{y}_i and \sigma_{x_i}, \sigma_{y_i} are respectively the means and standard deviations of x_{ij} and y_{ij}, computed by formulas 7 to 10:

\bar{x}_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij}  (formula 7)

\bar{y}_i = \frac{1}{n} \sum_{j=1}^{n} y_{ij}  (formula 8)

\sigma_{x_i} = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} (x_{ij} - \bar{x}_i)^2 }  (formula 9)

\sigma_{y_i} = \sqrt{ \frac{1}{n} \sum_{j=1}^{n} (y_{ij} - \bar{y}_i)^2 }  (formula 10)

in formulas 7 to 10, n is the total number of matched pairs in the two images.
5. The image mismatch detection method based on the shape interaction matrix as claimed in claim 1, characterized in that the Euclidean distance method in step 3) computes the Euclidean distances between the shape interaction matrices Z_1 and Z_2, sets a threshold cut-off point for comparison, and obtains the mismatched pairs of the two images, specifically comprising the following steps:

11) compute the Euclidean distance d_i between the i-th column vector of Z_1 and the i-th column vector of Z_2:

d_i = \| [Z_1]_{:,i} - [Z_2]_{:,i} \|_2^2  (formula 16)

in formula 16, Z_1 and Z_2 are respectively the shape interaction matrices Z_1 and Z_2; [\cdot]_{:,i} denotes the i-th column vector of a matrix; \|\cdot\|_2^2 denotes the squared 2-norm of a vector; i \in [1, n], i.e. n Euclidean distance values are computed in total, n being the total number of matched pairs;

12) sort the Euclidean distances d_i between the column vectors in descending order, as in formula 17:

d_sort = SORT(d)  (formula 17)

in formula 17, SORT(\cdot) sorts a set of values in descending order; d_sort holds the sorted distance values, satisfying d_sort_i \ge d_sort_{i+1};

13) compute the position of the point closest to the coordinate origin and take that position as the threshold cut-off point, the threshold cut-off point being computed by formula 18:

i_t = \arg\min_{i \in [1, n]} \sqrt{ (i / n)^2 + ( d\_sort_i / \max_{j \in [1, n]} d\_sort_j )^2 }  (formula 18)

in formula 18, i_t indicates that the threshold cut-off point is the i_t-th entry of d_sort; d_sort_i denotes the value at the i-th position of the descending-sorted Euclidean distances between the column vectors of Z_1 and Z_2; \max_{j \in [1, n]} d\_sort_j denotes the maximum of the n values;

14) after the threshold cut-off point i_t is obtained, the pairs corresponding to the leading k distance values in d_sort, namely all values greater than d_sort_{i_t}, are judged to be mismatches, thereby obtaining the mismatched pairs of the two images.
6. The image mismatch detection method based on the shape interaction matrix as claimed in claim 1, characterized in that the cosine similarity method in step 3) computes the cosine similarities between the shape interaction matrices Z_1 and Z_2, sets a fixed threshold for comparison, and obtains the mismatched pairs of the two images, specifically comprising the following steps:

21) compute the cosine similarity s_i between the i-th column vector of Z_1 and the i-th column vector of Z_2 by formula 19:

s_i = \frac{ \langle [Z_1]_{:,i}, [Z_2]_{:,i} \rangle }{ \| [Z_1]_{:,i} \|_2 \, \| [Z_2]_{:,i} \|_2 }  (formula 19)

in formula 19, s_i is the cosine similarity between the i-th column vector of Z_1 and the i-th column vector of Z_2, with s_i \le 1; Z_1 and Z_2 are respectively the shape interaction matrices Z_1 and Z_2; [\cdot]_{:,i} denotes the i-th column vector of a matrix; \|\cdot\|_2 denotes the 2-norm of a vector; i \in [1, n], i.e. n cosine similarity values are computed in total, n being the total number of matched pairs;

22) set a fixed threshold \tau not exceeding 1; the pairs corresponding to all cosine similarity values in s_i below \tau are judged to be mismatches, thereby obtaining the mismatched pairs of the two images.
7. The image mismatch detection method based on the shape interaction matrix as claimed in claim 6, characterized in that the fixed threshold \tau is set according to the proportion of mismatches among all matched pairs: the larger the proportion of mismatches among all matched pairs, the closer to 1 the fixed threshold \tau is set.

8. The image mismatch detection method based on the shape interaction matrix as claimed in claim 6, characterized in that the range of the fixed threshold \tau is [0.5, 0.75].
9. The image mismatch detection method based on the shape interaction matrix as claimed in any one of claims 1 to 8, characterized in that the method can be applied to image retrieval, 3D point cloud registration, and object recognition in images or video.
10. The image mismatch detection method based on the shape interaction matrix as claimed in claim 9, characterized in that, when the method is applied to image retrieval, it comprises the following steps:

A1) detect and describe the feature points of all pictures in the image database;

A2) train a visual dictionary and quantize all detected feature points; filter out the 5% most frequent and the 10% least frequent visual words in the database; with the distinct visual words as nodes and all feature points occurring in the images as elements, build an inverted index convenient for retrieval;

A3) add distractor pictures at scales of 1,000, 10,000, 100,000, and 1,000,000 separately to the standard image databases; use the inverted index to determine the candidate matching relations between the query picture and the retrieved pictures;

A4) input two images as the images to be compared; for the feature-point pairs matched between the two images, obtain the homogeneous coordinates of each feature point in the two images;

A5) compute respectively the two shape interaction matrices of the two images over the homogeneous coordinates;

A6) compute the column-wise difference between the two shape interaction matrices by the Euclidean distance method or the cosine similarity method, obtaining the mismatched pairs of the two images;

A7) remove the mismatches, obtaining the number of correct matches after filtering;

A8) repeat steps A4) to A7), count the number of correctly matched pairs between the current query picture and all retrieved pictures in the database, and re-rank in descending order of the number of correct matches to obtain the image retrieval result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510888480.4A CN105551022B (en) | 2015-12-07 | 2015-12-07 | A kind of image error matching inspection method based on shape Interactive matrix |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105551022A CN105551022A (en) | 2016-05-04 |
CN105551022B true CN105551022B (en) | 2018-03-30 |
Family
ID=55830198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510888480.4A Active CN105551022B (en) | 2015-12-07 | 2015-12-07 | A kind of image error matching inspection method based on shape Interactive matrix |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105551022B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107833207B (en) * | 2017-10-25 | 2020-04-03 | 北京大学 | Method for detecting error matching between images based on augmented homogeneous coordinate matrix |
CN109003331A (en) * | 2018-06-13 | 2018-12-14 | 东莞时谛智能科技有限公司 | A kind of image reconstructing method |
CN109242892B (en) | 2018-09-12 | 2019-11-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for determining the geometric transform relation between image |
CN109712174B (en) * | 2018-12-25 | 2020-12-15 | 湖南大学 | Point cloud misregistration filtering method and system for three-dimensional measurement of complex special-shaped curved surface robot |
CN110458175B (en) * | 2019-07-08 | 2023-04-07 | 中国地质大学(武汉) | Unmanned aerial vehicle image matching pair selection method and system based on vocabulary tree retrieval |
CN110636263B (en) * | 2019-09-20 | 2022-01-11 | 黑芝麻智能科技(上海)有限公司 | Panoramic annular view generation method, vehicle-mounted equipment and vehicle-mounted system |
US11910092B2 (en) | 2020-10-01 | 2024-02-20 | Black Sesame Technologies Inc. | Panoramic look-around view generation method, in-vehicle device and in-vehicle system |
CN112529021A (en) * | 2020-12-29 | 2021-03-19 | 辽宁工程技术大学 | Aerial image matching method based on scale invariant feature transformation algorithm features |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824294A (en) * | 2014-02-28 | 2014-05-28 | 中国科学院计算技术研究所 | Method for aligning electronic cross-sectional image sequence |
CN103823887A (en) * | 2014-03-10 | 2014-05-28 | 北京大学 | Based on low-order overall situation geometry consistency check error match detection method |
CN103823889A (en) * | 2014-03-10 | 2014-05-28 | 北京大学 | L1 norm total geometrical consistency check-based wrong matching detection method |
Non-Patent Citations (2)

Title |
---|
"Low Rank Global Geometric Consistency for Partial-Duplicate Image Search"; Li Yang et al.; 2014 22nd International Conference on Pattern Recognition; 2014-08-28; pp. 3939-3944 * |
"Pairwise Geometric Matching for Large-scale Object Retrieval"; Xinchao Li et al.; Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2015-06-30; pp. 5153-5161 * |
Also Published As
Publication number | Publication date |
---|---|
CN105551022A (en) | 2016-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105551022B (en) | A kind of image error matching inspection method based on shape Interactive matrix | |
Ma et al. | Locality preserving matching | |
Laga et al. | Landmark-free statistical analysis of the shape of plant leaves | |
US9141871B2 (en) | Systems, methods, and software implementing affine-invariant feature detection implementing iterative searching of an affine space | |
US8712156B2 (en) | Comparison of visual information | |
Mohamad et al. | Generalized 4-points congruent sets for 3d registration | |
Hao et al. | Efficient 2D-to-3D correspondence filtering for scalable 3D object recognition | |
WO2020020047A1 (en) | Key point matching method and device, terminal device and storage medium | |
Sundara Vadivel et al. | An efficient CBIR system based on color histogram, edge, and texture features | |
CN111507297B (en) | Radar signal identification method and system based on measurement information matrix | |
Ma et al. | Robust image feature matching via progressive sparse spatial consensus | |
Deng et al. | Efficient 3D face recognition using local covariance descriptor and Riemannian kernel sparse coding | |
CN109840529B (en) | Image matching method based on local sensitivity confidence evaluation | |
Yan et al. | Geometrically based linear iterative clustering for quantitative feature correspondence | |
CN109857895B (en) | Stereo vision retrieval method and system based on multi-loop view convolutional neural network | |
CN114332172A (en) | Improved laser point cloud registration method based on covariance matrix | |
Yang et al. | Non-rigid point set registration via global and local constraints | |
CN108447084B (en) | Stereo matching compensation method based on ORB characteristics | |
JP6793925B2 (en) | Verification equipment, methods, and programs | |
Wu et al. | A vision-based indoor positioning method with high accuracy and efficiency based on self-optimized-ordered visual vocabulary | |
CN113283478B (en) | Assembly body multi-view change detection method and device based on feature matching | |
Wu et al. | An accurate feature point matching algorithm for automatic remote sensing image registration | |
Tang et al. | A GMS-guided approach for 2D feature correspondence selection | |
CN110210443B (en) | Gesture recognition method for optimizing projection symmetry approximate sparse classification | |
Sur et al. | An a contrario model for matching interest points under geometric and photometric constraints |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||