CN102368237B - Image retrieval method, device and system


Info

Publication number: CN102368237B (application CN201010514782; also published as CN102368237A)
Authority: CN (China)
Legal status: Active (granted)
Inventors: 周文罡, 李厚强, 田奇, 卢亦娟
Applicant and current assignee: University of Science and Technology of China (USTC)

Abstract

The invention discloses an image retrieval method, device and system. The method comprises: extracting local features from a query image and quantizing them into visual words; querying a preset visual-word inverted index of an image database with those visual words to obtain matched local-feature pairs and matched images; spatially encoding the relative positions of the matched local features in the query image and in each matched image, yielding a spatial coding map for the query image and one for each matched image; performing a spatial-consistency check between the two coding maps to count the matched local-feature pairs that satisfy spatial consistency; and, from these counts for the different matched images, computing the similarity of each matched image and returning the matched images ranked by similarity. The method improves retrieval accuracy and retrieval efficiency and reduces retrieval time.

Description

Image retrieval method, apparatus and system
Technical field:
The present invention relates to the field of data retrieval, and in particular to an image retrieval method, apparatus and system.
Background art:
In recent years, with the rapid development of Internet technology and the fast spread of digital devices, the number of images on the network has reached the scale of hundreds of billions and keeps growing exponentially. Faced with such massive image data, managing it effectively so that users can conveniently find the images they are interested in, that is, image retrieval, is both of great practical value and highly challenging. One research focus within image retrieval is partial-duplicate image retrieval. A partial-duplicate image typically arises because a user crops a region from an original image and pastes it into another image, adds some text to the original image, or applies a simple projective transformation to it. Partial-duplicate image retrieval therefore has broad application prospects in the network multimedia field.
Partial-duplicate image retrieval means finding, in a massive image database, the images that contain a region highly similar to one in the query image. The best current methods for this problem are based on a visual-word codebook, which is generally obtained by clustering a large number of local features, such as SIFT (Scale Invariant Feature Transform) or SURF (Speeded Up Robust Features) features, sampled from training images. The training images can be a subset of the image database or an unrelated set of images. Once the visual-word codebook is available, local features can be extracted from a new image in the database and quantized into visual words, so that the image is represented as a high-dimensional visual-word vector; matching a query image against a candidate image then reduces to comparing their high-dimensional visual-word vectors. If two images each have a local feature quantized to the same visual word, those two local features form a matched local-feature pair. If two images share no matched local-feature pair, they are considered completely unrelated.
In the prior art, the basic idea of image matching retrieval based on full geometric verification is as follows: local-feature quantization yields a preliminary matching result between the query image and a matched image; if the two images are truly related, the partially duplicated image blocks must share some local features that form correct correspondences. These correct local matches must satisfy a common affine transformation, while the remaining, erroneous matches do not. Based on this assumption, full geometric verification uses a random sample consensus approach: a few matched local-feature pairs are randomly sampled and used to estimate an affine transformation; the other matched local features are then checked against this affine transformation, and the number of matches consistent with it is recorded. When the number of sampling rounds is large enough, the affine transformation with the largest number of consistent matches is likely the correct one. Suppose a feature point f1 in the query image matches a feature point f2 in the matched image, and their coordinates in the images are (u, v) and (x, y) respectively. The affine transformation between them can then be written as formula (1):
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} u \\ v \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \end{bmatrix} \qquad (1)$$
Since this affine transformation has six parameters, at least three matched pairs of local feature points are needed to estimate it. If, among all matched local-feature pairs of the query image and the matched image, the proportion of correct matches is p, then the probability that all three pairs drawn in one sample are correct is p^3. Assuming that any three correct pairs suffice to estimate the affine parameters correctly, after N sampling rounds the probability that at least one sample consists of three correct matched pairs is 1 - (1 - p^3)^N, approximately N·p^3 when p^3 is small. The correct affine transformation parameters obtained by this scheme are then used to judge how well the query image matches the matched image and to produce the retrieval result.
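This prior-art verification can be illustrated with a short sketch. It is a minimal illustration, not code from the patent: it assumes the candidate matches are given as two aligned N x 2 coordinate arrays, and all function and parameter names (estimate_affine, ransac_affine, tol) are ours.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of the 6-parameter affine transform mapping src points to dst points."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src   # x-equation: a*u + b*v + t1 = x
    A[0::2, 4] = 1.0
    A[1::2, 2:4] = src   # y-equation: c*u + d*v + t2 = y
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = np.array([[params[0], params[1]], [params[2], params[3]]])
    t = params[4:6]
    return M, t

def ransac_affine(query_pts, match_pts, n_iter=500, tol=5.0):
    """Repeatedly sample 3 matched pairs, estimate an affine transform, count consistent matches."""
    best_inliers, best_model = 0, None
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        idx = rng.choice(len(query_pts), size=3, replace=False)
        M, t = estimate_affine(query_pts[idx], match_pts[idx])
        proj = query_pts @ M.T + t
        inliers = int(np.sum(np.linalg.norm(proj - match_pts, axis=1) < tol))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (M, t)
    return best_model, best_inliers
```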
In studying the above scheme, the inventors found that when the local-feature matches contain a large number of erroneous matches, a wrong affine transformation may be obtained, which harms retrieval accuracy. At the same time, this scheme needs a large number of sampling rounds before three correct matched local-feature pairs are likely to be drawn together and a correct affine transformation estimated, so the above image retrieval scheme based on full geometric verification is very time-consuming.
Summary of the invention
To solve the above technical problems, the object of the present invention is to provide an image retrieval method, device and system, so as to solve the problems of low retrieval accuracy, long retrieval time and low retrieval efficiency in conventional image retrieval schemes.
The image retrieval method provided by the invention comprises:
Extracting the local features of a query image, and quantizing the local features into visual words;
Querying a preset visual-word inverted index of an image database with the visual words, to obtain matched local-feature pairs and matched images;
Spatially encoding the relative positions between the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encoding the relative positions between the matched local features in the matched image to obtain the spatial coding maps of the matched image. To impose a stronger geometric constraint, each local feature may in turn serve as the reference origin, with each quadrant of the image plane evenly divided into several sector regions before the spatial encoding is performed. The spatial coding maps describe the relative positions of the local features of an image with respect to one another;
Performing a spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image, to obtain the number of matched local-feature pairs that satisfy spatial consistency;
Computing, from the number of spatially consistent matched local-feature pairs of each matched image, the similarity between that matched image and the query image, and returning the matched images according to the similarity.
Corresponding to the above image retrieval method, the present invention also provides an image retrieval apparatus, comprising:
A first feature extraction module, for extracting the local features of an image to be queried;
A first feature quantization module, for quantizing the local features into visual words;
A query module, for querying the preset visual-word inverted index of the image database with the visual words, to obtain matched local-feature pairs and matched images;
A spatial encoding module, for spatially encoding the relative positions between the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encoding the relative positions between the matched local features in the matched image to obtain the spatial coding maps of the matched image;
A spatial-consistency checking module, for performing a spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image, to obtain the number of matched local-feature pairs that satisfy spatial consistency;
A retrieval result returning module, for computing, from the number of spatially consistent matched local-feature pairs of each matched image, the similarity between that matched image and the query image, and returning the matched images according to the similarity.
In addition, the present invention also provides an image retrieval system, comprising:
The above image retrieval apparatus; and
An image database, for storing the images against which retrieval matching is performed.
With the technical solutions provided by the embodiments of the invention, in the image retrieval method, apparatus and system, the spatial relationships between the matched local features of the query image and of the matched image are each encoded into spatial coding maps, and by checking whether the spatial relationships of the matched local features are consistent, the correct matches that satisfy spatial consistency are obtained. A large number of potentially erroneous local-feature matches are thus effectively excluded, the similarity defined between the query image and the matched images in the image database becomes more accurate, and retrieval accuracy can be improved. At the same time, the computational complexity of the spatial-consistency check algorithm is low, so compared with the prior-art scheme the retrieval time can be reduced and retrieval efficiency improved.
Description of drawings
To describe the technical solutions of the embodiments of the invention or of the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of the image retrieval method provided in Embodiment 1 of the invention;
Fig. 2 is a schematic diagram of the spatial relationships between matched local features in Embodiment 2 of the invention;
Fig. 3 is a schematic flow chart of determining the spatial position of a matched local feature in Embodiment 2 of the invention;
Fig. 4 is another schematic diagram of the spatial relationships between matched local features in Embodiment 2 of the invention;
Fig. 5 is a schematic structural diagram of the image retrieval apparatus provided in Embodiment 4 of the invention;
Fig. 6 is a schematic structural diagram of the modules that build the inverted index in Embodiment 4 of the invention;
Fig. 7 is a schematic structural diagram of the spatial-consistency checking module provided in Embodiment 4 of the invention;
Fig. 8 is a schematic diagram of one way of operating the image retrieval system provided in Embodiment 5 of the invention.
Detailed description of the embodiments
To solve the low retrieval accuracy, long retrieval time and low retrieval efficiency of conventional image retrieval schemes, the embodiments of the invention provide an image retrieval method, device and system.
The image retrieval method comprises: extracting the local features of a query image and quantizing the local features into visual words; querying a preset visual-word inverted index of an image database with the visual words to obtain matched local-feature pairs and matched images; spatially encoding the relative positions of the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encoding the relative positions of the matched local features in the matched image to obtain the spatial coding maps of the matched image; performing a spatial-consistency check between the spatial coding maps of the query image and of the matched image to obtain the number of matched local-feature pairs that satisfy spatial consistency; and returning the matched images, ranked by their similarity, according to the numbers of spatially consistent matched local-feature pairs of the different matched images.
The above is the core idea of the present application. The technical solutions of the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the invention.
Embodiment 1:
Referring to Fig. 1, which is a schematic flow chart of the image retrieval method provided in this embodiment, the method comprises the following steps:
Step S101: extract the local features of the query image;
Step S102: quantize the local features into visual words;
Here, the visual words can be defined as follows: local features are extracted from training images and clustered, and the center of each cluster is taken as a visual word. Multiple cluster centers yield multiple visual words, and a visual-word codebook is generated that records the local features, the visual words and the correspondence between them. Because visual words carry no explicit semantic information, the size of the codebook, i.e. the number of visual words it contains, is generally determined empirically through experiments on test data. Alternatively, this embodiment may directly reuse a visual-word codebook already defined in the same or a closely related technical field.
After the local features of the image to be queried are extracted, the visual-word codebook is consulted for each of them, and according to the distance between the local feature and each visual word in the codebook, the local feature is quantized to the visual word with the smallest distance.
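As an illustration of this quantization step, the following is a minimal sketch rather than code from the patent; it assumes descriptors and codebook are plain arrays, and names such as quantize are ours. The codebook itself could, for example, come from k-means clustering of training descriptors.

```python
import numpy as np

def quantize(descriptors, codebook):
    """Map each local-feature descriptor (n x d) to the id of its nearest visual word.
    codebook: (V x d) array of cluster centers, e.g. obtained by k-means on training descriptors."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, V) squared distances
    return d2.argmin(axis=1)  # visual-word id per descriptor
```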
Step S103: query the preset visual-word inverted index of the image database with the visual words, to obtain matched local-feature pairs and matched images;
To avoid unnecessary comparisons between the image to be queried and completely unrelated images in the image database, this embodiment borrows the idea of the textual-word inverted index used in text retrieval. The basic idea is: for each visual word, generate a list in which each entry is associated with an image that contains this visual word, and optionally also with the frequency of this visual word in that image. This list is called the visual-word inverted index. Through the inverted index one can learn which images contain each visual word; furthermore, given an image to be queried, it suffices to look up the inverted-index entries of the visual words contained in that image to find the matched images that share matched local-feature pairs with it. An image to be queried may have one or more matched images, and there may be one or more matched local-feature pairs between the image to be queried and a matched image. In this embodiment, a matched local-feature pair refers to local features of the image to be queried and of the matched image that are quantized to the same visual word.
The visual-word inverted index of the image database can be built as follows:
Extract the local features of the images in the image database, and quantize the local features into visual words;
Associate the visual words with the images in the image database, to obtain the visual-word inverted index.
Here, the local features of every image in the image database can be extracted, and all local features quantized into visual words.
An image in the image database can be associated with one or more visual words; each visual word corresponds to one or more entries of the inverted index, each entry can be associated with one or more images, and each entry can further record the frequency of this visual word in the image.
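To make the inverted-index idea concrete, here is a minimal in-memory sketch under the same assumptions; it is illustrative only, and all names (build_inverted_index, query, db_images) are ours.

```python
from collections import defaultdict

def build_inverted_index(db_images):
    """db_images: dict image_id -> list of (feature_id, visual_word_id) for that image."""
    index = defaultdict(list)  # visual word -> list of (image_id, feature_id)
    for image_id, features in db_images.items():
        for feature_id, word in features:
            index[word].append((image_id, feature_id))
    return index

def query(index, query_features):
    """query_features: list of (feature_id, visual_word_id) of the query image.
    Returns dict image_id -> list of (query_feature_id, db_feature_id) matched pairs."""
    matches = defaultdict(list)
    for q_feat, word in query_features:
        for image_id, db_feat in index.get(word, []):
            matches[image_id].append((q_feat, db_feat))
    return matches
```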
Step S104: spatially encode the relative positions between the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encode the relative positions between the matched local features in the matched image to obtain the spatial coding maps of the matched image;
Here, the spatial encoding describes the relative spatial positions between the matched local features obtained in step S103 within the query image, and likewise describes the relative spatial positions between the matched local features obtained in step S103 within the matched image.
Step S105: perform a spatial-consistency check on the spatial coding maps of the matched local features in the query image and in the matched image, to obtain the number of matched local-feature pairs that satisfy spatial consistency;
If two images are partial duplicates, that is, they share a certain image block region, then within the duplicated block the local features of the two images must be consistent in their relative spatial positions, while a match that violates this consistency very likely corresponds to an erroneous matched local-feature pair and should be deleted. This spatial-consistency detection can be carried out by an XOR operation on the coordinate codes of the matched local features contained in the spatial coding maps.
Step S106: from the number of spatially consistent matched local-feature pairs of each matched image, compute the similarity between that matched image and the query image, and return the matched images according to the similarity.
Suppose that after the matching query the query image q and a matched image p have a matched local-feature pairs, of which b pairs have passed the spatial-consistency check. The similarity of q and p can then be defined as in formula (2):
$$S(p,q) = b - \frac{a-b+1}{a} \cdot \frac{N(p)}{N_{\max}} \qquad (2)$$
where N(p) denotes the total number of local features in the matched image p, and N_max denotes the maximum number of local features that an image in the image database may contain.
In this step, the matched images in the database can be sorted by their similarity to the query image and returned to the user. Alternatively, a threshold on the number of spatially consistent matched local-feature pairs can be set: when the number of spatially consistent matched local-feature pairs of a matched image reaches the threshold, the matched image is determined to be relevant to the query image and is returned to the querying user. Or a threshold on the similarity can be set: when the similarity of a matched image reaches the threshold, the matched image is determined to be relevant to the query image and is returned to the querying user.
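The ranking step can be sketched as follows, using the similarity of formula (2) as reconstructed above; the counts a, b, N(p) and N_max are assumed to come from the preceding steps, and the function names are ours, not the patent's.

```python
def similarity(a, b, n_p, n_max):
    """a: matched pairs, b: pairs passing the spatial-consistency check,
    n_p: local-feature count of the matched image, n_max: largest feature count in the database."""
    return b - (a - b + 1) / a * n_p / n_max  # formula (2)

def rank(candidates, n_max):
    """candidates: dict image_id -> (a, b, n_p). Returns image ids sorted by decreasing similarity."""
    scored = {img: similarity(a, b, n_p, n_max) for img, (a, b, n_p) in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```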
In the image retrieval method provided in this embodiment, the spatial relationships between the matched local features of the matched image and of the query image are each encoded into spatial coding maps, and by checking whether the spatial relationships of the matched local features are consistent, the correct matches that satisfy spatial consistency are obtained. A large number of potentially erroneous local-feature matches are thus effectively excluded, the similarity defined between the query image and the matched images in the image database becomes more accurate, and retrieval accuracy can be improved. At the same time, the computational complexity of the spatial-consistency check algorithm is low, so compared with the prior-art scheme the retrieval time can be reduced and retrieval efficiency improved.
Embodiment 2:
This embodiment provides an implementation of spatially encoding the relative spatial positions of the local features in an image, specifically as follows:
Describe the relative position relationship of two matched local features in the horizontal direction, to obtain the horizontal spatial coding map;
Describe the relative position relationship of two matched local features in the vertical direction, to obtain the vertical spatial coding map.
For a query image or a matched image, two spatial coding maps are generated, denoted X-map and Y-map respectively. X-map describes the relative spatial position along the horizontal direction (x axis) between every two matched local features, and Y-map describes the relative spatial position along the vertical direction (y axis) between every two matched local features. For example, given a query image I with K matched local features {v_i} (i = 1, 2, ..., K) shared with a matched image, both the X-map and the Y-map of I are K x K binary matrices, defined as in formulas (3) and (4):
$$Xmap(i,j) = \begin{cases} 0 & \text{if } x_i < x_j \\ 1 & \text{if } x_i \ge x_j \end{cases} \qquad (3)$$

$$Ymap(i,j) = \begin{cases} 0 & \text{if } y_i < y_j \\ 1 & \text{if } y_i \ge y_j \end{cases} \qquad (4)$$
As shown on the left of Fig. 2, the query image contains 4 matched local features. The right of Fig. 2 shows the relative spatial positions of matched local features 1, 3 and 4 with respect to matched local feature 2: matched local feature 1 lies to the upper left of matched local feature 2, matched local feature 3 lies to the lower right of matched local feature 2, and matched local feature 4 lies to the lower left of matched local feature 2. From these relative spatial positions and formulas (3) and (4), the two spatial coding maps of this query image are obtained as in formulas (5) and (6):
$$Xmap = \begin{bmatrix} 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5)$$

$$Ymap = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6)$$
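The basic coding of formulas (3) and (4) can be sketched as follows. This is a minimal illustration with names of our choosing, assuming the (x, y) coordinates of the K matched local features of one image are given; applied to the four features of Fig. 2 with the corresponding coordinates it would reproduce matrices of the form of (5) and (6).

```python
import numpy as np

def spatial_maps(coords):
    """coords: (K, 2) array of matched-feature coordinates in one image.
    Returns the K x K binary maps X-map and Y-map of formulas (3) and (4)."""
    x, y = coords[:, 0], coords[:, 1]
    xmap = (x[:, None] >= x[None, :]).astype(np.uint8)  # Xmap(i, j) = 1 iff x_i >= x_j
    ymap = (y[:, None] >= y[None, :]).astype(np.uint8)  # Ymap(i, j) = 1 iff y_i >= y_j
    return xmap, ymap
```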
In addition, to generalize this spatial division and coding, the relative position relationships between the matched local feature points are further refined: with a reference feature point as the center, the image plane is first divided into four quadrants, each quadrant is then evenly divided into several sector regions, and the sector region in which each of the other local feature points falls is determined. A specific implementation may proceed as follows. As shown in the schematic flow chart of Fig. 3, before the spatial encoding step the image retrieval method provided in this embodiment may further comprise the following steps:
Step S301: taking the position of each matched local feature in the image as the reference origin in turn, divide the image containing the matched local feature into four quadrants;
Step S302: evenly divide each quadrant into several sector regions, and determine the quadrant and sector region in which each of the other matched local feature points lies relative to the reference origin;
Step S303: rotate the coordinates of each matched local feature about the reference origin, to obtain the new coordinates of that matched local feature;
Step S304: from the new coordinates of the matched local feature, obtain its spatial position relative to the reference origin.
In this embodiment, after the plane of the query image or matched image is divided into four quadrants, each quadrant is further evenly divided into several sector regions. When implementing the algorithm, this division can be decomposed into r sub-divisions. For each sub-division, after a counter-clockwise rotation by the angle θ, spatial encoding is performed with formulas (3) and (4); finally, two generalized three-dimensional spatial coding maps GX and GY are obtained.
Specifically, the coordinates (x_i, y_i) of a matched local feature v_i in the query image or matched image are rotated counter-clockwise about the image origin by the angle θ = k·(90°/r), k = 0, 1, ..., r-1, giving the new coordinates (x_i^k, y_i^k) of that matched local feature, as in formula (7):

$$\begin{bmatrix} x_i^k \\ y_i^k \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \end{bmatrix} \qquad (7)$$
Then, from the new coordinates (x_i^k, y_i^k), the two generalized three-dimensional spatial coding maps GX and GY are obtained, as shown in formulas (8) and (9).
$$GX(i,j,k) = \begin{cases} 0 & \text{if } x_i^k < x_j^k \\ 1 & \text{if } x_i^k \ge x_j^k \end{cases} \qquad (8)$$

$$GY(i,j,k) = \begin{cases} 0 & \text{if } y_i^k < y_j^k \\ 1 & \text{if } y_i^k \ge y_j^k \end{cases} \qquad (9)$$
Here, x_i^k and x_j^k are the abscissas of the i-th and j-th matched local features after the k-th rotation, and y_i^k and y_j^k are their ordinates after the k-th rotation, as given by formula (7). In effect, the three-dimensional spatial coding maps GX and GY describe the relative position relationships between the matched local features in finer detail: they indicate not only in which quadrant of the image plane, divided with respect to the reference feature point, a given feature lies, but also in which sector region of that quadrant it lies.
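The generalized coding of formulas (7) to (9) can be sketched as below. It is a minimal illustration with names of our choosing; the rotation angle θ = k·(90°/r) follows the reconstruction above (consistent with the 45-degree, r = 2 example of Fig. 4).

```python
import numpy as np

def generalized_maps(coords, r):
    """coords: (K, 2) array of matched-feature coordinates; r: sector regions per quadrant.
    Returns the K x K x r binary maps GX and GY of formulas (8) and (9)."""
    K = len(coords)
    gx = np.zeros((K, K, r), dtype=np.uint8)
    gy = np.zeros((K, K, r), dtype=np.uint8)
    for k in range(r):
        theta = np.deg2rad(90.0 * k / r)
        rot = np.array([[np.cos(theta), np.sin(theta)],
                        [-np.sin(theta), np.cos(theta)]])  # rotation of formula (7)
        xk, yk = (coords @ rot.T).T                        # rotated coordinates (x_i^k, y_i^k)
        gx[:, :, k] = xk[:, None] >= xk[None, :]           # formula (8)
        gy[:, :, k] = yk[:, None] >= yk[None, :]           # formula (9)
    return gx, gy
```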
As shown in Fig. 4, to illustrate the division into sector regions more intuitively, the image is first divided into four quadrants, and each quadrant is then divided into several uniform sector regions. The left part of Fig. 4 shows each quadrant of the image divided evenly into two sector regions, so that the whole image is divided into eight equal sectors. This division can be decomposed into two sub-divisions, shown respectively in the upper and lower halves of the middle part of Fig. 4: the upper half can be encoded with formulas (3) and (4), while the lower half, after a counter-clockwise rotation by 45 degrees as shown on the right of Fig. 4, can be encoded with formulas (8) and (9).
The technical solution provided in this embodiment is one way of spatially encoding the relative spatial position relationships of the local features in an image. In practice, those skilled in the art may choose other ways of spatially encoding the relative spatial positions of different local features. The technical solutions provided in this embodiment and in Embodiment 1 can be cross-referenced, and details are not repeated here.
Embodiment 3:
This embodiment provides an implementation of performing the spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image, specifically as follows:
According to the matched-local-feature quantization, if the query image I_q and the matched image I_m have N matched local-feature pairs, then in step S104 of Embodiment 1 the relative spatial positions of these matched local features in I_q and in I_m are spatially encoded, giving the spatial coding maps (GX_q, GY_q) of the query image and (GX_m, GY_m) of the matched image. To compare the consistency of the relative spatial positions of the matched local features between the query image and the matched image, XOR operations can be performed on GX_q and GX_m, and on GY_q and GY_m, respectively, as in formulas (10) and (11):
$$V_x(i,j,k) = GX_q(i,j,k) \oplus GX_m(i,j,k) \qquad (10)$$

$$V_y(i,j,k) = GY_q(i,j,k) \oplus GY_m(i,j,k) \qquad (11)$$
In addition, this embodiment provides an implementation of obtaining the number of matched local-feature pairs that satisfy spatial consistency, specifically as follows:
Ideally, if all N matched local-feature pairs are correct, every element of V_x and V_y will be zero. If, however, some erroneous matched local-feature pairs exist, the values corresponding to the erroneous matches will be inconsistent between GX_q and GX_m, and likewise between GY_q and GY_m. Such inconsistent values cause the corresponding values of the XOR results V_x and V_y to be 1.
Let
$$S_x(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_x(i,j,k), \qquad S_y(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_y(i,j,k).$$
If there exists an i such that S_x(i) > 0, let i* = argmax_i S_x(i); the i*-th matched local feature very likely corresponds to an erroneous match and should be deleted. The same operation is performed for S_y. This deletion is carried out iteratively until the values in S_x and S_y corresponding to the remaining matched feature pairs are all zero.
The number of remaining matched local-feature pairs is then the number of matched local-feature pairs of the query image and the corresponding matched image that satisfy spatial consistency.
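The spatial-consistency check of this embodiment can be sketched as below. It is a minimal illustration with names of our choosing: the XOR of formulas (10) and (11) is followed by iterative removal of the most inconsistent match; for brevity the sketch merges the two removal passes into a single argmax over S_x + S_y, a simplification of the rule reconstructed above.

```python
import numpy as np

def spatial_consistency_count(gx_q, gy_q, gx_m, gy_m):
    """All inputs are K x K x r binary maps; returns the number of surviving matched pairs."""
    vx = np.bitwise_xor(gx_q, gx_m)  # formula (10)
    vy = np.bitwise_xor(gy_q, gy_m)  # formula (11)
    alive = np.ones(vx.shape[0], dtype=bool)
    while True:
        idx = np.where(alive)[0]
        if idx.size == 0:
            break
        sx = vx[np.ix_(idx, idx)].any(axis=2).sum(axis=1)  # S_x over remaining pairs
        sy = vy[np.ix_(idx, idx)].any(axis=2).sum(axis=1)  # S_y over remaining pairs
        if sx.max() == 0 and sy.max() == 0:
            break                                          # all remaining pairs consistent
        worst = idx[np.argmax(sx + sy)]                    # drop the most inconsistent match
        alive[worst] = False
    return int(alive.sum())
```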
The technical solution provided in this embodiment is one way of performing the spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image. In practice, those skilled in the art may choose other ways of performing the spatial-consistency check on the matched local-feature pairs. The technical solutions provided in this embodiment and in Embodiment 1 can be cross-referenced, and details are not repeated here.
Embodiment 4:
Corresponding to the above image retrieval method, this embodiment also provides an image retrieval apparatus, as shown in Fig. 5, comprising:
A first feature extraction module 501, for extracting the local features of the image to be queried;
A first feature quantization module 502, for quantizing the local features into visual words;
A query module 503, for querying the preset visual-word inverted index of the image database with the visual words, to obtain matched local-feature pairs and matched images;
A spatial encoding module 504, for spatially encoding the relative positions between the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encoding the relative positions between the matched local features in the matched image to obtain the spatial coding maps of the matched image;
A spatial-consistency checking module 505, for performing a spatial-consistency check on the spatial coding maps of the query image and of the matched image, to obtain the number of matched local-feature pairs that satisfy spatial consistency;
A retrieval result returning module 506, for computing, from the number of spatially consistent matched local-feature pairs of each matched image, the similarity between that matched image and the query image, and returning the matched images according to the similarity.
In this embodiment, the visual words and the inverted index of the image database can be generated by the following modules; referring to Fig. 6, which is a schematic structural diagram of these modules, they specifically comprise:
A visual-word generation module 601, for extracting local features from training images, clustering the local features, and taking the center of each cluster as a visual word;
A second feature extraction module 602, for extracting the local features of the images in the image database;
A second feature quantization module 603, for quantizing the local features extracted from the images in the image database into visual words;
An inverted index building module 604, for associating the visual words with the images in the image database, to obtain the visual-word inverted index.
The relative position relationship between two local features can be decomposed into a relative position relationship in the horizontal direction and one in the vertical direction, so the spatial encoding module 504 may specifically comprise:
A first encoding unit, for describing the relative position relationship of two matched local features in the horizontal direction, to obtain the horizontal spatial coding map;
A second encoding unit, for describing the relative position relationship of two matched local features in the vertical direction, to obtain the vertical spatial coding map.
The first encoding unit may specifically obtain the horizontal spatial coding map by the encoding
$$Xmap(i,j) = \begin{cases} 0 & \text{if } x_i < x_j \\ 1 & \text{if } x_i \ge x_j; \end{cases}$$
the second encoding unit may specifically obtain the vertical spatial coding map by the encoding
$$Ymap(i,j) = \begin{cases} 0 & \text{if } y_i < y_j \\ 1 & \text{if } y_i \ge y_j; \end{cases}$$
where x_i and x_j are the abscissas of the two matched local features, and y_i and y_j are their ordinates.
Through the operation of the spatial encoding module, a query image and a matched image yield: the horizontal spatial coding map of the query image, the vertical spatial coding map of the query image, the horizontal spatial coding map of the matched image, and the vertical spatial coding map of the matched image.
In addition, the image retrieval apparatus may also comprise:
A region dividing module, for dividing the image containing the matched local features into four quadrants with the coordinates of each matched local feature in turn as the origin, and then evenly dividing each quadrant into r sector regions;
A coordinate determining module, for rotating the coordinates of each matched local feature about the image origin, to obtain the new coordinates of that matched local feature;
A spatial position determining module, for obtaining, from the new coordinates of a matched local feature, its spatial position relative to the reference origin.
The coordinate determining module may specifically determine the new, rotated coordinates of a matched local feature about the reference origin as follows:
the coordinates (x_i, y_i) of a matched local feature contained in the image are rotated counter-clockwise about the reference origin by θ = k·(90°/r), k = 0, 1, ..., r-1, giving the new coordinates (x_i^k, y_i^k) of that matched local feature as shown in the following formula:
$$\begin{bmatrix} x_i^k \\ y_i^k \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \end{bmatrix};$$
where r is the number of sector regions per quadrant, and i = 0, 1, ..., N-1, N being the total number of matched feature pairs.
The spatial encoding module may specifically spatially encode the relative spatial positions between the matched local features in the image as follows:
the first encoding unit obtains the horizontal spatial coding map by the encoding
$$GX(i,j,k) = \begin{cases} 0 & \text{if } x_i^k < x_j^k \\ 1 & \text{if } x_i^k \ge x_j^k; \end{cases}$$
the second encoding unit obtains the vertical spatial coding map by the encoding
$$GY(i,j,k) = \begin{cases} 0 & \text{if } y_i^k < y_j^k \\ 1 & \text{if } y_i^k \ge y_j^k; \end{cases}$$
where x_i^k and x_j^k are the abscissas of the i-th and j-th matched local features after the k-th rotation, and y_i^k and y_j^k are their ordinates after the k-th rotation. The three-dimensional spatial coding maps GX and GY describe the relative position relationships between the matched local features in finer detail: they indicate not only in which quadrant of the image plane, divided with respect to the reference feature point, a given feature lies, but also in which sector region of that quadrant it lies.
As shown in the structural diagram of Fig. 7, the spatial-consistency checking module 505 may specifically comprise:
An XOR unit 505a, for performing an XOR operation on the coordinate codes of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image;
An erroneous-match deletion unit 505b, for determining, from the result of the XOR operation on the coordinate codes of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image, the matched feature pairs that satisfy spatial consistency, and for determining and deleting the matched feature pairs that do not satisfy spatial consistency; the sum of the XOR results corresponding to the coordinate codes of the finally retained, spatially consistent matched local-feature pairs is 0;
A match-count determining unit 505c, for determining the number of remaining correct matched feature pairs that satisfy spatial consistency.
The XOR unit specifically performs the XOR operation on the coordinate codes of the matched local-feature pairs as shown in the following formulas:
$$V_x(i,j,k) = GX_q(i,j,k) \oplus GX_m(i,j,k);$$
$$V_y(i,j,k) = GY_q(i,j,k) \oplus GY_m(i,j,k);$$
where (GX_q, GY_q) are the spatial coding maps of the query image and (GX_m, GY_m) are the spatial coding maps of the matched image.
The erroneous-match deletion unit obtains and deletes the matched local-feature pairs that do not satisfy spatial consistency as follows:
let $S_x(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_x(i,j,k)$ and $S_y(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_y(i,j,k)$;
if there exists an i such that S_x(i) > 0, let i* = argmax_i S_x(i); the i*-th matched local feature very likely corresponds to an erroneous match and is deleted. The same operation is performed for S_y. This deletion is carried out iteratively until the values in S_x and S_y corresponding to the remaining matched feature pairs are all zero.
The match-count determining unit obtains the number of remaining matched feature pairs that satisfy the requirement, as the number of matched local-feature pairs that satisfy spatial consistency.
In addition, the retrieval result returning module 506 may specifically comprise:
A similarity computing unit, for computing the similarity between a matched image and the query image from the number of spatially consistent matched feature pairs and from the numbers of local features contained in the matched image and in the query image;
The similarity computing unit may specifically compute the similarity between the matched image and the query image as follows:
if, after the query, the query image q and the matched image p have a matched local-feature pairs, of which b pairs have passed the spatial-consistency check, then the similarity S(p, q) of the query image q and the matched image p is given by
$$S(p,q) = b - \frac{a-b+1}{a} \cdot \frac{N(p)}{N_{\max}};$$
where N(p) denotes the total number of local features in the matched image p, and N_max denotes the maximum number of local features that an image in the image database may contain.
A matched-image returning unit, for sorting the matched images by similarity and returning the matched images according to the sorted order.
This embodiment is the apparatus embodiment corresponding to the foregoing method embodiments; for its implementation, reference can be made to the description of the method embodiments, and details are not repeated here.
In the image retrieval apparatus provided in this embodiment, the spatial relationships between the matched local features of the matched image and of the query image are each encoded into spatial coding maps, and by checking whether the spatial relationships of the matched local features are consistent, the correct matches that satisfy spatial consistency are obtained. A large number of potentially erroneous local-feature matches are thus effectively excluded, the similarity defined between the query image and the matched images in the image database becomes more accurate, and retrieval accuracy can be improved; at the same time, the computational complexity of the spatial-consistency check algorithm is low, so compared with the prior-art scheme the retrieval time can be reduced and retrieval efficiency improved.
Embodiment 5:
Corresponding to the above image retrieval method and image retrieval apparatus, this embodiment also provides an image retrieval system, specifically comprising:
The image retrieval apparatus described above; and
An image database, for storing the images against which retrieval matching is performed.
Further, the image retrieval system may also comprise:
An image collecting module, for obtaining new images from the network and storing them into the image database; the image collecting module can continuously expand the original image database by downloading pictures, or the URLs (Uniform/Universal Resource Locators) of pictures, through a web crawler;
An inverted index updating module, for obtaining the inverted-index entries corresponding to the visual words contained in the new images, and updating the image associations in the inverted index.
Fig. 8 is a schematic diagram of one way of operating this system.
This embodiment is the system embodiment corresponding to the foregoing method and apparatus embodiments; for its implementation, reference can be made to the description of the method and apparatus embodiments, and details are not repeated here.
In the image retrieval method, apparatus and system provided by the embodiments of the invention, the spatial relationships between the matched local features of the matched image and of the query image are each encoded into spatial coding maps, and by checking whether the spatial relationships of the matched local features are consistent, the correct matches that satisfy spatial consistency are obtained. A large number of potentially erroneous local-feature matches are thus effectively excluded, the similarity defined between the query image and the matched images in the image database becomes more accurate, and retrieval accuracy can be improved; at the same time, the computational complexity of the spatial-consistency check algorithm is low, so compared with the prior-art scheme the retrieval time can be reduced and retrieval efficiency improved.
For the apparatus and system embodiments of the invention, since they substantially correspond to the method embodiments, the relevant parts can refer to the description of the method embodiments. The apparatus embodiments described above are only illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over several devices. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
In the several embodiments provided in this application, it should be understood that the disclosed method, apparatus and system can be implemented in other ways without exceeding the spirit and scope of the application. The present embodiments are exemplary and should not be taken as limiting, and the specific content given should in no way limit the purpose of the application. For example, the division of the units or sub-units is only a division by logical function, and other divisions are possible in actual implementation; for example, several units or sub-units may be combined. In addition, several units or components may be combined or integrated into another system, or some features may be ignored or not carried out.
Furthermore, the described method, apparatus and system and their different embodiments may, within the scope of the application, be combined or integrated with other systems, modules, techniques or methods. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be implemented through certain interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or of other forms.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts the embodiments can refer to one another. The above description of the disclosed embodiments enables professionals in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An image retrieval method, characterized by comprising:
extracting the local features of a query image, and quantizing the local features of the query image into visual words;
extracting the local features of the images in an image database, and quantizing the local features of the images in the image database into visual words;
establishing associations between the visual words of the images in the image database and the images in the image database, to obtain a visual-word inverted index;
querying the preset visual-word inverted index of the image database with the visual words of the query image, to obtain matched local-feature pairs and matched images;
spatially encoding the relative positions between the matched local features in the query image to obtain the spatial coding maps of the query image, and spatially encoding the relative positions between the matched local features in the matched image to obtain the spatial coding maps of the matched image;
performing a spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image, to obtain the number of matched local-feature pairs that satisfy spatial consistency;
computing, from the number of spatially consistent matched local-feature pairs of each matched image, the similarity between that matched image and the query image, and returning the matched images according to the similarity;
wherein said querying the preset visual-word inverted index of the image database with the visual words of the query image comprises:
looking up the visual words of the query image in the visual-word inverted index, and determining all images in the image database that are associated with a visual word contained in the query image as the matched images of the query image;
establishing the matching relationships between the local features of the query image and of the matched images;
wherein said computing the similarity between the matched image and the query image comprises:
if, after the query, the query image q and the matched image p have a matched local-feature pairs, of which b pairs have passed the spatial-consistency check, then the similarity S(p, q) of the query image q and the matched image p is given by
$$S(p,q) = b - \frac{a-b+1}{a} \cdot \frac{N(p)}{N_{\max}}$$
where N(p) denotes the total number of local features in the matched image p, and N_max denotes the maximum number of local features that an image in the image database may contain;
wherein spatially encoding the relative positions between the matched local features specifically comprises:
describing the relative position relationship of two matched local features in the horizontal direction, to obtain a horizontal spatial coding map;
describing the relative position relationship of two matched local features in the vertical direction, to obtain a vertical spatial coding map;
wherein, before spatially encoding the relative positions of the matched local features, the method further comprises:
taking each matched local feature in turn as a reference and regarding its position in the image as the reference origin, dividing the image containing the matched local features into four quadrants;
evenly dividing each quadrant into several sector regions, and determining the quadrant and sector region in which each of the other matched local features lies relative to the reference origin;
rotating the coordinates of each matched local feature about the reference origin, to obtain the new coordinates of that matched local feature;
obtaining, from the new coordinates of the matched local feature, its spatial position relative to the reference origin;
wherein said rotating the coordinates of each matched local feature about the reference origin to obtain the new coordinates of that matched local feature specifically comprises:
rotating the coordinates (x_i, y_i) of a matched local feature contained in the image counter-clockwise about the reference origin by θ = k·(90°/r), k = 0, 1, ..., r-1, to obtain the new coordinates (x_i^k, y_i^k) of that matched local feature as shown in the following formula:
$$\begin{bmatrix} x_i^k \\ y_i^k \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x_i \\ y_i \end{bmatrix},$$
where r is the number of sector regions per quadrant, and i = 0, 1, ..., N-1, N being the total number of matched feature pairs;
wherein the spatial coding maps obtained by spatially encoding the relative positions between the matched local features in the image comprise:
the horizontal spatial coding map
$$GX(i,j,k) = \begin{cases} 0 & \text{if } x_i^k < x_j^k \\ 1 & \text{if } x_i^k \ge x_j^k; \end{cases}$$
and the vertical spatial coding map
$$GY(i,j,k) = \begin{cases} 0 & \text{if } y_i^k < y_j^k \\ 1 & \text{if } y_i^k \ge y_j^k; \end{cases}$$
where x_i^k and x_j^k are the abscissas of the i-th and j-th matched local features after the k-th rotation, and y_i^k and y_j^k are their ordinates after the k-th rotation; the quadrant and sector region in which each matched local feature lies, relative to the reference origin, after the image is divided are determined through the horizontal spatial coding map GX and the vertical spatial coding map GY.
2. The method according to claim 1, characterized by further comprising:
extracting local features from training images, clustering the local features, and taking the center of each cluster as a visual word.
3. The method according to claim 1, characterized in that:
performing the spatial-consistency check on the coordinates of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image specifically comprises:
performing an XOR operation on the coordinate codes of the matched local-feature pairs in the spatial coding maps of the query image and of the matched image;
and obtaining the number of matched local-feature pairs that satisfy spatial consistency specifically comprises:
determining and deleting all matched local-feature pairs that do not satisfy spatial consistency;
determining the number of remaining matched local-feature pairs as the number of matched local-feature pairs that satisfy spatial consistency.
4. The method according to claim 3, characterized in that:
the XOR operation on the spatial coordinate positions of the matching local feature pairs is specifically performed as follows:
V_x(i, j, k) = GX_q(i, j, k) \oplus GX_m(i, j, k);
V_y(i, j, k) = GY_q(i, j, k) \oplus GY_m(i, j, k);
where (GX_q, GY_q) is the query-image spatial coding map and (GX_m, GY_m) is the matching-image spatial coding map; all matching local feature pairs that do not satisfy spatial consistency are determined and deleted as follows:
let S_x(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_x(i, j, k), S_y(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_y(i, j, k);
if there exists i such that S_x(i) > 0 or S_y(i) > 0, define i^* = \arg\max_i S_x(i) or i^* = \arg\max_i S_y(i), and delete the i^*-th local matching feature pair;
the above deletion operation is performed iteratively until the S_x and S_y values corresponding to all remaining matching feature pairs are zero;
the number of remaining matching feature pairs that satisfy the requirement is taken as the number of matching local feature pairs that satisfy spatial consistency.
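A minimal Python sketch of the XOR comparison and iterative deletion described in claims 3 and 4 is given below; it assumes the union over k is a logical OR and breaks ties for i^* by the combined S_x + S_y score (the claims only require picking an index with a positive score). It is an illustration under those assumptions, not the claimed implementation.

import numpy as np

def spatial_consistency_filter(GXq, GYq, GXm, GYm):
    """Iteratively remove matched feature pairs that violate spatial consistency.

    GXq, GYq : spatial coding maps of the query image, shape (N, N, r)
    GXm, GYm : spatial coding maps of the matching image, same shape
    Returns the indices of the matched pairs that survive the check.
    """
    Vx = np.bitwise_xor(GXq, GXm)            # V_x(i, j, k)
    Vy = np.bitwise_xor(GYq, GYm)            # V_y(i, j, k)
    alive = list(range(Vx.shape[0]))
    while alive:
        sub = np.ix_(alive, alive)
        # S_x(i): sum over j of the OR over k of V_x(i, j, k); S_y is analogous
        Sx = Vx[sub].any(axis=2).sum(axis=1)
        Sy = Vy[sub].any(axis=2).sum(axis=1)
        if Sx.max() == 0 and Sy.max() == 0:
            break                            # every remaining pair is consistent
        worst = int(np.argmax(Sx + Sy))      # assumed tie-break between S_x and S_y
        del alive[worst]                     # delete the i*-th matching pair
    return alive                             # len(alive) corresponds to b in the claims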
5. An image retrieval device, characterized by comprising:
a first feature extraction module, for extracting the local features of an image to be queried;
a first feature quantization module, for quantizing said local features into visual words;
a second feature extraction module, for extracting the local features of the images in an image database;
a second feature quantization module, for quantizing the local features extracted from the images in the image database into visual words;
an inverted-list building module, for associating the visual words of said query image with the images in said image database to obtain a visual-word inverted list;
a query module, for querying a preset visual-word inverted list in the image database with the visual words of said query image, to obtain matching local feature pairs and matching images;
a region division module, for dividing the image in which a matching local feature lies into four quadrants with the position of a certain matching feature as the reference origin, and dividing each quadrant into a plurality of sector regions;
a coordinate determination module, for determining the quadrant and sector region in which each matching local feature lies in the image divided about the reference origin, wherein the new spatial coordinate position of a matching local feature after rotation about the reference origin is determined as follows:
the spatial coordinate position (x_i, y_i) of a matching local feature contained in the image is rotated counterclockwise about the reference origin by the k-th rotation angle \theta (k = 0, 1, \ldots, r-1) degrees, to obtain the new spatial coordinate position (x_i^k, y_i^k) of that matching local feature, as shown in the following formula:
\begin{pmatrix} x_i^k \\ y_i^k \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \cdot \begin{pmatrix} x_i \\ y_i \end{pmatrix}
where r is the number of sector regions contained in each quadrant, i = 0, 1, \ldots, N-1, and N is the total number of matching feature pairs;
a spatial position determination module, for obtaining the spatial position of each matching local feature relative to said reference origin according to its new spatial coordinate position;
a spatial encoding module, for spatially encoding the relative spatial positions between the matching local features in the query image to obtain a query-image spatial coding map, and spatially encoding the relative spatial positions between the matching local features in the matching image to obtain a matching-image spatial coding map;
a spatial consistency check module, for performing a spatial consistency check on the spatial coordinate positions of the matching local feature pairs in said query-image spatial coding map and the matching-image spatial coding map, to obtain the number of matching local feature pairs that satisfy spatial consistency;
a retrieval result returning module, for calculating the similarity between each matching image and the query image according to the number of matching local feature pairs of that matching image that satisfy spatial consistency, and returning the matching images according to said similarity;
said spatial encoding module specifically comprises:
a first coding unit, for describing the relative positional relationship of two matching local features in the horizontal direction, to obtain a horizontal-direction spatial coding map;
a second coding unit, for describing the relative positional relationship of two matching local features in the vertical direction, to obtain a vertical-direction spatial coding map;
said spatial encoding module spatially encodes the relative spatial positions between the matching local features in an image specifically as follows:
the horizontal-direction spatial coding map is obtained by the coding formula:
GX(i, j, k) = \begin{cases} 0 & \text{if } x_i^k < x_j^k \\ 1 & \text{if } x_i^k \geq x_j^k \end{cases}
the vertical-direction spatial coding map is obtained by the coding formula:
GY(i, j, k) = \begin{cases} 0 & \text{if } y_i^k < y_j^k \\ 1 & \text{if } y_i^k \geq y_j^k \end{cases}
where x_i^k and x_j^k are the horizontal coordinates of the i-th and j-th matching local features after the k-th rotation, and y_i^k and y_j^k are their vertical coordinates after the k-th rotation; the horizontal-direction spatial coding map GX and the vertical-direction spatial coding map GY determine, for each matching local feature, the quadrant and sector region in which it lies relative to the reference origin after the image is divided;
said retrieval result returning module specifically comprises:
a similarity calculation unit, for calculating the similarity between a matching image and the query image according to the number of matching feature pairs that satisfy spatial consistency and the numbers of local features contained in the matching image and the query image respectively;
a matching-image returning unit, for sorting said matching images by said similarity and returning said matching images according to said sorting;
said similarity calculation unit specifically calculates the similarity between a matching image and the query image as follows:
if, after the query, the query image q and the matching image p have a matching local feature pairs, of which b matching local feature pairs have passed the spatial consistency check, then the similarity S(p, q) between the query image q and the matching image p is given by:
S(p, q) = b - \frac{a - b + 1}{a} \cdot \frac{N(p)}{N_{\max}}
where N(p) is the total number of local features in the matching image p, and N_{\max} is the maximum number of local features an image in the image database may contain.
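A small Python sketch of the similarity score and ranking step is given below; it transcribes S(p, q) as reconstructed above, and the candidate-list format is an assumption made for illustration.

def similarity(a, b, n_p, n_max):
    """S(p, q) = b - ((a - b + 1) / a) * (N(p) / N_max).

    a     : number of matched local feature pairs between query q and image p
    b     : number of those pairs that passed the spatial consistency check
    n_p   : total number of local features in image p, i.e. N(p)
    n_max : maximum number of local features of any image in the database
    """
    return b - ((a - b + 1) / a) * (n_p / n_max)

def rank_matches(candidates, n_max):
    """Sort candidate images by decreasing similarity to the query.

    candidates : iterable of (image_id, a, b, n_p) tuples produced by the
                 query and spatial-consistency stages (illustrative format).
    """
    scored = [(img, similarity(a, b, n_p, n_max)) for img, a, b, n_p in candidates]
    return sorted(scored, key=lambda item: item[1], reverse=True)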
6. The device according to claim 5, characterized in that it further comprises:
a visual-word generation module, for extracting local features from training images, clustering said local features, and taking the center of each cluster as a visual word.
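A minimal sketch of this visual-word generation step, assuming SIFT-like descriptors and k-means clustering (the claim does not name a particular clustering algorithm); scikit-learn is used purely for illustration, and the vocabulary size is an arbitrary default.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_vocabulary(descriptors, num_words=1000, seed=0):
    """Cluster training descriptors; each cluster center becomes one visual word.

    descriptors : (M, D) array of local feature descriptors pooled from the
                  training images (e.g. 128-D SIFT vectors).
    num_words   : vocabulary size (illustrative default).
    """
    kmeans = MiniBatchKMeans(n_clusters=num_words, random_state=seed, batch_size=4096)
    kmeans.fit(descriptors)
    return kmeans.cluster_centers_           # one row per visual word

def quantize(descriptors, vocabulary):
    """Assign each descriptor to the index of its nearest visual word."""
    dists = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)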
7. The device according to claim 5, characterized in that said spatial consistency check module specifically comprises:
an XOR unit, for performing an XOR operation on said query-image spatial coding map and the matching-image spatial coding map;
a false-match deletion unit, for determining and deleting all erroneous matching feature pairs;
a matching-number determination unit, for taking the number of remaining matching local feature pairs as the number of matching local feature pairs that satisfy spatial consistency.
8. The device according to claim 7, characterized in that:
said XOR unit performs the XOR operation on the spatial coding maps of the matching local feature pairs specifically according to the following formulas:
V_x(i, j, k) = GX_q(i, j, k) \oplus GX_m(i, j, k);
V_y(i, j, k) = GY_q(i, j, k) \oplus GY_m(i, j, k);
where (GX_q, GY_q) is the query-image spatial coding map and (GX_m, GY_m) is the matching-image spatial coding map;
said false-match deletion unit obtains and deletes the matching local feature pairs that do not satisfy spatial consistency as follows:
let S_x(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_x(i, j, k), S_y(i) = \sum_{j=0}^{N-1} \bigcup_{k=0}^{r-1} V_y(i, j, k);
if there exists i such that S_x(i) > 0 or S_y(i) > 0, define i^* = \arg\max_i S_x(i) or i^* = \arg\max_i S_y(i), and delete the i^*-th local matching feature pair;
the above deletion operation is performed iteratively until the S_x and S_y values corresponding to all remaining matching feature pairs are zero;
said matching-number determination unit takes the number of remaining matching feature pairs that satisfy the requirement as the number of matching local feature pairs that satisfy spatial consistency.
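To round out the claims above, here is a minimal sketch of the inverted-list construction and lookup performed by the inverted-list building module and the query module of claim 5; the per-feature data layout is an assumption made for illustration, not the patented data structure.

from collections import defaultdict

def build_inverted_list(database_features):
    """Build a visual-word inverted list: word id -> list of (image id, feature id).

    database_features : dict mapping image id to a list of
                        (feature_id, word_id, (x, y)) tuples (illustrative format).
    """
    inverted = defaultdict(list)
    for image_id, feats in database_features.items():
        for feature_id, word_id, _pos in feats:
            inverted[word_id].append((image_id, feature_id))
    return inverted

def query_inverted_list(inverted, query_features):
    """Look up each query visual word and collect matching feature pairs per image."""
    matches = defaultdict(list)   # image id -> list of (query feature id, db feature id)
    for q_feature_id, word_id, _pos in query_features:
        for image_id, db_feature_id in inverted.get(word_id, ()):
            matches[image_id].append((q_feature_id, db_feature_id))
    return matches                # keys are the candidate matching images

The inverted list means a query only touches the database features that share at least one visual word with it, rather than scanning every image, which is what keeps retrieval time low at web scale.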
CN 201010514782 2010-10-18 2010-10-18 Image retrieval method, device and system Active CN102368237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010514782 CN102368237B (en) 2010-10-18 2010-10-18 Image retrieval method, device and system

Publications (2)

Publication Number Publication Date
CN102368237A CN102368237A (en) 2012-03-07
CN102368237B true CN102368237B (en) 2013-03-27

Family

ID=45760802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010514782 Active CN102368237B (en) 2010-10-18 2010-10-18 Image retrieval method, device and system

Country Status (1)

Country Link
CN (1) CN102368237B (en)


Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902605A (en) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 Compromise feature quantification method
CN103970769B (en) * 2013-01-29 2018-06-26 华为技术有限公司 Image search method and device
CN103488664B (en) * 2013-05-03 2016-12-28 中国传媒大学 A kind of image search method
CN103514276B (en) * 2013-09-22 2016-06-29 西安交通大学 Based on the graphic target retrieval localization method that center is estimated
CN103561276B (en) * 2013-11-07 2017-01-04 北京大学 A kind of image/video decoding method
WO2016015312A1 (en) * 2014-07-31 2016-02-04 华为技术有限公司 Trajectory data inquiry method and apparatus
CN104462199B (en) * 2014-10-31 2017-09-12 中国科学院自动化研究所 A kind of approximate multiimage searching method under network environment
CN104504406B (en) * 2014-12-04 2018-05-11 长安通信科技有限责任公司 A kind of approximate multiimage matching process rapidly and efficiently
CN105989001B (en) * 2015-01-27 2019-09-06 北京大学 Image search method and device, image search system
CN104699783A (en) * 2015-03-13 2015-06-10 西安电子科技大学 Social image searching method allowing adaptive adjustment and based on personalized vision dictionary
CN106156118B (en) * 2015-04-07 2019-07-23 阿里巴巴集团控股有限公司 Picture similarity calculating method and its system based on computer system
CN105045841B (en) * 2015-07-01 2017-06-23 北京理工大学 With reference to gravity sensor and the characteristics of image querying method of image characteristic point angle
CN105224619B (en) * 2015-09-18 2018-06-05 中国科学院计算技术研究所 A kind of spatial relationship matching process and system suitable for video/image local feature
CN106126572B (en) * 2016-06-17 2019-06-14 中国科学院自动化研究所 Image search method based on area validation
CN106203165B (en) * 2016-07-01 2017-09-22 广州同构信息科技有限公司 Information big data analysis method for supporting based on credible cloud computing
CN108184113B (en) * 2017-12-05 2021-12-03 上海大学 Image compression coding method and system based on inter-image reference
CN111353062A (en) * 2018-12-21 2020-06-30 华为技术有限公司 Image retrieval method, device and equipment
CN110458224A (en) * 2019-08-06 2019-11-15 北京字节跳动网络技术有限公司 Image processing method, device, electronic equipment and computer-readable medium
CN112948620B (en) * 2021-04-30 2024-03-08 北京爱笔科技有限公司 Image indexing method and device, electronic equipment and storage medium thereof


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8233722B2 (en) * 2008-06-27 2012-07-31 Palo Alto Research Center Incorporated Method and system for finding a document image in a document collection using localized two-dimensional visual fingerprints

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2138957A2 (en) * 2008-06-27 2009-12-30 Palo Alto Research Center Incorporated System and method for finding a picture image in an image collection using localized two-dimensional visual fingerprints
CN101692224A (en) * 2009-07-08 2010-04-07 南京师范大学 High-resolution remote sensing image search method fused with spatial relation semantics
CN101859326A (en) * 2010-06-09 2010-10-13 南京大学 Image searching method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106354735A (en) * 2015-07-22 2017-01-25 杭州海康威视数字技术股份有限公司 Image target searching method and device

Also Published As

Publication number Publication date
CN102368237A (en) 2012-03-07

Similar Documents

Publication Publication Date Title
CN102368237B (en) Image retrieval method, device and system
Lobry et al. RSVQA: Visual question answering for remote sensing data
CN101404032B (en) Video retrieval method and system based on contents
CN111627065B (en) Visual positioning method and device and storage medium
CN104169946B (en) Extensible queries for visual search
CN105653700A (en) Video search method and system
CN103026368A (en) Object recognition using incremental feature extraction
CN103578093B (en) Method for registering images, device and augmented reality system
CN102254015A (en) Image retrieval method based on visual phrases
CN102737243A (en) Method and device for acquiring descriptive information of multiple images and image matching method
CN103745498A (en) Fast positioning method based on images
CN112990228B (en) Image feature matching method, related device, equipment and storage medium
CN105338619A (en) Positioning method and positioning device
CN104199842A (en) Similar image retrieval method based on local feature neighborhood information
CN105989001B (en) Image search method and device, image search system
CN104094255A (en) Method and apparatus for searching an image, and computer-readable recording medium for executing the method
CN104615676A (en) Picture searching method based on maximum similarity matching
CN104731847A (en) Search method, search program, and search device
CN103823887A (en) Based on low-order overall situation geometry consistency check error match detection method
CN113806601B (en) Peripheral interest point retrieval method and storage medium
CN105302833A (en) Content based video retrieval mathematic model establishment method
CN104699783A (en) Social image searching method allowing adaptive adjustment and based on personalized vision dictionary
CN103164433A (en) Image search method, device and server
CN105718965A (en) Chinese character writing font identification method and Chinese character writing font identification device
CN103823889B (en) L1 norm total geometrical consistency check-based wrong matching detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant