CN103336835A - Image retrieval method based on weight color-sift characteristic dictionary - Google Patents

Image retrieval method based on weight color-sift characteristic dictionary

Info

Publication number
CN103336835A
Authority
CN
China
Prior art keywords
image
color
sift
weights
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102943852A
Other languages
Chinese (zh)
Other versions
CN103336835B (en)
Inventor
李平舟
刘燕
刘宪龙
杨国瑞
孙雪萍
赵楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201310294385.2A
Publication of CN103336835A
Application granted
Publication of CN103336835B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an image retrieval method based on a weighted color-SIFT feature dictionary. The method comprises the following steps: training images are randomly selected from the images to be retrieved, the edges of the training images are extracted, the color-SIFT features of the edge points of all training images are extracted, and a feature dictionary is constructed from these color-SIFT features; a query image is input, the color-SIFT features of the edge points of the query image and of the images to be retrieved are extracted, and weighted histogram features of the query image and of the images to be retrieved are computed from the feature dictionary; similarity matching based on the weighted histogram features is performed between the query image and the images to be retrieved in the database; whether all images to be retrieved in the database have been traversed is checked: if so, the results are ranked according to similarity and the retrieval results are displayed; if not, similarity matching is performed again. The method improves precision and recall when searching large-scale image databases, is invariant to scale, translation and rotation, and retains the locality, distinctiveness, abundance and efficiency of color-SIFT features.

Description

Image retrieval method based on a weighted color-SIFT feature dictionary
Technical field
The invention belongs to the field of image retrieval technology. Specifically, it is an image retrieval method based on a weighted color-SIFT feature dictionary: a content-based retrieval process in which image features are extracted so that images can be analyzed and retrieved, allowing a user to supply one or more query pictures and find other pictures with the same or similar content.
Background art
An image is a likeness, a vivid depiction, of an objective object; in other words, an image is a representation of an objective object that contains the relevant information about the object being described. Images are one of the most important sources of information for people: according to statistics, nearly 75% of the information people obtain comes from vision, and sayings such as "seeing once is better than hearing a hundred times" reflect the unique role of images in conveying information. Extracting image content quickly and accurately is the most critical step of image retrieval. SIFT is the feature detection method in which David G. Lowe summarized invariant-feature techniques in 2004; it is a local image feature descriptor that is invariant to scale-space changes, image scaling, rotation and even affine transformation. Content-based image retrieval means searching an image library for images that contain a specified target, and also includes retrieving video segments that contain a specified target from continuous video. Unlike traditional image retrieval approaches, the present invention proposes an image retrieval method based on a weighted color-SIFT feature dictionary, fusing the weighted color-SIFT feature dictionary technique to provide a more effective retrieval method. It can be applied to digital libraries, medical diagnosis, image classification, WEB applications, public safety, criminal investigation, and so on.
The patent application "Image retrieval method based on sketch feature extraction" filed by Tsinghua University (application number 201110196051.2, publication number CN102236717A) discloses an image retrieval method based on sketch feature extraction, relating to the field of image retrieval. The method comprises the steps of: extracting training feature vectors and obtaining a feature dictionary; extracting input feature vectors to obtain an input feature vector set, counting them against the feature dictionary to obtain an input feature frequency vector, and then obtaining interest feature words and non-interest feature words; extracting retrieval feature vectors to obtain a retrieval feature vector set and then a retrieval feature frequency vector; then obtaining the interest retrieval feature frequency vector, the non-interest retrieval feature frequency vector, the interest input feature frequency vector and the non-interest input feature frequency vector; then computing the similarity between the input sketch and each retrieval sketch and outputting the retrieval results. The method has good user interactivity and improves the efficiency and accuracy of image retrieval, but its feature dimensionality is large and it considers both interest and non-interest features, so retrieval efficiency is low and speed is slow when it is applied to a large database.
The patent application "Interactive image retrieval method combining user evaluation and labeling" filed by Communication University of China (application number 201310128036.3, publication number CN103164539A) discloses an interactive image retrieval method combining user evaluation and labeling, belonging to the field of multimedia information retrieval. The method uses an integrated retrieval approach that combines the physical features of the image with text. During retrieval the user is allowed to describe the query image with text, or to select keywords provided by the system; by giving "satisfied" or "unsatisfied" relevance evaluations of the retrieval results, the image retrieval system automatically attaches text labels to the relevant images the user marks as satisfactory, forming high-level semantic information. As users keep using it, the system builds up a rich semantic information database. Considering that different users label the same picture differently, and that the same user labels the same picture differently at different times, the method incorporates user credibility when generating the semantic information database. When retrieving, a query image that carries semantic information is retrieved with the combined feature-and-text retrieval mode, improving the accuracy of the retrieval results. Although this method combines user evaluation and labeling in an interactive retrieval mode, obtains high-level semantic information and improves the accuracy of real-time retrieval, when it is applied to a large-scale image database the similarity measures of the many kinds of images are cumbersome and the manual labeling it relies on raises the computational complexity, reducing retrieval efficiency, so that the recall and precision of the returned retrieval set are not high.
Summary of the invention
In view of the above deficiencies of the prior art, the present invention proposes an image retrieval method based on a weighted color-SIFT feature dictionary, improving retrieval efficiency, speed and recall when applied to large databases.
An image retrieval method based on a weighted color-SIFT feature dictionary comprises:
randomly selecting training images from the images to be retrieved, extracting the edges of the training images, extracting the color-SIFT features of the edge points of all training images, and constructing a feature dictionary from the color-SIFT features;
inputting the image to be queried, extracting the color-SIFT features of the edge points of the query image and of the images to be retrieved, and computing weighted histogram features of the query image and of the images to be retrieved from the feature dictionary; performing similarity matching based on the weighted histogram features between the query image and the images to be retrieved in the database;
checking whether all images to be retrieved in the database have been traversed: if so, displaying the image retrieval results according to the similarity matching results; if not, performing similarity matching again.
On the basis of the above technical scheme, randomly selecting training images from the images to be retrieved includes randomly selecting l images of each class from the image database to be retrieved to form a training image database.
On the basis of the above technical scheme, extracting the edges of the training images comprises: converting a randomly selected training image in the training image database to grayscale, processing it with a steerable (direction-adjustable) filter whose kernel is a two-dimensional Gaussian function, obtaining the energy function W_σ(x, y, θ) over 2L directions at each pixel, and extracting the salient edge pixels of the image by thresholding, where L is the number of directions, x and y are the pixel coordinates, and θ is the direction, ranging from 0 to 2π with a spacing of π/L.
On the basis of the above technical scheme, extracting the color-SIFT features of the edge points of all training images comprises: at the salient edge pixels selected for a training image, extracting color-SIFT features from the original color training image on the red (R), green (G) and blue (B) channels respectively, obtaining the color-SIFT features dec_r(e), dec_g(e) and dec_b(e) of the three channels for each salient edge pixel of the image, where e denotes the e-th edge point of the image, e = 1, 2, ..., E, and E is the total number of salient edge pixels of the image.
On the basis of the above technical scheme, extracting the color-SIFT features of the edge points of all training images comprises: performing the color-SIFT feature extraction on all salient edge pixels of every training image in the training image database, traversing all images in the training image database; the features on the three color channels are dec_{R,m}(e_m), dec_{G,m}(e_m) and dec_{B,m}(e_m) in turn, where m = 1, 2, ..., M, M is the size of the training image database, e_m denotes the e_m-th edge pixel of the m-th training image, e_m = 1, 2, ..., E_m, and E_m is the total number of salient edge pixels of the m-th training image.
On the basis of the above technical scheme, the step of constructing the feature dictionary comprises: applying K-means clustering to the color-SIFT features dec_{R,m}(e_m), dec_{G,m}(e_m) and dec_{B,m}(e_m) of all salient edge pixels of all training images on the red (R), green (G) and blue (B) channels, and taking the K cluster centers to obtain the two-dimensional feature dictionaries cod_r, cod_g and cod_b of the red, green and blue channels, each with w rows and K columns, where K is the number of K-means cluster centers, i.e. the size of the dictionary.
On the basis of the above technical scheme, the weighted histogram feature X comprises the weighted histogram features of the red (R), green (G) and blue (B) channels, and similarity matching based on the weighted histogram features between the query image and the images to be retrieved is performed by computing the 2-norm similarity distance D_i(X, X'_i).
On the basis of the above technical scheme, encoding the query image and computing its weighted histogram feature based on the color-SIFT feature dictionary comprises the following steps:
1) For the color-SIFT features sec_r(o) of all salient edge pixels of the query image on the red channel R, compute the Euclidean distances from sec_r(o) to the K cluster centers of the red-channel feature dictionary cod_r, take the cluster center with the smallest distance as the center to which the edge point belongs, and perform a first-order distance statistic over all salient edge points to obtain the frequency histogram his_r of the query image on the red-channel feature dictionary cod_r;
2) Suppose q_k salient edge points fall on the k-th cluster center. For each of the q_k salient edge points clustered to the k-th center, compute its centrifugal weight li(k) with respect to this cluster center, and take the maximum of the centrifugal weights of all salient points of this cluster center as the weight vector entry α(k) of this center. Multiply the frequency histogram his_r of the query image element-wise with the corresponding weight vector α(k) to obtain the weight vector hst_r of the query image; hst_r is a K-dimensional vector, k denotes the k-th cluster center, k = 1, 2, ..., K, and K is the number of cluster centers, i.e. the dictionary size;
3) Perform the same computation as for the red channel R on the green channel G and the blue channel B, finally obtaining the weight vector hst_g of the green channel and the weight vector hst_b of the blue channel; the weight vectors of the three channels are combined to give the weighted histogram feature X of the query image based on the color-SIFT feature dictionary.
On the basis of the above technical scheme, the computation, for each of the q_k salient edge points clustered to the k-th cluster center, of its centrifugal weight li(k) with respect to this center, and of the weight vector entry α(k) as the maximum of the centrifugal weights of all salient points of this center, uses the following formulas:
$$\mathrm{li}(u,k)=\frac{1}{\displaystyle\sum_{v=1}^{K}\frac{\left\|sec_r(u)-cod(k)\right\|_2^2}{\left\|sec_r(u)-cod(v)\right\|_2^2}},\qquad \alpha(k)=\max_{u=1,\dots,q_k}\mathrm{li}(u,k),$$
where u denotes the u-th salient edge point falling on the k-th cluster center, u = 1, 2, ..., q_k; k denotes the k-th cluster center, k = 1, 2, ..., K; K is the number of cluster centers, i.e. the dictionary size; and q_k is the total number of salient edge points that fall on the k-th cluster center.
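As a concrete illustration of this weighting (a worked example added here for clarity; it uses the reconstructed reading of the published formula, which appears only as an image in the original, and the numbers are invented purely for illustration): suppose K = 3 and a salient edge point u has squared distances 1, 4 and 4 to the three cluster centers. Then

$$\mathrm{li}(u,1)=\frac{1}{\tfrac{1}{1}+\tfrac{1}{4}+\tfrac{1}{4}}=\tfrac{2}{3},\qquad \mathrm{li}(u,2)=\mathrm{li}(u,3)=\frac{1}{\tfrac{4}{1}+\tfrac{4}{4}+\tfrac{4}{4}}=\tfrac{1}{6},$$

so the weights of one point sum to 1 over the K centers and the nearest center receives the largest weight; α(k) then keeps, for each center, the largest such weight among the points assigned to it, and the k-th histogram bin is scaled by α(k).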
Compared with the prior art, the present invention uses a steerable filter for edge extraction, which effectively determines the principal edge direction at each pixel; edge pixel information can then be extracted quickly and accurately by thresholding, and the extracted edge pixel information allows the subsequent feature extraction to be carried out quickly and accurately, improving retrieval speed and accuracy when applied to real-time human-computer interaction and large-scale image databases. The invention adopts a retrieval strategy that combines color-SIFT features extracted at edge-direction pixels with dictionary encoding: the color-SIFT features of the salient edge pixels are extracted and the weighted histogram feature is computed by encoding against the dictionary constructed from weighted color-SIFT features, which represents the image more typically and expresses the distinctive characteristics of the image more effectively, improving precision and recall when applied to large-scale image database retrieval. The invention adopts an image retrieval method based on a color-SIFT feature dictionary over the three RGB channels; it is a multi-scale image retrieval algorithm that converts an image into a set of features and then obtains the retrieval result by computing and comparing the Euclidean distance between the feature vectors of two images, thereby realizing the image retrieval function. Experimental results show that the algorithm is invariant to scale, translation and rotation and performs well in practice. At the same time it is a local feature descriptor over scale space, image scaling and image rotation, with locality, distinctiveness, abundance and efficiency. The SIFT feature extraction algorithm can handle the matching problem between two images under translation, rotation and affine transformation, improving the precision and recall of image retrieval.
Description of drawings
Fig. 1 is the flowchart of the present invention.
Specific embodiments
The invention is described further below with reference to the accompanying drawing.
Embodiment 1
With reference to Fig. 1, the following specific embodiment of the image retrieval method based on a weighted color-SIFT feature dictionary of the present invention is provided:
Step 1: Randomly select l images of each class from the image database to be retrieved to form the training image database. This example uses the Corel-1000 image database, in which images of the same type need to be retrieved; the image library contains 10 classes of images, each class containing 100 images. In this example l = 10, so 100 training images are chosen in total from the 10 classes.
Step 2: Convert a randomly selected training image in the training image database to grayscale, process it with a steerable (direction-adjustable) filter whose kernel is a two-dimensional Gaussian function, choose a suitable filter window size, obtain the energy function W_σ(x, y, θ) over 2L directions at each pixel, and extract the salient edge pixels of the image by thresholding. L is the number of directions, L = 6 in this example; x and y are the pixel coordinates; σ is the filter scale parameter, σ = 1 in this example; θ is the direction, ranging from 0 to 2π with a spacing of π/L, and in this example θ takes the values 0, π/6, ..., 11π/6, 2π.
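The patent does not provide source code; the following is a minimal Python sketch of this step, assuming the steerable filter is realized from first-order Gaussian-derivative responses steered to each direction θ, and that the threshold is a fixed fraction of the maximum directional energy. The function name, the exact energy definition and the threshold rule are assumptions made for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def salient_edge_pixels(gray, sigma=1.0, L=6, thresh_ratio=0.2):
    """Sketch of Step 2: oriented Gaussian-derivative energy over 2L directions,
    followed by a simple threshold to keep the salient edge pixels."""
    gray = gray.astype(np.float64)
    # First-order Gaussian derivatives along y (rows) and x (columns).
    gy = gaussian_filter(gray, sigma=sigma, order=(1, 0))
    gx = gaussian_filter(gray, sigma=sigma, order=(0, 1))
    thetas = np.arange(2 * L) * np.pi / L          # 0, pi/L, ..., (2L-1)*pi/L
    # Energy of the response steered to each direction theta.
    energy = np.stack([(np.cos(t) * gx + np.sin(t) * gy) ** 2 for t in thetas], axis=-1)
    w_max = energy.max(axis=-1)                    # strongest directional energy per pixel
    mask = w_max > thresh_ratio * w_max.max()      # assumed threshold rule
    ys, xs = np.nonzero(mask)
    return np.column_stack([xs, ys])               # (x, y) coordinates of salient edge pixels
```

The steered response here is synthesized from the two derivative basis filters, which is the standard steerable-filter construction for a first-order Gaussian kernel.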
Step 3: At the salient edge pixels selected for the training image, extract color-SIFT features from the original color training image on the red (R), green (G) and blue (B) channels respectively, obtaining the color-SIFT features dec_r(e), dec_g(e) and dec_b(e) of the three channels for each salient edge pixel of the image, where e denotes the e-th edge point of the image, e = 1, 2, ..., E, and E is the total number of salient edge pixels of the training image.
Step 4: Perform Steps 2-3 for every training image in the training image database, extracting the color-SIFT features of all salient edge pixels of each image and traversing all images in the training image database; the features on the three color channels are dec_{R,m}(e_m), dec_{G,m}(e_m) and dec_{B,m}(e_m) in turn, where m = 1, 2, ..., M, M is the size of the training image database, e_m denotes the e_m-th edge pixel of the m-th training image, e_m = 1, 2, ..., E_m, and E_m is the total number of salient edge pixels of the m-th training image.
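As a rough illustration of Steps 3-4, the sketch below computes 128-dimensional SIFT descriptors at the salient edge pixel locations separately on the R, G and B channels using OpenCV. The keypoint size of 16 pixels and the use of cv2.SIFT_create (available in opencv-python 4.4 and later) are assumptions, since the patent does not specify the SIFT implementation.

```python
import cv2
import numpy as np

def color_sift_at_edges(bgr_image, edge_xy, patch_size=16.0):
    """Sketch of Steps 3-4: per-channel SIFT descriptors at the salient edge pixels.
    Returns a dict with one (num_points x 128) array per color channel."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in edge_xy]
    blue, green, red = cv2.split(bgr_image)         # OpenCV stores images as B, G, R
    features = {}
    for name, channel in (("r", red), ("g", green), ("b", blue)):
        # compute() may drop keypoints for which no descriptor can be formed (e.g. at borders).
        _, desc = sift.compute(channel, keypoints)
        features[name] = np.zeros((0, 128)) if desc is None else desc
    return features
```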
Step 5: Apply K-means clustering to the color-SIFT features dec_{R,m}(e_m) of all salient edge pixels of all training images on the red (R) channel, and take the K cluster centers to obtain the two-dimensional feature dictionary cod_r of the red channel, with w rows and K columns, where K is the number of K-means cluster centers, i.e. the size of the dictionary. In the same way, on the green (G) and blue (B) channels, perform the same computation as for the red channel on dec_{G,m}(e_m) and dec_{B,m}(e_m) respectively, obtaining the green-channel feature dictionary cod_g and the blue-channel feature dictionary cod_b, each with w rows and K columns. Here w is the SIFT feature dimensionality, 128; in this example K = 500, i.e. the dictionary size is 500.
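A minimal sketch of the dictionary construction in Step 5 is given below, assuming scikit-learn's KMeans as the clustering implementation; the patent only specifies K-means with K = 500 on 128-dimensional descriptors, so the initialization and random seed are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_channel_dictionary(descriptors, k=500, seed=0):
    """Sketch of Step 5: cluster all training descriptors of one color channel
    and return the dictionary as a w x K matrix (w = 128 rows, K columns)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)
    return km.cluster_centers_.T                   # (128, K): one cluster center per column

# Example usage (names are illustrative): stack the red-channel descriptors of all
# training images, then cluster.
# all_red = np.vstack([feats_m["r"] for feats_m in training_features])
# cod_r = build_channel_dictionary(all_red, k=500)
```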
Step 6: Input the query image, in this example one of the bus-class images. Perform Steps 2-3 on the query image in the same way to extract the color-SIFT features sec_r(e), sec_g(e) and sec_b(e) of its salient edge points on the red (R), green (G) and blue (B) channels, and encode them with the feature dictionaries cod_r, cod_g and cod_b obtained in Step 5 to compute the weighted histogram feature X of the query image based on the color-SIFT feature dictionary; X comprises the weighted histogram features of the red, green and blue channels (a code sketch of this encoding follows step 6c):
6a) For the color-SIFT features sec_r(o) of all salient edge pixels of the query image on the red channel R, compute the Euclidean distances from sec_r(o) to the K cluster centers of the red-channel feature dictionary cod_r, take the cluster center with the smallest distance as the center to which the edge point belongs, and perform a first-order distance statistic over all salient edge points to obtain the frequency histogram his_r of the query image on the red-channel feature dictionary cod_r;
6b) Suppose q_k salient edge points fall on the k-th cluster center. For each of the q_k salient edge points clustered to the k-th center, compute its centrifugal weight li(k) with respect to this cluster center, and take the maximum of the centrifugal weights of all salient points of this cluster center as the weight vector entry α(k); the following formulas are used:
$$\mathrm{li}(u,k)=\frac{1}{\displaystyle\sum_{v=1}^{K}\frac{\left\|sec_r(u)-cod(k)\right\|_2^2}{\left\|sec_r(u)-cod(v)\right\|_2^2}},\qquad \alpha(k)=\max_{u=1,\dots,q_k}\mathrm{li}(u,k),$$
where u denotes the u-th salient edge point falling on the k-th cluster center, u = 1, 2, ..., q_k; k denotes the k-th cluster center, k = 1, 2, ..., K; K is the number of cluster centers, i.e. the dictionary size; and q_k is the total number of salient edge points falling on the k-th cluster center. Multiply the frequency histogram his_r of the query image element-wise with the corresponding weight vector α(k) to obtain the weight vector hst_r of the query image; hst_r is a K-dimensional vector, k = 1, 2, ..., K, and K is the number of cluster centers, i.e. the dictionary size;
6c) Perform the same computation as for the red channel R on the green channel G and the blue channel B, finally obtaining the weight vector hst_g of the green channel and the weight vector hst_b of the blue channel; the weight vectors of the three channels are combined to give the weighted histogram feature X of the query image based on the color-SIFT feature dictionary.
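The sketch below illustrates steps 6a-6c for a single channel, using the reconstructed reading of the centrifugal-weight formula given above (the published formula is an image, so this reading is an assumption); the three per-channel vectors would then be concatenated to form X.

```python
import numpy as np

def weighted_histogram(descriptors, dictionary):
    """Sketch of steps 6a-6c for one channel.
    descriptors: (N, 128) edge-point color-SIFT features of one channel.
    dictionary:  (128, K) per-channel feature dictionary (one center per column).
    Returns the K-dimensional weighted histogram hst."""
    centers = dictionary.T                              # (K, 128)
    K = centers.shape[0]
    # Squared Euclidean distances of every descriptor to every cluster center.
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (N, K)
    nearest = d2.argmin(axis=1)                         # hard assignment (step 6a)
    his = np.bincount(nearest, minlength=K).astype(np.float64)                # frequency histogram
    # Centrifugal weights li(u, k) = 1 / sum_v( d2(u, k) / d2(u, v) )  (step 6b).
    eps = 1e-12
    li = 1.0 / (d2 * (1.0 / (d2 + eps)).sum(axis=1, keepdims=True) + eps)     # (N, K)
    alpha = np.zeros(K)
    for k in range(K):
        assigned = li[nearest == k, k]
        if assigned.size:                               # alpha(k) = max li over points in bin k
            alpha[k] = assigned.max()
    return his * alpha                                  # element-wise weighting -> hst

# Illustrative usage (names assumed): hst_r = weighted_histogram(query_feats["r"], cod_r),
# similarly for "g" and "b", then X = np.concatenate([hst_r, hst_g, hst_b]).
```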
Step 7: For each image to be retrieved in the image database to be retrieved, of total size S, perform Steps 2-4 to extract the color-SIFT features of all salient edge pixels of the image, and then perform Step 6 to obtain the weighted histogram feature X'_i of each image to be retrieved based on the weighted color-SIFT feature dictionary, traversing all images in the image database, i = 1, 2, ..., S, where S is the total number of images to be retrieved. The database used in this example is Corel-1000, containing 10 classes of 100 images each, so S = 1000.
Step 8: Perform similarity matching based on the weighted histogram features between the query image and the images to be retrieved by computing the 2-norm similarity distance D_i(X, X'_i).
Step 9: Sort the images to be retrieved in ascending order of their D_i(X, X'_i) values and display the first n images as the retrieval result, i = 1, 2, ..., S, where S is the total number of images to be retrieved and n is the number of returned images, a positive integer chosen by the user. In this example n = 20. The present invention successfully retrieves from the Corel-1000 image data 20 bus-class images relevant to the bus query image; for this query the retrieval precision is 100%.
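Steps 8-9 amount to a nearest-neighbor ranking under the Euclidean (2-norm) distance; the following sketch shows that ranking, assuming the per-image weighted histogram features have already been computed as above.

```python
import numpy as np

def retrieve_top_n(query_feature, database_features, n=20):
    """Sketch of Steps 8-9: 2-norm distances D_i(X, X'_i) and ascending ranking.
    query_feature:     (3K,) weighted histogram of the query image.
    database_features: (S, 3K) weighted histograms of the images to be retrieved."""
    distances = np.linalg.norm(database_features - query_feature[None, :], axis=1)
    order = np.argsort(distances)                   # smallest distance = most similar
    return order[:n], distances[order[:n]]          # indices and distances of the top-n images
```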
Describing the content of an image quickly and accurately has always been an emphasis and a difficulty of research in image retrieval technology. Traditional image feature extraction methods are basically built around the color, texture, shape and spatial relationships of the image. In the present invention, a training image randomly selected from the training image database is first converted to grayscale and processed with the steerable filter; at the salient edge pixels of all training images, color-SIFT features are extracted from the original color training images on the red (R), green (G) and blue (B) channels respectively, giving the color-SIFT features of the three channels for each salient edge pixel of the training images. The query image and the images to be retrieved are encoded against the feature dictionary to compute the weighted histograms based on the color-SIFT feature dictionary, similarity matching based on the weighted histogram features is performed between the query image and the images to be retrieved, and the retrieval results are obtained, improving retrieval efficiency, speed and recall.
Embodiment 2: image retrieval method based on a weighted color-SIFT feature dictionary, as in Embodiment 1.
This example also uses the Corel-1000 image database, which contains 10 classes of images with 100 images per class. Each image in the database is retrieved with the same procedure as in Embodiment 1, and, with the number of returned images n = 20, the average retrieval precision of each of the 10 classes and the average retrieval precision over all 1000 images of the 10 classes are computed. The retrieval results are tabulated and compared with several known retrieval methods of the state of the art, namely the methods proposed by Jhanwar and by Hung, the method based on color-texture-shape, the method based on SIFT-BOF and the method based on SIFT-SPM; the comparison is shown in Table 1. As can be seen from Table 1, with n = 20 returned images the average retrieval precision of the present invention over all 1000 images of the 10 classes is clearly higher than that of each of the above comparison methods, and the average retrieval precision over the 100 images of each class is higher than that of most of the comparison methods. Therefore, when applied to retrieving images of different classes, the present invention obtains a higher average retrieval precision, is suitable for image retrieval over large-scale image data with many image classes, and achieves a stable, better average retrieval precision for each class.
Table 1
(Table 1 is provided as an image in the original publication; its numerical contents are not reproduced here.)
The above are two examples of the present invention and do not constitute any limitation of the invention. Simulation experiments show that the present invention not only improves speed when applied to large-scale image databases, but also achieves higher precision and recall of the retrieval results.
In summary, the image retrieval method based on a weighted color-SIFT feature dictionary of the present invention is mainly devoted to improving the speed, precision and recall of the prior art when applied to large-scale image databases. Its steps are: randomly selecting training images from the images to be retrieved and converting them to grayscale; processing the training images with the steerable filter; extracting the image edges from the steerable-filter results; extracting the color-SIFT features of all training images; constructing the feature dictionary by K-means clustering of the color-SIFT features of the edge pixels of all training images; inputting the image to be queried and performing the same steps as for the training images on the query image and the images to be retrieved to extract their color-SIFT features; computing the weighted histogram features of the query image and the images to be retrieved based on the color-SIFT feature dictionary; performing similarity matching based on the weighted histogram features between the query image and the images to be retrieved in the database; and displaying the image retrieval results according to the similarity matching results. The present invention has the advantages of fast retrieval speed and high precision and recall, especially for large-scale image database retrieval, and can be applied to image retrieval in real-time human-computer interaction and large-scale image databases.

Claims (9)

1. An image retrieval method based on a weighted color-SIFT feature dictionary, characterized in that it comprises:
randomly selecting training images from the images to be retrieved, extracting the edges of the training images, extracting the color-SIFT features of the edge points of all training images, and constructing a feature dictionary from the color-SIFT features;
inputting the image to be queried, extracting the color-SIFT features of the edge points of the query image and of the images to be retrieved, and computing weighted histogram features of the query image and of the images to be retrieved from the feature dictionary; performing similarity matching based on the weighted histogram features between the query image and the images to be retrieved in the database;
checking whether all images to be retrieved in the database have been traversed: if so, displaying the image retrieval results according to the similarity matching results; if not, performing similarity matching again.
2. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 1, characterized in that randomly selecting training images from the images to be retrieved includes randomly selecting l images of each class from the image database to be retrieved to form a training image database.
3. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 2, characterized in that extracting the edges of the training images comprises: converting a randomly selected training image in the training image database to grayscale, processing it with a steerable (direction-adjustable) filter whose kernel is a two-dimensional Gaussian function, obtaining the energy function W_σ(x, y, θ) over 2L directions at each pixel, and extracting the salient edge pixels of the image by thresholding, where L is the number of directions, x and y are the pixel coordinates, and θ is the direction, ranging from 0 to 2π with a spacing of π/L.
4. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 3, characterized in that extracting the color-SIFT features of the edge points of all training images comprises: at the salient edge pixels selected for a training image, extracting color-SIFT features from the original color training image on the red (R), green (G) and blue (B) channels respectively, obtaining the color-SIFT features dec_r(e), dec_g(e) and dec_b(e) of the three channels for each salient edge pixel of the image, where e denotes the e-th edge point of the image, e = 1, 2, ..., E, and E is the total number of salient edge pixels of the image.
5. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 4, characterized in that extracting the color-SIFT features of the edge points of all training images comprises: performing the color-SIFT feature extraction on all salient edge pixels of every training image in the training image database, traversing all images in the training image database; the features on the three color channels are dec_{R,m}(e_m), dec_{G,m}(e_m) and dec_{B,m}(e_m) in turn, where m = 1, 2, ..., M, M is the size of the training image database, e_m denotes the e_m-th edge pixel of the m-th training image, e_m = 1, 2, ..., E_m, and E_m is the total number of salient edge pixels of the m-th training image.
6. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 5, characterized in that the step of constructing the feature dictionary comprises: applying K-means clustering to the color-SIFT features dec_{R,m}(e_m), dec_{G,m}(e_m) and dec_{B,m}(e_m) of all salient edge pixels of all training images on the red (R), green (G) and blue (B) channels, and taking the K cluster centers to obtain the two-dimensional feature dictionaries cod_r, cod_g and cod_b of the red, green and blue channels, each with w rows and K columns, where K is the number of K-means cluster centers, i.e. the size of the dictionary.
7. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 1, characterized in that the weighted histogram feature X comprises the weighted histogram features of the red (R), green (G) and blue (B) channels, and similarity matching based on the weighted histogram features between the query image and the images to be retrieved is performed by computing the 2-norm similarity distance D_i(X, X'_i).
8. The image retrieval method based on a weighted color-SIFT feature dictionary according to claim 1, characterized in that encoding the query image and computing its weighted histogram feature based on the color-SIFT feature dictionary comprises the following steps:
1) For the color-SIFT features sec_r(o) of all salient edge pixels of the query image on the red channel R, compute the Euclidean distances from sec_r(o) to the K cluster centers of the red-channel feature dictionary cod_r, take the cluster center with the smallest distance as the center to which the edge point belongs, and perform a first-order distance statistic over all salient edge points to obtain the frequency histogram his_r of the query image on the red-channel feature dictionary cod_r;
2) Suppose q_k salient edge points fall on the k-th cluster center. For each of the q_k salient edge points clustered to the k-th center, compute its centrifugal weight li(k) with respect to this cluster center, and take the maximum of the centrifugal weights of all salient points of this cluster center as the weight vector entry α(k); multiply the frequency histogram his_r of the query image element-wise with the corresponding weight vector α(k) to obtain the weight vector hst_r of the query image; hst_r is a K-dimensional vector, k denotes the k-th cluster center, k = 1, 2, ..., K, and K is the number of cluster centers, i.e. the dictionary size;
3) Perform the same computation as for the red channel R on the green channel G and the blue channel B, finally obtaining the weight vector hst_g of the green channel and the weight vector hst_b of the blue channel; the weight vectors of the three channels are combined to give the weighted histogram feature X of the query image based on the color-SIFT feature dictionary.
9. The method according to claim 8 for encoding the query image and computing its weighted histogram feature based on the color-SIFT feature dictionary, characterized in that in step 2), the computation, for each of the q_k salient edge points clustered to the k-th cluster center, of its centrifugal weight li(k) with respect to this center, and of the weight vector entry α(k) as the maximum of the centrifugal weights of all salient points of this center, uses the following formulas:
$$\mathrm{li}(u,k)=\frac{1}{\displaystyle\sum_{v=1}^{K}\frac{\left\|sec_r(u)-cod(k)\right\|_2^2}{\left\|sec_r(u)-cod(v)\right\|_2^2}},\qquad \alpha(k)=\max_{u=1,\dots,q_k}\mathrm{li}(u,k),$$
where u denotes the u-th salient edge point falling on the k-th cluster center, u = 1, 2, ..., q_k; k denotes the k-th cluster center, k = 1, 2, ..., K; K is the number of cluster centers, i.e. the dictionary size; and q_k is the total number of salient edge points that fall on the k-th cluster center.
CN201310294385.2A 2013-07-12 2013-07-12 Image retrieval method based on weight color-sift characteristic dictionary Active CN103336835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310294385.2A CN103336835B (en) 2013-07-12 2013-07-12 Image retrieval method based on weight color-sift characteristic dictionary

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310294385.2A CN103336835B (en) 2013-07-12 2013-07-12 Image retrieval method based on weight color-sift characteristic dictionary

Publications (2)

Publication Number Publication Date
CN103336835A (en) 2013-10-02
CN103336835B (en) 2017-02-08

Family

ID=49245000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310294385.2A Active CN103336835B (en) 2013-07-12 2013-07-12 Image retrieval method based on weight color-sift characteristic dictionary

Country Status (1)

Country Link
CN (1) CN103336835B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699436A (en) * 2013-12-30 2014-04-02 西北工业大学 Image coding method based on local linear constraint and global structural information
CN105138672A (en) * 2015-09-07 2015-12-09 北京工业大学 Multi-feature fusion image retrieval method
CN106503143A (en) * 2016-10-21 2017-03-15 广东工业大学 A kind of image search method and device
CN106570183A (en) * 2016-11-14 2017-04-19 宜宾学院 Color picture retrieval and classification method
CN107103325A (en) * 2017-04-20 2017-08-29 湘潭大学 A kind of histopathology image classification method
CN109711441A (en) * 2018-12-13 2019-05-03 泰康保险集团股份有限公司 Image classification method, device, storage medium and electronic equipment
CN111104936A (en) * 2019-11-19 2020-05-05 泰康保险集团股份有限公司 Text image recognition method, device, equipment and storage medium
CN111428122A (en) * 2020-03-20 2020-07-17 南京中孚信息技术有限公司 Picture retrieval method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101546307A (en) * 2008-09-17 2009-09-30 厦门市美亚柏科资讯科技有限公司 Image retrieval system and image retrieval method based on edge gravity center characteristics
CN102236717A (en) * 2011-07-13 2011-11-09 清华大学 Image retrieval method based on sketch feature extraction
JP2011257979A (en) * 2010-06-09 2011-12-22 Olympus Imaging Corp Image retrieval device, image retrieval method, and camera
CN102629328A (en) * 2012-03-12 2012-08-08 北京工业大学 Probabilistic latent semantic model object image recognition method with fusion of significant characteristic of color
CN102693421A (en) * 2012-05-31 2012-09-26 东南大学 Bull eye iris image identifying method based on SIFT feature packs

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101546307A (en) * 2008-09-17 2009-09-30 厦门市美亚柏科资讯科技有限公司 Image retrieval system and image retrieval method based on edge gravity center characteristics
JP2011257979A (en) * 2010-06-09 2011-12-22 Olympus Imaging Corp Image retrieval device, image retrieval method, and camera
CN102236717A (en) * 2011-07-13 2011-11-09 清华大学 Image retrieval method based on sketch feature extraction
CN102629328A (en) * 2012-03-12 2012-08-08 北京工业大学 Probabilistic latent semantic model object image recognition method with fusion of significant characteristic of color
CN102693421A (en) * 2012-05-31 2012-09-26 东南大学 Bull eye iris image identifying method based on SIFT feature packs

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103699436B (en) * 2013-12-30 2017-02-22 西北工业大学 Image coding method based on local linear constraint and global structural information
CN103699436A (en) * 2013-12-30 2014-04-02 西北工业大学 Image coding method based on local linear constraint and global structural information
CN105138672B (en) * 2015-09-07 2018-08-21 北京工业大学 A kind of image search method of multiple features fusion
CN105138672A (en) * 2015-09-07 2015-12-09 北京工业大学 Multi-feature fusion image retrieval method
CN106503143A (en) * 2016-10-21 2017-03-15 广东工业大学 A kind of image search method and device
CN106503143B (en) * 2016-10-21 2020-02-07 广东工业大学 Image retrieval method and device
CN106570183A (en) * 2016-11-14 2017-04-19 宜宾学院 Color picture retrieval and classification method
CN106570183B (en) * 2016-11-14 2019-11-15 宜宾学院 A kind of Color Image Retrieval and classification method
CN107103325A (en) * 2017-04-20 2017-08-29 湘潭大学 A kind of histopathology image classification method
CN109711441A (en) * 2018-12-13 2019-05-03 泰康保险集团股份有限公司 Image classification method, device, storage medium and electronic equipment
CN109711441B (en) * 2018-12-13 2021-02-12 泰康保险集团股份有限公司 Image classification method and device, storage medium and electronic equipment
CN111104936A (en) * 2019-11-19 2020-05-05 泰康保险集团股份有限公司 Text image recognition method, device, equipment and storage medium
CN111428122A (en) * 2020-03-20 2020-07-17 南京中孚信息技术有限公司 Picture retrieval method and device and electronic equipment
CN111428122B (en) * 2020-03-20 2023-09-01 南京中孚信息技术有限公司 Picture retrieval method and device and electronic equipment

Also Published As

Publication number Publication date
CN103336835B (en) 2017-02-08

Similar Documents

Publication Publication Date Title
CN103336835A (en) Image retrieval method based on weight color-sift characteristic dictionary
Myers et al. Affordance detection of tool parts from geometric features
Yuan et al. Mid-level features and spatio-temporal context for activity recognition
CN104199931B (en) A kind of consistent semantic extracting method of trademark image and trade-mark searching method
Loghmani et al. Recurrent convolutional fusion for RGB-D object recognition
Faraki et al. Log‐Euclidean bag of words for human action recognition
US9626585B2 (en) Composition modeling for photo retrieval through geometric image segmentation
CN111914107B (en) Instance retrieval method based on multi-channel attention area expansion
CN106126581A (en) Cartographical sketching image search method based on degree of depth study
CN105574510A (en) Gait identification method and device
CN103854016B (en) Jointly there is human body behavior classifying identification method and the system of feature based on directivity
CN103714181B (en) A kind of hierarchical particular persons search method
CN103258037A (en) Trademark identification searching method for multiple combined contents
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
CN106021603A (en) Garment image retrieval method based on segmentation and feature matching
CN106845513B (en) Manpower detector and method based on condition random forest
CN105868706A (en) Method for identifying 3D model based on sparse coding
Chen et al. TriViews: A general framework to use 3D depth data effectively for action recognition
CN108764019A (en) A kind of Video Events detection method based on multi-source deep learning
Wan et al. CSMMI: Class-specific maximization of mutual information for action and gesture recognition
Ali et al. Contextual object category recognition for RGB-D scene labeling
CN106845375A (en) A kind of action identification method based on hierarchical feature learning
Daniilidis et al. Computer Vision--ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part V
Ikizler-Cinbis et al. Web-based classifiers for human action recognition
El‐Henawy et al. Action recognition using fast HOG3D of integral videos and Smith–Waterman partial matching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant