CN105138672B - An image retrieval method based on multi-feature fusion - Google Patents

An image retrieval method based on multi-feature fusion

Info

Publication number
CN105138672B
CN105138672B CN201510564819.5A
Authority
CN
China
Prior art keywords
image
color
score
described image
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510564819.5A
Other languages
Chinese (zh)
Other versions
CN105138672A (en)
Inventor
段立娟
董帅
赵则明
崔嵩
马伟
杨震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Youtong Industrial Co ltd
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510564819.5A priority Critical patent/CN105138672B/en
Publication of CN105138672A publication Critical patent/CN105138672A/en
Application granted granted Critical
Publication of CN105138672B publication Critical patent/CN105138672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Abstract

The invention discloses an image retrieval method based on multi-feature fusion, comprising: Step 1, inputting an image I to be retrieved; Step 2, constructing the color feature vectors and SIFT feature vectors of image I; Step 3, training on the images in the query image library to obtain a color-feature dictionary and a SIFT-feature dictionary, and representing the images in the library with the resulting visual words; Step 4, representing image I with the visual words, retrieving a candidate image set Q from the query library according to the visual words, and computing the similarity value score(Q, I); Step 5, selecting the visually salient local regions Si of image I and repeating steps 3 and 4 to obtain a candidate image set K, then computing the similarity value score_sal(K, I); Step 6, taking the overlap of the two candidate sets as D, and fusing score_sal(D, I) with score(D, I) to compute the final similarity score*(D, I); Step 7, returning the image with the highest final similarity as the retrieval result of image I. The method has the advantages of reducing image noise and improving retrieval accuracy.

Description

An image retrieval method based on multi-feature fusion
Technical field
The present invention relates to image retrieval methods, and more particularly to an image retrieval method based on multi-feature fusion.
Background technology
Today's society has entered an era of big data dominated by multimedia, among which digital image data is the most prominent. Compared with other multimedia data, image content is richer and its expression more intuitive, making images the most important form of information sharing in daily life. Faced with ever-growing image data, how to effectively mine the large amount of information contained in it, and how to quickly and accurately find the images a user really needs in a large-scale image database, have gradually become main research topics in related fields such as computer vision and multimedia information retrieval.
Image feature extraction and image similarity measurement are two key steps in image retrieval. Image features are the basis of image retrieval: mining effective information from image data is precisely the process of feature extraction, which converts image information stored for human browsing into a form that a computer can also "understand". After features are extracted, the computer defines the similarity between images by computing their distance in feature space, so different image features directly affect the performance of an image retrieval system. How to express image information accurately, and how to extract image features that better match human semantics, is a key topic in image retrieval research.
Stable local image features and the Bag-of-Features image representation model have laid a solid foundation for image retrieval research in the prior art and driven the rapid development of image retrieval technology. However, during feature extraction and Bag-of-Features image representation, a large amount of information is lost, which harms the accuracy of retrieval results.
Summary of the invention
An object of the present invention is to solve at least the above problems and to provide at least the advantages described later.
A further object of the present invention is to provide a method that fuses multiple image features, performs retrieval on the fused features, and returns the images in the library that share the fused features as the retrieval result.
To achieve these objects and other advantages, the present invention provides an image retrieval method based on multi-feature fusion, comprising:
Step 1: input an image I to be retrieved;
Step 2: divide image I into multiple local regions, and construct, for each local region, a color feature vector and a scale-invariant feature transform (SIFT) feature vector representing its color characteristics;
Step 3: perform step 2 on each image in the query image library; cluster the results into a color-feature dictionary and a SIFT-feature dictionary; combine the color words of the color-feature dictionary with the SIFT words of the SIFT-feature dictionary into visual words; and represent the images in the library with these visual words;
Step 4: represent the color feature vectors and corresponding SIFT feature vectors of image I with the visual words; retrieve from the query library the images sharing visual words with image I as the candidate image set Q of image I; and compute, for each candidate in Q, the color-and-SIFT-based similarity to image I, denoted score(Q, I);
Step 5: compute the visual saliency mean T of image I and the saliency mean Ti of each local region; extract the regions whose Ti exceeds T as the visually salient local regions Si; repeat steps 3 and 4 on the regions Si to obtain a candidate image set K; and compute the saliency-based similarity of each image in K to image I, denoted score_sal(K, I);
Step 6: take the overlap of candidate sets K and Q as the image set D; fuse the saliency-based similarity score_sal(D, I) with the color-and-SIFT similarity score(D, I) to compute the final similarity score*(D, I) of each image in D to image I;
Step 7: return the image in D with the highest final similarity as the retrieval result of image I.
Preferably, in the image retrieval method based on multi-feature fusion, the specific steps are:
Step 1: input an image I to be retrieved;
Step 2: divide image I into multiple local regions such that pixels within each region share the same color and adjacent regions have different colors;
Step 3: construct the color feature vector and the scale-invariant feature transform (SIFT) feature vector representing the color characteristics of each local region;
Step 4: perform steps 2 and 3 on each image in the query library to obtain its color feature vectors and SIFT feature vectors; cluster them into a color-feature dictionary and a SIFT-feature dictionary; combine the color words and the SIFT words into visual words; and, using the correspondence between color words, SIFT words, and the feature vectors, represent the images in the library with visual words;
Step 5: represent the color feature vectors of image I and the corresponding SIFT feature vectors with visual words; retrieve from the query library the images sharing visual words with image I as the candidate image set Q of image I;
Step 6: compute the color-and-SIFT-based similarity of each candidate in Q to image I, denoted score(Q, I);
Step 7: compute the visual saliency mean T of image I and the saliency mean Ti of each local region; extract the regions whose Ti exceeds T as the visually salient local regions Si;
Step 8: repeat steps 3 to 5 on the regions Si obtained in step 7 to obtain the candidate image set K; compute the saliency-based similarity of each image in K to image I, denoted score_sal(K, I);
Step 9: take the overlap of candidate sets K and Q as the image set D; fuse the saliency-based similarity score_sal(D, I) with the color-and-SIFT similarity score(D, I) to compute the final similarity score*(D, I) of each image in D to image I;
Step 10: return the image in D with the highest final similarity as the retrieval result of image I.
Preferably, in the image retrieval method based on multi-feature fusion, the final similarity score*(D, I) in step 9 is computed as:
score*(D, I) = α·score(D, I) + β·score_sal(D, I)
where α + β = 1, and α, β are the weighting coefficients of the final similarity score.
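The fusion and ranking of steps 9 and 10 can be sketched in Python as follows. This is only an illustration: dictionary keys stand in for library images, the candidate sets Q and K are the dictionaries' key sets, and α = 0.6 (hence β = 0.4) is an assumed weighting, not a value taken from the patent.

```python
def fuse_and_rank(score_color_sift, score_sal, alpha=0.6):
    """Fuse the two candidate sets: D is the overlap of the color+SIFT
    candidates Q and the saliency candidates K; each image in D gets
    alpha*score(D, I) + beta*score_sal(D, I) with beta = 1 - alpha, and
    the image with the highest fused score is the retrieval result."""
    beta = 1.0 - alpha                                  # enforces alpha + beta = 1
    overlap = set(score_color_sift) & set(score_sal)    # image set D
    fused = {img: alpha * score_color_sift[img] + beta * score_sal[img]
             for img in overlap}
    best = max(fused, key=fused.get)                    # retrieval result
    return best, fused
```

Images outside the overlap D are discarded before fusion, which is what restricts the final ranking to candidates supported by both feature channels.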
Preferably, in the image retrieval method based on multi-feature fusion, each visual word represents the color feature of one local region in an image together with the corresponding SIFT feature;
each candidate image contains at least one matching region q whose color and SIFT characteristics are represented by the same visual word as some local region p of image I.
Preferably, in the image retrieval method based on multi-feature fusion, step 6 comprises the following steps:
6.1, compute the matching score of the matching region q against its corresponding local region p:
preset a Hamming distance threshold κ;
compute the Hamming distance d between the matching region q and the corresponding local feature region p;
when d ≥ κ, the matching score of the local feature region corresponding to the matching region is zero;
when d < κ, the matching score of the local feature region corresponding to the matching region is:
f(p, q) = δ_{Qs(p),Qs(q)} · δ_{Qc(p),Qc(q)} · exp(−d²/σ²)
where Qs and Qc denote the quantization functions of the SIFT feature and the color feature; δ denotes the Kronecker delta function; exp(−d²/σ²) denotes the Hamming-distance-based weighting of the match of the local feature region corresponding to the matching region; and σ is a weight parameter;
6.2, normalize image I using the l2 norm; the normalization is:
tf̂_{si,cj} = tf_{si,cj} / sqrt( Σ_{i=1..M} Σ_{j=1..N} tf_{si,cj}² )
where tf_{si,cj} denotes the number of local feature regions in image I corresponding to the visual word; M denotes the number of SIFT words in the SIFT visual dictionary; and N denotes the number of color words in the color dictionary;
6.3, compute the color-and-SIFT similarity score(Q, I) of each candidate image Q and image I by accumulating the idf-weighted matching scores over their shared visual words,
where idf denotes the weighting coefficient of the visual words established by the SIFT visual dictionary and the color visual dictionary.
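Step 6.1 can be sketched as follows. The Kronecker-delta terms are modeled by requiring that the quantized (SIFT word, color word) pairs of the two regions agree, and the Hamming weighting uses the exponential form exp(−d²/σ²) named above; κ = 12 and σ = 8 are illustrative values only, and the binary signatures are an assumed input representation.

```python
import numpy as np

def match_score(word_p, word_q, sig_p, sig_q, kappa=12, sigma=8.0):
    """Matching score of a local region p against a matching region q.
    word_p/word_q are (SIFT word index, color word index) pairs; the
    score is zero unless both quantized words agree (delta terms).
    sig_p/sig_q are binary signature arrays whose Hamming distance d
    weights the match; d >= kappa gives score zero."""
    if word_p != word_q:                          # delta_{Qs} * delta_{Qc}
        return 0.0
    d = int(np.count_nonzero(sig_p != sig_q))     # Hamming distance d
    if d >= kappa:                                # thresholded at kappa
        return 0.0
    return float(np.exp(-d ** 2 / sigma ** 2))    # Hamming weighting
```

A larger σ makes the score decay more slowly with the Hamming distance, i.e. tolerates noisier signature matches.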
Preferably, in the image retrieval method based on multi-feature fusion, the weighting coefficient idf of a visual word is computed as:
idf(W_ij) = ln( N / n_{si,cj} )
where W_ij denotes a visual word, Si denotes a word in the SIFT-feature dictionary, Cj denotes a word in the color-feature dictionary, N denotes the total number of images in the library represented by all visual words, and n_{si,cj} denotes the number of images in the library containing the visual word W_ij.
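The idf weighting can be written directly as code. The logarithmic form ln(N/n) is the standard inverse document frequency and is an assumed reconstruction, since the published formula image is not reproduced in this text.

```python
import math

def idf_weight(total_images, images_with_word):
    """Inverse-document-frequency weight of a visual word W_ij:
    total_images is N (library size), images_with_word is n_{si,cj},
    the number of library images containing the word. Rare words get
    large weights; a word present in every image gets weight zero."""
    return math.log(total_images / images_with_word)
```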
Preferably, in the image retrieval method based on multi-feature fusion, step 7 comprises constructing the visual-feature saliency map of image I, which comprises the following steps:
Step 7.1, uniformly cut image I into L non-overlapping image blocks pi, i = 1, 2, …, L, such that after cutting each row contains N blocks and each column contains J blocks, and each block is square; vectorize each image block pi into a column vector fi, and reduce the dimensionality of all vectors by principal component analysis, obtaining a d × L matrix U after reduction, whose i-th column corresponds to the dimension-reduced vector of block pi; the matrix U is written as:
U = [X1 X2 … Xd]^T
Step 7.2, compute the visual saliency degree Sal_i of each image block pi, where
M_i = max_j ω_ij, j = 1, 2, …, L
D = max{W, H}
and where the dissimilarity between blocks pi and pj is computed from the dimension-reduced vectors, ω_ij denotes the distance between blocks pi and pj, u_mn denotes the element in row m and column n of matrix U, and (x_pi, y_pi), (x_pj, y_pj) denote the center coordinates of blocks pi and pj on the original query image I;
Step 7.3, arrange the saliency degree values of all image blocks into two-dimensional form according to the blocks' positions on the original query image I, forming the saliency map SalMap, with values:
SalMap(i, j) = Sal_{(i−1)·N+j}, i = 1, …, J, j = 1, …, N
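Steps 7.1 to 7.3 can be sketched as follows. The exact dissimilarity and distance weighting of the published formulas are not fully recoverable from this text, so an l1 dissimilarity in PCA space, attenuated by the normalized center distance, is used as an assumption; the PCA is done with a plain SVD, and block = 8, d = 3 are illustrative parameters.

```python
import numpy as np

def saliency_map(img, block=8, d=3):
    """Cut a grayscale image into non-overlapping block x block patches,
    PCA-reduce the vectorized patches to d dimensions, score each patch
    by its distance-weighted dissimilarity to all other patches, and
    reshape the scores into a J x N map (J rows, N columns of blocks)."""
    H, W = img.shape
    J, N = H // block, W // block
    patches, centers = [], []
    for bi in range(J):
        for bj in range(N):
            patches.append(img[bi*block:(bi+1)*block,
                               bj*block:(bj+1)*block].ravel())
            centers.append(((bi + 0.5) * block, (bj + 0.5) * block))
    X = np.asarray(patches, dtype=float)
    X -= X.mean(axis=0)                      # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    U = X @ Vt[:d].T                         # L x d reduced patch vectors
    centers = np.asarray(centers)
    D = max(W, H)                            # D = max{W, H}
    L = len(U)
    sal = np.zeros(L)
    for i in range(L):
        diss = np.abs(U - U[i]).sum(axis=1)                  # l1 dissimilarity
        dist = np.linalg.norm(centers - centers[i], axis=1) / D
        w = diss / (1.0 + dist)              # nearer blocks weigh more
        w[i] = 0.0
        sal[i] = w.max()                     # M_i = max over j
    return sal.reshape(J, N)                 # SalMap in 2-D block layout
```

A perfectly uniform image yields an all-zero map, since every centered patch vector is zero.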
Preferably, in the image retrieval method based on multi-feature fusion, the detailed process of step 7 is:
Step 7.1, compute the saliency mean T of the visual-feature saliency map of image I:
T = (1 / (W·H)) Σ_{x=1..H} Σ_{y=1..W} SalMap(x, y)
where image I contains H pixels in its vertical direction, x indexing a pixel in the vertical direction, and W pixels in its horizontal direction, y indexing a pixel in the horizontal direction;
Step 7.2, shrink each local region of image I to the minimum rectangle containing it, and compute within that rectangle the saliency mean Ti of each local region:
Ti = (1 / (w·h)) Σ_x Σ_y sal_map_si(x, y)
where the minimum rectangle contains h pixels along the x axis and w pixels along the y axis, and sal_map_si(x, y) denotes the saliency value of each sub-feature region si;
Step 7.3, weight the saliency mean with a saliency weight, the result being denoted nT; compare each Ti with nT, and extract the local regions whose Ti exceeds nT as the visually salient local regions of image I.
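The thresholding of steps 7.1 to 7.3 can be sketched as follows, with the saliency weight n taken as 1.0 by assumption (the patent leaves it as a tunable weight) and each region given by its minimum enclosing rectangle (x0, y0, x1, y1).

```python
import numpy as np

def salient_regions(sal_map, rects, n=1.0):
    """Keep the regions whose mean saliency Ti, taken over the region's
    minimum enclosing rectangle, exceeds n * T, where T is the mean of
    the whole saliency map. Returns the indices of the kept regions."""
    T = sal_map.mean()                        # global saliency mean T
    kept = []
    for idx, (x0, y0, x1, y1) in enumerate(rects):
        Ti = sal_map[y0:y1, x0:x1].mean()     # region saliency mean Ti
        if Ti > n * T:                        # nT threshold of step 7.3
            kept.append(idx)
    return kept
```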
On the basis of the classical bag-of-words model, the present invention fuses SIFT features with color features and introduces visual saliency to constrain image regions. This reduces the noise of the image representation and makes the computer's expression of an image better match human understanding of image semantics, giving good retrieval performance.
Further advantages, objects, and features of the present invention will be partly embodied in the following description and partly understood by those skilled in the art through study and practice of the invention.
Description of the drawings
Fig. 1 is the flow chart of the image retrieval method based on multi-feature fusion of the present invention;
Fig. 2 is the flow chart of obtaining the similarity based on color and SIFT characteristics in the image retrieval method based on multi-feature fusion of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it with reference to the description.
It should be appreciated that terms used herein such as "having", "comprising", and "including" do not preclude the presence or addition of one or more other elements or combinations thereof.
As shown in Fig. 1, the present invention provides an image retrieval method based on multi-feature fusion, comprising:
Step 1: input an image I to be retrieved;
Step 2: divide image I into multiple local regions, and construct, for each local region, a color feature vector and a scale-invariant feature transform (SIFT) feature vector representing its color characteristics;
Step 3: perform step 2 on each image in the query image library; cluster the results into a color-feature dictionary and a SIFT-feature dictionary; combine the color words of the color-feature dictionary with the SIFT words of the SIFT-feature dictionary into visual words; and represent the images in the library with these visual words;
Step 4: represent the color feature vectors and corresponding SIFT feature vectors of image I with the visual words; retrieve from the query library the images sharing visual words with image I as the candidate image set Q of image I; and compute, for each candidate in Q, the color-and-SIFT-based similarity to image I, denoted score(Q, I);
Step 5: compute the visual saliency mean T of image I and the saliency mean Ti of each local region; extract the regions whose Ti exceeds T as the visually salient local regions Si; repeat steps 3 and 4 on the regions Si to obtain a candidate image set K; and compute the saliency-based similarity of each image in K to image I, denoted score_sal(K, I);
Step 6: take the overlap of candidate sets K and Q as the image set D; fuse the saliency-based similarity score_sal(D, I) with the color-and-SIFT similarity score(D, I) to compute the final similarity score*(D, I) of each image in D to image I;
Step 7: return the image in D with the highest final similarity as the retrieval result of image I.
In the above scheme, the detailed process of the image retrieval method based on multi-feature fusion is:
Step 1: input an image I to be retrieved;
Step 2: divide image I into multiple local regions such that pixels within each region share the same color and adjacent regions have different colors;
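Step 2's region division can be sketched as below. The patent does not name a segmentation algorithm, so a 4-connected flood fill over an already color-quantized label image is used here purely as an assumption: it produces exactly the stated property that pixels within a region share a color and adjacent regions differ.

```python
import numpy as np

def color_regions(img):
    """Label connected regions of equal quantized color in a 2-D array
    of color indices. Returns (labels, region_count); labels has the
    same shape as img, with one integer label per region."""
    H, W = img.shape
    labels = -np.ones((H, W), dtype=int)
    regions = 0
    for sy in range(H):
        for sx in range(W):
            if labels[sy, sx] >= 0:
                continue                      # already assigned
            stack, c = [(sy, sx)], img[sy, sx]
            labels[sy, sx] = regions
            while stack:                      # 4-connected flood fill
                y, x = stack.pop()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < H and 0 <= nx < W \
                       and labels[ny, nx] < 0 and img[ny, nx] == c:
                        labels[ny, nx] = regions
                        stack.append((ny, nx))
            regions += 1
    return labels, regions
```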
Step 3: construct the color feature vector and the scale-invariant feature transform (SIFT) feature vector representing the color characteristics of each local region;
Step 4: perform steps 2 and 3 on each image in the query library to obtain its color feature vectors and SIFT feature vectors; cluster them into a color-feature dictionary and a SIFT-feature dictionary; combine the color words and the SIFT words into visual words; and, using the correspondence between color words, SIFT words, and the feature vectors, represent the images in the library with visual words. Each visual word represents the color feature of one local region in an image together with the corresponding SIFT feature; therefore, each candidate image contains at least one matching region q whose color and SIFT characteristics are represented by the same visual word as some local region p of image I;
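The dictionary construction and word assignment of step 4 can be sketched as below. The patent does not specify the clustering algorithm or the dictionary sizes, so a minimal NumPy k-means stands in for the clustering, and a visual word is modeled as the pair (color word index, SIFT word index).

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: the k cluster centers form a feature dictionary
    (one run for color features, one for SIFT features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each vector to its nearest center, then recompute centers
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers, assign

def visual_word(color_vec, sift_vec, color_centers, sift_centers):
    """Quantize a region's color and SIFT vectors to their nearest
    dictionary centers; the combined visual word is the index pair."""
    cw = int(np.argmin(((color_centers - color_vec) ** 2).sum(-1)))
    sw = int(np.argmin(((sift_centers - sift_vec) ** 2).sum(-1)))
    return (cw, sw)
```

Two regions match under step 4 exactly when their index pairs are equal, which is what makes the candidate set Q retrievable by a word lookup.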
Step 5: represent the color feature vectors of image I and the corresponding SIFT feature vectors with visual words; retrieve from the query library the images sharing visual words with image I as the candidate image set Q of image I;
Step 6: compute the color-and-SIFT-based similarity of each candidate in Q to image I, denoted score(Q, I); the detailed process is:
6.1, compute the matching score of the matching region q against its corresponding local region p:
preset a Hamming distance threshold κ;
compute the Hamming distance d between the matching region q and the corresponding local feature region p;
when d ≥ κ, the matching score of the local feature region corresponding to the matching region is zero;
when d < κ, the matching score of the local feature region corresponding to the matching region is:
f(p, q) = δ_{Qs(p),Qs(q)} · δ_{Qc(p),Qc(q)} · exp(−d²/σ²)
where Qs and Qc denote the quantization functions of the SIFT feature and the color feature; δ denotes the Kronecker delta function; exp(−d²/σ²) denotes the Hamming-distance-based weighting of the match; and σ is a weight parameter;
6.2, normalize image I using the l2 norm; the normalization is:
tf̂_{si,cj} = tf_{si,cj} / sqrt( Σ_{i=1..M} Σ_{j=1..N} tf_{si,cj}² )
where tf_{si,cj} denotes the number of local feature regions in image I corresponding to the visual word; M denotes the number of SIFT words in the SIFT visual dictionary; and N denotes the number of color words in the color dictionary;
6.3, compute the weighting coefficient idf of the visual word:
idf(W_ij) = ln( N / n_{si,cj} )
where W_ij denotes a visual word, Si denotes a word in the SIFT-feature dictionary, Cj denotes a word in the color-feature dictionary, N denotes the total number of images in the library represented by all visual words, and n_{si,cj} denotes the number of images in the library containing the visual word W_ij.
6.4, compute the color-and-SIFT similarity score(Q, I) of each candidate image Q and image I by accumulating the idf-weighted matching scores over their shared visual words, where idf denotes the weighting coefficient of the visual words established by the SIFT visual dictionary and the color visual dictionary.
Step 7: compute the visual saliency mean T of image I and the saliency mean Ti of each local region of image I; extract the regions whose Ti exceeds T as the visually salient local regions Si. The detailed process is:
7.1, construct the visual-feature saliency map of image I:
Step 7.1.1, uniformly cut image I into L non-overlapping image blocks pi, i = 1, 2, …, L, such that after cutting each row contains N blocks and each column contains J blocks, and each block is square; vectorize each image block pi into a column vector fi, and reduce the dimensionality of all vectors by principal component analysis, obtaining a d × L matrix U after reduction, whose i-th column corresponds to the dimension-reduced vector of block pi; the matrix U is written as:
U = [X1 X2 … Xd]^T
Step 7.1.2, compute the visual saliency degree Sal_i of each image block pi, where
M_i = max_j ω_ij, j = 1, 2, …, L
D = max{W, H}
and where the dissimilarity between blocks pi and pj is computed from the dimension-reduced vectors, ω_ij denotes the distance between blocks pi and pj, u_mn denotes the element in row m and column n of matrix U, and (x_pi, y_pi), (x_pj, y_pj) denote the center coordinates of blocks pi and pj on the original query image I;
Step 7.1.3, arrange the saliency degree values of all image blocks into two-dimensional form according to the blocks' positions on the original query image I, forming the saliency map SalMap, with values:
SalMap(i, j) = Sal_{(i−1)·N+j}, i = 1, …, J, j = 1, …, N
Step 7.2, compute the saliency mean T of the visual-feature saliency map of image I:
T = (1 / (W·H)) Σ_{x=1..H} Σ_{y=1..W} SalMap(x, y)
where image I contains H pixels in its vertical direction, x indexing a pixel in the vertical direction, and W pixels in its horizontal direction, y indexing a pixel in the horizontal direction;
Step 7.3, shrink each local region of image I to the minimum rectangle containing it, and compute within that rectangle the saliency mean Ti of each local region:
Ti = (1 / (w·h)) Σ_x Σ_y sal_map_si(x, y)
where the minimum rectangle contains h pixels along the x axis and w pixels along the y axis, and sal_map_si(x, y) denotes the saliency value of each sub-feature region si;
Step 7.4, weight the saliency mean with a saliency weight, the result being denoted nT; compare each Ti with nT, and extract the local regions whose Ti exceeds nT as the visually salient local regions of image I.
Step 8: repeat steps 3 to 5 on the regions Si obtained in step 7 to obtain the candidate image set K; compute the saliency-based similarity of each image in K to image I, denoted score_sal(K, I);
Step 9: take the overlap of candidate sets K and Q as the image set D; fuse the saliency-based similarity score_sal(D, I) with the color-and-SIFT similarity score(D, I) to compute the final similarity score*(D, I) of each image in D to image I, where
score*(D, I) = α·score(D, I) + β·score_sal(D, I)
with α + β = 1, α and β being the weighting coefficients of the final similarity score;
Step 10: return the image in D with the highest final similarity as the retrieval result of image I.
Although the embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments; the invention can be fully applied to various suitable fields, and those skilled in the art can easily realize further modifications. Therefore, without departing from the general concept defined by the claims and their equivalent scope, the present invention is not limited to the specific details and the illustrations shown and described herein.

Claims (8)

1. An image retrieval method based on multi-feature fusion, characterized by comprising the following steps:
Step 1: input an image I to be retrieved;
Step 2: divide image I into multiple local regions, and construct, for each local region, a color feature vector and a scale-invariant feature transform (SIFT) feature vector representing its color characteristics;
Step 3: perform step 2 on each image in the query image library; cluster the results into a color-feature dictionary and a SIFT-feature dictionary; combine the color words of the color-feature dictionary with the SIFT words of the SIFT-feature dictionary into visual words; and represent the images in the library with these visual words;
Step 4: represent the color feature vectors and corresponding SIFT feature vectors of image I with the visual words; retrieve from the query library the images sharing visual words with image I as the candidate image set Q of image I; and compute, for each candidate in Q, the color-and-SIFT-based similarity to image I, denoted score(Q, I);
Step 5: compute the visual saliency mean T of image I and the saliency mean Ti of each local region; extract the regions whose Ti exceeds T as the visually salient local regions Si; repeat steps 3 and 4 on the regions Si to obtain a candidate image set K; and compute the saliency-based similarity of each image in K to image I, denoted score_sal(K, I);
Step 6: take the overlap of candidate sets K and Q as the image set D; fuse the saliency-based similarity score_sal(D, I) with the color-and-SIFT similarity score(D, I) to compute the final similarity score*(D, I) of each image in D to image I;
Step 7: return the image in D with the highest final similarity as the retrieval result of image I.
2. An image retrieval method based on multi-feature fusion, characterized by comprising:
Step 1: inputting an image I to be retrieved;
Step 2: dividing the image I into multiple local regions, the pixels within each local region having the same color, and adjacent local regions differing in color;
Step 3: constructing a color feature vector representing the color characteristics of each local region and a scale-invariant feature transform (SIFT) feature vector;
Step 4: executing steps 2 and 3 for each image in the query image library to obtain the color feature vector and SIFT feature vector of each image, and clustering to obtain a color feature dictionary and a SIFT feature dictionary; combining a color word from the color feature dictionary with a SIFT word from the SIFT feature dictionary to form a visual word; and representing the images in the library with the visual words according to the correspondence between the color and SIFT words and the color and SIFT feature vectors;
Step 5: representing the color feature vector of the image I and the corresponding SIFT feature vector with the visual words; retrieving from the query image library the images that share a visual word with the image I as the candidate image set Q of the image I;
Step 6: calculating, for each image in the candidate image set Q, a similarity value to the image I based on color and SIFT characteristics, denoted score(Q, I);
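The candidate-retrieval mechanism of steps 4-6 — joint (SIFT word, color word) visual words looked up in an inverted index — can be sketched as follows. The function names, toy image ids, and integer word ids are illustrative, not taken from the patent:

```python
from collections import defaultdict

def build_inverted_index(library):
    """Map each joint visual word (sift_word, color_word) to the set of images containing it.

    `library` maps an image id to the set of joint visual words found in that image."""
    index = defaultdict(set)
    for image_id, words in library.items():
        for word in words:
            index[word].add(image_id)
    return index

def candidate_set(query_words, index):
    """Candidate set Q: every library image sharing at least one visual word with the query."""
    candidates = set()
    for word in query_words:
        candidates |= index.get(word, set())
    return candidates

# Toy library: each image is described by joint (SIFT word, color word) pairs.
library = {
    "img_a": {(3, 1), (7, 2)},
    "img_b": {(3, 1), (9, 5)},
    "img_c": {(8, 8)},
}
index = build_inverted_index(library)
q = candidate_set({(3, 1), (8, 8)}, index)  # all three images share some word
```

Because a visual word pairs a SIFT word *and* a color word, two regions only match when both channels quantize identically, which is what makes the fused vocabulary more discriminative than either channel alone.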
Step 7: calculating the visual saliency mean T of the image I and the visual saliency mean Ti of each local region of the image I, and extracting the local regions whose saliency mean Ti exceeds T as the local regions Si having visual saliency;
Step 8: repeating steps 3 and 5 for the regions Si obtained in step 7 to obtain the candidate image set K, and calculating a visual-saliency-based similarity value between each image in the candidate image set K and the image I, denoted score_sal(K, I);
Step 9: taking the overlap of the candidate image set K and the candidate image set Q as the image set D; fusing the saliency-based similarity value score_sal(D, I) with the color-and-SIFT-based similarity value score(D, I) to calculate the final similarity score*(D, I) between each image in the set D and the image I;
Step 10: returning the image with the highest final similarity in the set D as the retrieval result for the image I.
3. The image retrieval method based on multi-feature fusion according to claim 2, characterized in that the final similarity value score*(D, I) in step 9 is calculated as:
score*(D, I) = α·score(D, I) + β·score_sal(D, I)
wherein α + β = 1, and α and β are the weighting coefficients of the final similarity score.
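The fusion of claim 3 is a convex combination of the two similarity channels over the overlap set D. A direct transcription, with β derived from α so that α + β = 1 always holds (the dictionary representation of the score tables is illustrative):

```python
def fuse_scores(score, score_sal, alpha=0.5):
    """score*(D, I) = alpha * score(D, I) + beta * score_sal(D, I), with beta = 1 - alpha.

    `score` and `score_sal` map each candidate image to its color/SIFT similarity
    and its saliency-based similarity; the overlap of their keys plays the role of D."""
    beta = 1.0 - alpha  # enforces the constraint alpha + beta = 1
    return {img: alpha * score[img] + beta * score_sal[img]
            for img in score.keys() & score_sal.keys()}

final = fuse_scores({"a": 0.8, "b": 0.2}, {"a": 0.5, "b": 0.9}, alpha=0.5)
best = max(final, key=final.get)  # step 10: highest fused similarity wins
```

With α = 0.5 the two channels contribute equally; shifting α toward 1 trusts the color/SIFT channel more, toward 0 the saliency channel.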
4. The image retrieval method based on multi-feature fusion according to claim 3, characterized in that each visual word represents the color feature of one local region in an image together with its SIFT feature;
each candidate image contains at least one matching region q whose color characteristics and SIFT characteristics are represented by the same visual word as those of a local region p of the image I.
5. The image retrieval method based on multi-feature fusion according to claim 2, characterized in that step 6 comprises the following steps:
6.1, calculating the matching score of the matching region q with the corresponding local region p:
presetting a Hamming distance threshold κ;
calculating the Hamming distance d between the matching region q and the corresponding local feature region p;
when d ≥ κ, the matching score of the local feature region corresponding to the matching region is zero;
when d < κ, the matching score of the local feature region corresponding to the matching region is calculated by the formula:
wherein Qs and Qc denote the quantization functions of the SIFT feature and the color feature; δ denotes the Kronecker function; the Hamming distance of the local feature region corresponding to the matching region is used to weight the match, with σ as the weighting parameter;
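The matching-score formula itself was an image in the original and is not reproduced in the text. Its description — Kronecker δ on the two quantizers Qs and Qc, a zero score at Hamming distance d ≥ κ, and a σ-parameterized distance weighting below the threshold — matches standard Hamming-embedding weighting, so the sketch below assumes the conventional Gaussian form exp(−d²/σ²); the field names and default parameter values are illustrative:

```python
import math

def hamming(sig1, sig2):
    """Hamming distance between two equal-length binary signatures."""
    return sum(x != y for x, y in zip(sig1, sig2))

def match_score(q, p, kappa=24, sigma=16.0):
    """Matching score of matching region q against local region p.

    q and p are dicts with quantized words 'qs' (SIFT), 'qc' (color) and a binary
    signature 'sig'. The exp(-d^2 / sigma^2) weighting is an assumption drawn from
    standard Hamming-embedding practice, not copied verbatim from the patent."""
    # Kronecker deltas: both quantizers must agree for a non-zero score.
    if q["qs"] != p["qs"] or q["qc"] != p["qc"]:
        return 0.0
    d = hamming(q["sig"], p["sig"])
    if d >= kappa:  # claim 5: the matching score is zero when d >= kappa
        return 0.0
    return math.exp(-d * d / sigma ** 2)

a = {"qs": 3, "qc": 1, "sig": [0, 1, 1, 0]}
b = {"qs": 3, "qc": 1, "sig": [0, 1, 0, 0]}  # same words, Hamming distance 1
```

The threshold κ prunes quantization-cell collisions between genuinely different regions, while σ controls how quickly confidence decays with signature distance.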
6.2, normalizing the image I using the l2 norm, the normalization formula being:
wherein tf_si,cj denotes the number of local feature regions in the image I corresponding to the visual word; m denotes the number of SIFT words contained in the SIFT visual dictionary; n denotes the number of color words contained in the color dictionary;
6.3, calculating the similarity score score(Q, I) between each candidate image Q and the image I based on the color feature and the SIFT feature by the formula:
wherein idf denotes the weighting coefficient assigned to a visual word by the SIFT visual dictionary and the color visual dictionary.
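The formula images for steps 6.2-6.3 are likewise missing from the text, but the description amounts to an idf-weighted inner product of l2-normalized term-frequency vectors over the joint vocabulary. The sketch below uses the conventional tf-idf form that the description implies; the log(N/n) idf is an assumption, since the patent's exact expression (claim 6) is not shown:

```python
import math

def l2_normalize(tf):
    """l2-normalize a {visual_word: count} histogram (step 6.2)."""
    norm = math.sqrt(sum(v * v for v in tf.values()))
    return {w: v / norm for w, v in tf.items()} if norm else dict(tf)

def idf_weight(n_images_total, n_images_with_word):
    """Conventional idf = log(N / n); assumed, as the patent's formula is not reproduced."""
    return math.log(n_images_total / n_images_with_word)

def tfidf_similarity(tf_query, tf_cand, idf):
    """idf-weighted inner product of two l2-normalized tf vectors (step 6.3)."""
    q, c = l2_normalize(tf_query), l2_normalize(tf_cand)
    return sum(q[w] * c[w] * idf.get(w, 0.0) for w in q.keys() & c.keys())

# Joint visual words are (SIFT word, color word) pairs.
idf = {("s1", "c1"): idf_weight(100, 10), ("s2", "c2"): idf_weight(100, 50)}
sim = tfidf_similarity({("s1", "c1"): 2, ("s2", "c2"): 1}, {("s1", "c1"): 1}, idf)
```

Rare joint words (small n) receive large idf weights, so candidates sharing distinctive region descriptions score higher than candidates sharing only common ones.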
6. The image retrieval method based on multi-feature fusion according to claim 5, characterized in that the weighting coefficient idf of a visual word is calculated by the formula:
wherein W_ij denotes the visual word, Si denotes a word in the SIFT feature dictionary, Cj denotes a word in the color feature dictionary, N denotes the total number of images in the library covered by all visual words, and n_si,cj denotes the number of images in the library corresponding to the visual word.
7. The image retrieval method based on multi-feature fusion according to claim 2, characterized in that step 7 comprises constructing a visual feature saliency map of the image I, the construction comprising the following steps:
Step 7.1: uniformly dividing the image I into L non-overlapping image blocks p_i, i = 1, 2, ..., L, such that after division each row contains N blocks and each column contains J blocks, each block being square; vectorizing each image block p_i into a column vector f_i, and reducing the dimensionality of all the vectors by the principal component analysis algorithm to obtain a d × L matrix U, the i-th column of which is the dimensionality-reduced vector of the image block p_i; the matrix U is written as:
U = [X1 X2 ... Xd]^T
Step 7.2: calculating the visual saliency of each image block p_i, the visual saliency being given by the formula, with:
M_i = max_j ω_ij, j = 1, 2, ..., L
D = max{W, H}
wherein the dissimilarity between image blocks p_i and p_j is computed from the matrix U; ω_ij denotes the distance between image blocks p_i and p_j; u_mn denotes the element in row m and column n of the matrix U; and (x_pi, y_pi) and (x_pj, y_pj) denote the center coordinates of the blocks p_i and p_j on the original query image I;
Step 7.3: arranging the saliency values of all image blocks into two-dimensional form according to the positional relationship of the blocks on the original query image I, forming the saliency map SalMap, whose entries are:
SalMap(i, j) = Sal_(i-1)·N+j, i = 1, ..., J, j = 1, ..., N.
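Steps 7.1-7.3 describe a patch-based saliency model: a patch is salient when it is dissimilar from the other patches, with spatially closer patches weighted more and D = max{W, H} normalizing the distances. The main saliency formula was an image in the original, so the sketch below assumes an inverse-distance weighting of the form 1/(1 + dist/D) and works on raw patch vectors, omitting the PCA reduction for brevity:

```python
import math

def patch_saliency(patches, centers, width, height):
    """Saliency per patch: dissimilarity to all other patches, down-weighted by distance.

    `patches` is a list of flattened patch vectors and `centers` their (x, y) centers
    on the image. The weighting form is an assumption consistent with, not copied
    from, the patent (whose formula is not reproduced in the text)."""
    D = max(width, height)  # claim 7: D = max{W, H}
    sal = []
    for i, (pi, ci) in enumerate(zip(patches, centers)):
        total = 0.0
        for j, (pj, cj) in enumerate(zip(patches, centers)):
            if i == j:
                continue
            dissim = sum(abs(a - b) for a, b in zip(pi, pj))   # L1 patch dissimilarity
            dist = math.hypot(ci[0] - cj[0], ci[1] - cj[1])    # center-to-center distance
            total += dissim / (1.0 + dist / D)                 # nearer patches weigh more
        sal.append(total)
    return sal

# Three patches: two identical, one different -> the odd one out is the most salient.
patches = [[0, 0], [0, 0], [9, 9]]
centers = [(0, 0), (10, 0), (5, 10)]
sal = patch_saliency(patches, centers, width=20, height=20)
```

Reshaping `sal` row by row (N patches per row) then yields the two-dimensional SalMap of step 7.3.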
8. The image retrieval method based on multi-feature fusion according to claim 7, characterized in that the process in step 7 of extracting the local regions whose visual saliency mean Ti exceeds T as the local regions Si having visual saliency is as follows:
Step 7.1: calculating the saliency mean T of the visual feature saliency map of the image I by the formula:
wherein the image I contains H pixels in its vertical direction, x denoting a pixel in the vertical direction, and W pixels in its horizontal direction, y denoting a pixel in the horizontal direction;
Step 7.2: shrinking each local region of the image I to the minimum rectangle containing that region, and calculating the saliency mean T_i of each local region within that rectangle by the formula:
wherein the minimum rectangle contains h pixels along the x-axis and w pixels along the y-axis; sal_map_si(x, y) denotes the saliency value of each sub-feature region s_i;
Step 7.3: weighting the saliency mean with a saliency weight to obtain nT; comparing the saliency mean T_i with nT, and extracting the local regions whose T_i exceeds nT as the local regions of the image I having saliency.
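The thresholding of claim 8 — compare each region's saliency mean T_i against a weighted image-wide mean nT — can be sketched over a plain 2-D saliency map as follows. The mean formulas were images in the original, so plain averages over the map and over each region's bounding rectangle are used here:

```python
def mean_saliency(sal_map):
    """Image-wide saliency mean T over an H x W map given as a list of rows."""
    values = [v for row in sal_map for v in row]
    return sum(values) / len(values)

def region_mean(sal_map, rect):
    """Saliency mean T_i inside a region's minimum bounding rectangle (x, y, w, h)."""
    x, y, w, h = rect
    values = [sal_map[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    return sum(values) / len(values)

def salient_regions(sal_map, rects, n=1.0):
    """Keep the regions whose mean exceeds n * T (claim 8 compares T_i against nT)."""
    threshold = n * mean_saliency(sal_map)
    return [r for r in rects if region_mean(sal_map, r) > threshold]

# Toy map: the right half is uniformly more salient than the left half.
sal_map = [[0.1, 0.1, 0.9, 0.9],
           [0.1, 0.1, 0.9, 0.9]]
rects = [(0, 0, 2, 2), (2, 0, 2, 2)]  # left-half and right-half regions
kept = salient_regions(sal_map, rects)
```

The weight n acts as a sensitivity knob: n = 1 keeps any region above the image average, larger n keeps only strongly salient regions for the second retrieval pass.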
CN201510564819.5A 2015-09-07 2015-09-07 A kind of image search method of multiple features fusion Active CN105138672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510564819.5A CN105138672B (en) 2015-09-07 2015-09-07 A kind of image search method of multiple features fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510564819.5A CN105138672B (en) 2015-09-07 2015-09-07 A kind of image search method of multiple features fusion

Publications (2)

Publication Number Publication Date
CN105138672A CN105138672A (en) 2015-12-09
CN105138672B true CN105138672B (en) 2018-08-21

Family

ID=54724019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510564819.5A Active CN105138672B (en) 2015-09-07 2015-09-07 A kind of image search method of multiple features fusion

Country Status (1)

Country Link
CN (1) CN105138672B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787965B (en) * 2016-01-26 2018-08-07 安徽创世科技股份有限公司 A kind of image search method based on color characteristic
CN107103270A (en) * 2016-02-23 2017-08-29 云智视像科技(上海)有限公司 A kind of face identification system of the dynamic calculation divided group coefficient based on IDF
CN107577687B (en) * 2016-07-20 2020-10-02 北京陌上花科技有限公司 Image retrieval method and device
CN111368126B (en) * 2017-02-13 2022-06-07 哈尔滨理工大学 Image retrieval-oriented generation method
CN107357834A (en) * 2017-06-22 2017-11-17 浙江工业大学 A kind of image search method of view-based access control model conspicuousness fusion
CN110147459B (en) * 2017-07-28 2021-08-20 杭州海康威视数字技术股份有限公司 Image retrieval method and device and electronic equipment
CN108170791A (en) * 2017-12-27 2018-06-15 四川理工学院 Video image search method
CN110019910A (en) * 2017-12-29 2019-07-16 上海全土豆文化传播有限公司 Image search method and device
CN109558823B (en) * 2018-11-22 2020-11-24 北京市首都公路发展集团有限公司 Vehicle identification method and system for searching images by images
CN113407756B (en) * 2021-05-28 2022-10-11 山西云时代智慧城市技术发展有限公司 Lung nodule CT image reordering method based on self-adaptive weight

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117337A (en) * 2011-03-31 2011-07-06 西北工业大学 Space information fused Bag of Words method for retrieving image
CN103049446A (en) * 2011-10-13 2013-04-17 中国移动通信集团公司 Image retrieving method and device
CN103336835A (en) * 2013-07-12 2013-10-02 西安电子科技大学 Image retrieval method based on weight color-sift characteristic dictionary
CN103838864A (en) * 2014-03-20 2014-06-04 北京工业大学 Visual saliency and visual phrase combined image retrieval method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8488883B2 (en) * 2009-12-28 2013-07-16 Picscout (Israel) Ltd. Robust and efficient image identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102117337A (en) * 2011-03-31 2011-07-06 西北工业大学 Space information fused Bag of Words method for retrieving image
CN103049446A (en) * 2011-10-13 2013-04-17 中国移动通信集团公司 Image retrieving method and device
CN103336835A (en) * 2013-07-12 2013-10-02 西安电子科技大学 Image retrieval method based on weight color-sift characteristic dictionary
CN103838864A (en) * 2014-03-20 2014-06-04 北京工业大学 Visual saliency and visual phrase combined image retrieval method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Combined Model for Scan Path in Pedestrian Searching; Lijuan Duan et al.; International Joint Conference on Neural Networks; 20141231; pp. 2156-2161 *
Research on Key Technologies of Content-Based Image Retrieval and Filtering; Duan Lijuan; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20170215 (No. 2); I138-181 *

Also Published As

Publication number Publication date
CN105138672A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
CN105138672B (en) A kind of image search method of multiple features fusion
CN107291871B (en) Matching degree evaluation method, device and medium for multi-domain information based on artificial intelligence
Gao et al. Database saliency for fast image retrieval
JP3781696B2 (en) Image search method and search device
CN104200240B (en) A kind of Sketch Searching method based on content-adaptive Hash coding
CN104850633B (en) A kind of three-dimensional model searching system and method based on the segmentation of cartographical sketching component
Patil et al. Content based image retrieval using various distance metrics
JP4937395B2 (en) Feature vector generation apparatus, feature vector generation method and program
Huang et al. Sketch-based image retrieval with deep visual semantic descriptor
CN105843925A (en) Similar image searching method based on improvement of BOW algorithm
CN109408655A (en) The freehand sketch retrieval method of incorporate voids convolution and multiple dimensioned sensing network
JP5014479B2 (en) Image search apparatus, image search method and program
Wang et al. A new sketch-based 3D model retrieval approach by using global and local features
Xiao et al. Sketch-based human motion retrieval via selected 2D geometric posture descriptor
CN101276370A (en) Three-dimensional human body movement data retrieval method based on key frame
CN113380360A (en) Similar medical record retrieval method and system based on multi-mode medical record map
Baak et al. An efficient algorithm for keyframe-based motion retrieval in the presence of temporal deformations
CN104111947B (en) A kind of search method of remote sensing images
Rao et al. Deep learning-based image retrieval system with clustering on attention-based representations
Bouksim et al. New approach for 3D Mesh Retrieval using data envelopment analysis
Mumtaz et al. A novel texture image retrieval system based on dual tree complex wavelet transform and support vector machines
Mohammadpour et al. A method for Content-Based Image Retrieval using visual attention model
Li et al. Non-rigid 3D model retrieval using multi-scale local features
Zheng et al. Compounded Face Image Retrieval Based on Vertical Web Image Retrieval
Fang et al. Searching human actions based on a multi-dimensional time series similarity calculation method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221021

Address after: 350015 North side of the first floor, the second floor and the third floor of Building 1 #, M9511 Industrial Park, No. 18, Majiang Road, Mawei District, Fuzhou City, Fujian Province (within the Free Trade Zone)

Patentee after: FUJIAN YOUTONG INDUSTRIAL Co.,Ltd.

Address before: 100124 No. 100 Pingleyuan, Chaoyang District, Beijing

Patentee before: Beijing University of Technology