CN108536772A - Image retrieval method based on multi-feature fusion and diffusion process reordering - Google Patents

Image retrieval method based on multi-feature fusion and diffusion process reordering (Download PDF)

Info

Publication number
CN108536772A
CN108536772A (application CN201810244844.9A)
Authority
CN
China
Prior art keywords
image
value
ldp
feature
diffusion process
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810244844.9A
Other languages
Chinese (zh)
Other versions
CN108536772B (en)
Inventor
周菊香
甘健侯
王俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan University YNU
Yunnan Normal University
Original Assignee
Yunnan Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Normal University filed Critical Yunnan Normal University
Priority to CN201810244844.9A priority Critical patent/CN108536772B/en
Publication of CN108536772A publication Critical patent/CN108536772A/en
Application granted granted Critical
Publication of CN108536772B publication Critical patent/CN108536772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Abstract

The invention discloses an image retrieval method based on multi-feature fusion and diffusion process reordering, comprising: Step1, image feature extraction; Step2, normalizing and fusing the image features extracted in step Step1; Step3, optimizing the feature distances of the fused image features obtained in step Step2 by means of a diffusion process; Step4, reordering the optimized results of step Step3 and retrieving according to the reordered results. The fused feature proposed by the method is easy to extract and has relatively low complexity, and the whole retrieval process requires no image segmentation or training for image classification. The method can effectively solve the problem of the low retrieval accuracy of current conventional retrieval methods based on low-level visual features and better meets users' practical needs for content-based image retrieval.

Description

Image retrieval method based on multi-feature fusion and diffusion process reordering
Technical field
The present invention relates to an image retrieval method based on multi-feature fusion and diffusion process reordering, and belongs to the related fields of computer vision, image processing and image understanding.
Background technology
With the development of computer technology, computer vision related field has obtained more and more researcher's concerns. Image processing techniques is all achieved in all trades and professions and is successfully applied in recent years, content-based image retrieval (Content Based image retrieval, CBIR) it is one of main typical case.CBIR is " scheme to search figure ", is different from tradition The search based on text keyword, CBIR is concerned with the vision content of image itself.Two key links of CBIR are exactly The extraction of characteristics of image and the similarity mode of image.
Characteristics of image can go to describe from the different visual angle such as color, texture and shape, be largely based on based on this The bottom Visual Feature Retrieval Process method of engineer is suggested.Then due to the complexity of image vision content, single feature is often Demand of the user to high retrieval rate is cannot be satisfied because being unable to comprehensive representation picture characteristics, therefore multiple features fusion method is drawn More concerns and research are played.When designing multiple features, not only need to consider the feature of image that single feature can characterize, and And the efficiency of the mutual supplement with each other's advantages and feature extraction between feature is synthetically considered from multiple dimensions, while avoiding Yin Te The negative effect that information redundancy between sign makes computation complexity get higher instead while no raising retrieval rate.Institute To propose that a kind of efficient image multiple features description still has challenge.
The matching of characteristics of image carries out similarity-rough set according to characteristics of image.Currently, most of image search methods are all It is to use traditional measuring similarity mode based on distance.This method only accounts for its in current queries image and database Point between his image has ignored potential data manifold structure between all images to relationship.In order to solve this problem, Diffusion process (Diffusion Process, DP) is suggested, with context-sensitive similar between image in mining data library Degree relationship, this method can effectively improve retrieval rate in the application of image retrieval.However, in the relevant image inspections of DP It is the application in the image data base based on shape most of in rope, is both in only a small amount of natural image application, still Characteristics of image so is stated using the single vision feature based on bottom, the retrieval on large-scale natural image data set Accuracy rate is not greatly enhanced.
Summary of the invention
The present invention provides an image retrieval method based on multi-feature fusion and diffusion process reordering, in order to solve the problem that traditional CBIR image retrieval methods have low accuracy and to achieve efficient retrieval on large-scale natural images.
The technical scheme of the invention is as follows: an image retrieval method based on multi-feature fusion and diffusion process reordering, the method comprising:
Step1, image feature extraction;
Step2, normalizing and fusing the image features extracted in step Step1;
Step3, optimizing the feature distances of the fused image features obtained in step Step2 by means of a diffusion process;
Step4, reordering the optimized results of step Step3 and retrieving according to the reordered results.
The step Step1 is specifically:
Step1.1, extracting the color feature F_Color of each image in the image library;
Step1.2, extracting the LDP feature F_LDP of each image in the image library;
Step1.3, extracting the SIFT bag-of-visual-words feature F_BoF of each image in the image library.
The step Step1.1 is specifically:
The R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively, producing Q_R × Q_G × Q_B new color bins. All pixels of the image are traversed by formula (1), the number of occurrences of each value c is counted, and the color feature F_Color = [F_Color(c)] of dimension Q_R × Q_G × Q_B is obtained.
In the formula, H(i, j) denotes the value in the interval [0, Q_R × Q_G × Q_B − 1] to which the single-channel values of each pixel are mapped; the image size is m × n; c = 0, 1, ..., Q_R × Q_G × Q_B − 1.
The step Step1.2 is specifically:
With each pixel of the image as the center, its 8-neighborhood is convolved with the 3 × 3 Kirsch operators to produce 8 directional response values. The positions corresponding to the k largest directional responses are set to 1 and the others to 0, producing an 8-bit binary code that is converted to a decimal number as the LDP value of the current central element. For a specific k the LDP value takes C(8, k) distinct values h, so each pixel produces one LDP value. All pixel LDP values of the image are then traversed, the number of occurrences of each value is counted by formula (2), and the LDP feature F_LDP of dimension C(8, k) is obtained.
In the formula, LDP_k(i, j) denotes the LDP value of image pixel (i, j) when the k largest directional responses are selected; the image size is m × n.
The step Step1.3 is specifically:
The image is divided into uniform blocks and the SIFT feature of the center pixel of each block is extracted. All image-block central elements are clustered with the K-means clustering method, producing K cluster centers, each corresponding to one visual word. The distance from each block of an image to each cluster center is computed and the block is assigned the visual-word index of its nearest cluster center. All blocks of each image are traversed and the number of occurrences of each visual word is counted by formula (3), forming the K-dimensional SIFT bag-of-visual-words feature F_BoF = [F_BoF(v)]:
In the formula, I(g) denotes the index assigned to the g-th image block, N_patch denotes the total number of image blocks, v = 1, 2, ..., K, and the image size is m × n.
The step Step2 is specifically: the image features extracted in Step1 are normalized and fused using formula (4):
Here F denotes the final fused feature, F_Color(ii) denotes the ii-th component of the color feature F_Color, F_LDP(jj) the jj-th component of the LDP feature F_LDP, and F_BoF(kk) the kk-th component of the SIFT bag-of-visual-words feature F_BoF; Q_R × Q_G × Q_B is the dimension of the color feature, C(8, k) the dimension of the LDP feature, and K the dimension of the SIFT bag-of-visual-words feature; the R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively.
The step Step3 is specifically:
Step3.1, after step Step2 an image feature of dimension Q_R × Q_G × Q_B + C(8, k) + K can be extracted for each image, and the feature distance d_{Ii,Ij} between image Ii and image Ij is computed from the q-th dimensional components of the features of the Ii-th and Ij-th images, 1 ≤ Ii ≤ N, 1 ≤ Ij ≤ N, where N denotes the total number of images in the image library. Let D = [d_{Ii,Ij}] be the resulting feature distance matrix: a smaller d_{Ii,Ij} in D indicates greater similarity, D is symmetric, and the main diagonal elements of D are 0. Q_R × Q_G × Q_B is the dimension of the color feature, C(8, k) the dimension of the LDP feature, and K the dimension of the SIFT bag-of-visual-words feature; the R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively;
Step3.2, the distance matrix D is normalized with formula (5) into an affinity matrix A, so that the values in the affinity matrix A lie between 0 and 1 and a larger value indicates greater similarity;
where the scaling term in the formula denotes the k_n-th largest value of row Ii in D;
Step3.3, the diffusion process is initialized as W_0 = P_kNN, where P_kNN is an N × N matrix computed by formula (6); P_kNN is then row-normalized so that the values of each row sum to between 0 and 1;
where a_{Ii,Ij} is the element in row Ii and column Ij of the affinity matrix A, and the scaling term denotes the k_n-th largest value of row Ii in A;
Step3.4, the transfer matrix is defined as T = P_kNN;
Step3.5, the diffusion-process matrix W is updated by W_{t+1} = T W_t T^T;
Step3.6, the element orderings of each row of W_t and W_{t+1} before and after the update are compared, the number of ordering changes r_i of each row is computed, and their average is obtained;
Step3.7, a threshold ε is set; when the average ordering change satisfies the threshold condition, the update of Step3.5 is stopped and the final diffusion process W is obtained, denoted by the matrix A*.
The step Step4 is specifically:
Step4.1, each row of A* is sorted in descending order and the corresponding column indices are recorded;
Step4.2, the values at the first N_p positions are replaced with the corresponding values in the matrix D;
Step4.3, the values at the first N_p positions are sorted in descending order again, producing a new ordering;
Step4.4, from the ordering of Step4.3 the similarity ranking between each query image and the other images in the database is obtained, which completes the image retrieval;
wherein N_p is set to be larger than the number of images L that the user's retrieval needs to return, and not exceeding 2L.
The beneficial effects of the invention are: the method effectively fuses the color histogram (Color Histogram, CH) feature, the local directional pattern (Local Directional Pattern, LDP) feature and the SIFT bag-of-visual-words (Bag of Visual Words, BoVW) feature, fully exploiting the respective advantages of the three features in describing color, texture and shape, so that the description has stronger discriminability; at the same time it skillfully combines low-level visual features with features based on higher-level image information, reducing the "semantic gap" between low-level visual features and high-level image semantics and thereby reflecting the internal characteristics of the image more accurately. Moreover, on the basis of this fused feature, the diffusion process (DP) is introduced to optimize the image feature distance matrix, and a reordering idea is proposed to address the problem that DP methods perform poorly when the number of images returned by a retrieval is small, because the similarity relations of the few images within a small neighborhood are described inaccurately. The fused feature of the proposed method is easy to extract and has relatively low complexity, and the whole retrieval process requires no image segmentation or training for image classification; it can effectively solve the problem of the low retrieval accuracy of current conventional retrieval methods based on low-level visual features and better meets users' practical needs for content-based image retrieval.
Description of the drawings
Fig. 1 is the flow chart of the image retrieval method proposed by the present invention;
Fig. 2 shows the Kirsch operator templates;
Fig. 3 is an example of the positions of the 8 directional response values in step Step1.2;
Fig. 4 shows the positions of the LDP binary code in step Step1.2;
Fig. 5 is an example calculation of the LDP value (k = 3) in step Step1.2.
Detailed description of the embodiments
Embodiment 1: as shown in Fig. 1, an image retrieval method based on multi-feature fusion and diffusion process reordering. This embodiment takes an image database composed of N (= 1000) images of size m × n (= 192 × 168) as an example; every image serves in turn as the query image, and retrieval is completed by obtaining the similarity between each query image and the other images in the database. The detailed process comprises: extracting the features of all images (Step1) and normalizing and fusing them (Step2); computing the distance matrix between the image features, which gives the similarity between every query image and the other images in the database (the smaller the distance, the more similar the images); then introducing the diffusion process to optimize this distance matrix (Step3); and finally reordering it to complete the retrieval (Step4).
In this retrieval process, the present invention proposes a multi-feature-fusion image feature extraction method, as well as a reordering method based on distance optimization that is applied when computing the distances between image features. In this embodiment a 1000 × 1000 matrix is finally formed, in which the element at row Ii and column Ij represents the similarity of the Ij-th image in the image library to the Ii-th query image, so retrieval for the Ii-th query image is completed by sorting row Ii in descending order.
The image retrieval method proceeds as follows:
Step1, image feature extraction;
Further, the image feature extraction can be arranged as follows:
Step1.1, extracting the color histogram feature of each image in the image library;
Let the quantization levels be Q_R = Q_G = Q_B = 4; the RGB color channels of the image are uniformly quantized by formula (7).
Here R, G, B denote the quantized values of the RGB color channels. The single-channel values of each pixel are then mapped by formula (8) to a value H(i, j) in the interval [0, 63], i.e. quantized into 64 color bins.
H(i, j) = 16R_{i,j} + 4G_{i,j} + B_{i,j}    (8)
Here i = 1, 2, ..., m; j = 1, 2, ..., n; and R_{i,j}, G_{i,j}, B_{i,j} denote the quantized values of the three color channels of image pixel (i, j). Finally all pixels of the image are traversed by formula (9), the number of occurrences of each value c is counted, and the 64-dimensional color feature F_Color is obtained.
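As an illustration of Step1.1, the following Python sketch computes the 64-bin color histogram. The uniform quantization of each 8-bit channel into four levels (formula (7)) and the normalization by the pixel count are assumptions, since those formulas are not reproduced in this text.

```python
import numpy as np

def color_histogram(img_rgb, q=4):
    """Step1.1 sketch: 64-bin color histogram with q = 4 levels per channel."""
    img = np.asarray(img_rgb, dtype=np.uint32)
    step = 256 // q                                # assumed uniform quantization (formula (7))
    r, g, b = img[..., 0] // step, img[..., 1] // step, img[..., 2] // step
    h = q * q * r + q * g + b                      # formula (8): H(i,j) = 16R + 4G + B for q = 4
    hist = np.bincount(h.ravel(), minlength=q ** 3).astype(np.float64)
    return hist / h.size                           # occurrence frequency of each color bin c
```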
Step1.2, extracting the LDP feature of each image in the image library;
With each pixel of the image as the center, its 8-neighborhood is convolved with each of the 3 × 3 Kirsch operator templates M_p (shown in Fig. 2), producing 8 directional response values m_p (p = 1, 2, ..., 8). The positions corresponding to the k largest directional responses are set to 1 and the others to 0, producing an 8-bit binary code which is converted to a decimal number as the LDP value of the current central element. This process can be computed by formula (10), where m_p is the directional response of the current p-th position, m_k is the k-th largest directional response, and b_p is the binary value of the p-th position. The specific calculation process is shown in Figs. 3-5; in the illustrated example (k = 3) the LDP binary code is 00010011 and the resulting LDP value is 19.
For a specific k the LDP value takes C(8, k) distinct values; with k = 3, 56 different LDP values h can be generated. Each pixel thus produces one LDP value. All pixel LDP values of the image are then traversed, the number of occurrences of each value is counted by formula (11), and the 56-dimensional LDP feature F_LDP is obtained, where LDP_3(i, j) denotes the LDP value of the current pixel (i, j) when k = 3.
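A sketch of the LDP extraction of Step1.2 follows. The Kirsch masks are the standard compass operators; taking absolute responses and the particular assignment of the eight directions to bit positions are assumptions, since Figs. 2-4 are not reproduced here.

```python
import numpy as np
from itertools import combinations
from scipy.ndimage import convolve

KIRSCH_EAST = np.array([[-3, -3, 5],
                        [-3,  0, 5],
                        [-3, -3, 5]])

def rotate45(mask):
    """Shift the outer ring of a 3x3 mask by one position (45-degree rotation)."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = mask.copy()
    for i, pos in enumerate(ring):
        out[pos] = mask[ring[(i - 1) % 8]]
    return out

def ldp_histogram(gray, k=3):
    """Step1.2 sketch: 56-bin LDP histogram for k = 3."""
    masks = [KIRSCH_EAST]
    for _ in range(7):
        masks.append(rotate45(masks[-1]))
    resp = np.stack([np.abs(convolve(gray.astype(np.float64), m)) for m in masks])
    order = np.argsort(-resp, axis=0)                      # directions sorted by response, per pixel
    codes = np.zeros(gray.shape, dtype=np.int64)
    for r in range(k):
        codes = codes | (1 << order[r].astype(np.int64))   # set the bits of the k strongest directions
    # the C(8,3) = 56 codes with exactly k ones define the histogram bins
    valid = sorted(sum(1 << b for b in comb) for comb in combinations(range(8), k))
    lut = {c: i for i, c in enumerate(valid)}
    hist = np.zeros(len(valid))
    for c, n in zip(*np.unique(codes, return_counts=True)):
        hist[lut[int(c)]] += n
    return hist / codes.size                               # LDP value frequencies (formula (11))
```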
Step1.3, extracting the SIFT bag-of-visual-words feature of each image in the image library;
Step1.3.1: convert the RGB image into a grayscale image;
Step1.3.2: divide the image into uniform 16 × 16 grid blocks with a step of 8 pixels, and extract the SIFT feature (128 dimensions) of the central element of each block.
Step1.3.3: cluster all image blocks with the K-means clustering method, producing K = 100 cluster centers v (v = 1, 2, ..., 100), i.e. each cluster center corresponds to one visual word (128 dimensions).
Step1.3.4: compute the distance from each block of an image to each cluster center and assign the block the visual-word index of its nearest cluster center.
Step1.3.5: traverse all blocks of each image, count the number of occurrences of each visual word by formula (12), and form the 100-dimensional SIFT bag-of-visual-words feature F_BoF.
Here N_patch denotes the total number of image blocks and I(g) denotes the index assigned to the g-th image block.
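The bag-of-visual-words pipeline of Steps 1.3.1-1.3.5 can be sketched as follows with OpenCV and scikit-learn. cv2.SIFT_create assumes an OpenCV build (version 4.4 or later) that ships SIFT, and the use of sklearn's KMeans is an implementation choice rather than the patent's prescription.

```python
import numpy as np
import cv2                                       # assumes OpenCV >= 4.4 with SIFT available
from sklearn.cluster import KMeans

def dense_sift_descriptors(gray, patch=16, step=8):
    """Steps 1.3.1-1.3.2 sketch: one 128-D SIFT descriptor per 16x16 block (8-pixel step).
    gray: uint8 grayscale image."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x + patch // 2), float(y + patch // 2), float(patch))
           for y in range(0, gray.shape[0] - patch + 1, step)
           for x in range(0, gray.shape[1] - patch + 1, step)]
    _, desc = sift.compute(gray, kps)
    return desc                                  # shape (N_patch, 128)

def bovw_histograms(descs_per_image, k=100):
    """Steps 1.3.3-1.3.5 sketch: K-means codebook of k = 100 visual words and
    one k-bin word-frequency histogram per image."""
    codebook = KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs_per_image))
    hists = []
    for desc in descs_per_image:                 # one descriptor array per image
        words = codebook.predict(desc)           # nearest cluster centre = visual word index
        hists.append(np.bincount(words, minlength=k) / len(words))
    return np.array(hists)                       # (N_images, 100) BoF features
```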
Step2, normalizing and fusing the image features extracted in step Step1;
Further, Step2 can be arranged as follows: through step Step1 above, a 64-dimensional color feature, a 56-dimensional LDP feature and a 100-dimensional BoF feature are extracted for each image. The three extracted features are normalized and merged using formula (13), forming a 220-dimensional fused feature F.
Here F_Color(ii) denotes the ii-th component of the color feature, F_LDP(jj) the jj-th component of the LDP feature, and F_BoF(kk) the kk-th component of the BoF feature.
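A minimal sketch of the fusion of Step2 follows; scaling each histogram to unit sum before concatenation is an assumed reading of formula (13), which is not reproduced in this text.

```python
import numpy as np

def fuse_features(f_color, f_ldp, f_bof):
    """Step2 sketch: normalise and concatenate the 64-D color, 56-D LDP and
    100-D BoF histograms into one 220-D descriptor (unit-sum scaling assumed)."""
    parts = [np.asarray(p, dtype=np.float64) for p in (f_color, f_ldp, f_bof)]
    parts = [p / p.sum() if p.sum() > 0 else p for p in parts]
    return np.concatenate(parts)                 # fused feature F, 220-dimensional
```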
Step3, optimizing the feature distances of the fused image features obtained in step Step2 by means of a diffusion process;
Further, Step3 can be arranged as follows:
Step3.1: through Step2, a 220-dimensional image feature is extracted for each image, and the feature distance d_{Ii,Ij} between image Ii and image Ij is computed from the q-th dimensional components of the features of the Ii-th and Ij-th images, 1 ≤ Ii ≤ N, 1 ≤ Ij ≤ N. Let D = [d_{Ii,Ij}] be the resulting feature distance matrix: a smaller d_{Ii,Ij} in D indicates greater similarity, D is symmetric, and the main diagonal elements of D are 0.
Step3.2: the distance matrix D is normalized with formula (14) into an affinity matrix A, so that the values in A lie between 0 and 1 and a larger value indicates greater similarity.
Here the scaling term in the formula denotes the k_n-th largest value of row Ii in D; in this embodiment k_n = 5.
Step3.3: initialize the diffusion process as W_0 = P_kNN, where P_kNN is an N × N matrix computed by formula (15); P_kNN is then row-normalized so that the values of each row sum to between 0 and 1.
Here a_{Ii,Ij} is the element in row Ii and column Ij of the matrix A, and the scaling term denotes the k_n-th largest value of row Ii in A.
Step3.4: define the transfer matrix T = P_kNN.
Step3.5: update the diffusion-process matrix W by W_{t+1} = T W_t T^T.
Step3.6: compare the element orderings of each row of W_t and W_{t+1} before and after the update, compute the number of ordering changes r_i of each row, and obtain their average.
Step3.7: set the threshold ε = 0.3; when the average ordering change satisfies the threshold condition, stop the update of Step3.5 and obtain the final diffusion process W, denoted by the matrix A*.
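Steps 3.2-3.7 can be sketched as follows. The locally scaled Gaussian affinity standing in for formula (14), the row-wise kNN truncation standing in for formula (15), and the exact form of the stopping test against ε are assumptions, since those formulas are not reproduced in this text; the update rule W_{t+1} = T W_t T^T follows the description.

```python
import numpy as np

def diffusion(D, k_n=5, eps=0.3, max_iter=50):
    """Steps 3.2-3.7 sketch: diffusion-process optimisation of the distance matrix D."""
    N = D.shape[0]
    sigma = np.sort(D, axis=1)[:, k_n]                           # k_n-th neighbour distance per row
    A = np.exp(-D / (sigma[:, None] * sigma[None, :] + 1e-12))   # assumed form of formula (14)
    P = np.zeros_like(A)                                         # assumed form of formula (15):
    rows = np.arange(N)[:, None]                                 # keep the k_n largest affinities per row
    idx = np.argsort(-A, axis=1)[:, :k_n]
    P[rows, idx] = A[rows, idx]
    P /= P.sum(axis=1, keepdims=True)                            # row-normalise P_kNN
    W, T = P.copy(), P                                           # Steps 3.3-3.4: W0 = P_kNN, T = P_kNN
    rank = np.argsort(-W, axis=1)
    for _ in range(max_iter):
        W = T @ W @ T.T                                          # Step3.5: W_{t+1} = T W_t T^T
        new_rank = np.argsort(-W, axis=1)
        r_bar = (new_rank != rank).sum(axis=1).mean()            # Step3.6: mean per-row ordering change
        rank = new_rank
        if r_bar / N < eps:                                      # Step3.7: stopping test (assumed form)
            break
    return W                                                     # A*, the optimised similarity matrix
```

Here D is the Step3.1 distance matrix; for example, scipy.spatial.distance.cdist(features, features, 'cityblock') over the 220-dimensional fused features would serve, the exact distance used in Step3.1 not being reproduced in this text.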
Step4, reordering the optimized results of step Step3 and retrieving according to the reordered results.
Further, Step4 can be arranged as follows:
Step4.1: sort each row of A* in descending order and record the corresponding column indices;
Step4.2: replace the values at the first N_p positions with the corresponding values in the matrix D;
Step4.3: sort the values at the first N_p positions in descending order again, obtaining a new ordering;
Step4.4: from the above ordering the similarity ranking between each query image and the other images in the database is obtained, which completes the image retrieval.
Here N_p can be set larger than the number of images L that the user's retrieval needs to return. Specifically, if a retrieval generally needs to return L = 100 similar images, N_p needs to be set to a number greater than 100, usually not more than 2L.
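A sketch of the reordering of Step4 follows. Re-sorting the first N_p candidates by ascending raw distance (smaller D meaning more similar) is one interpretation of the re-sort described in Steps 4.2-4.3.

```python
import numpy as np

def rerank(A_star, D, n_p=150):
    """Step4 sketch: re-rank each query row of A*, refining the order of the
    top n_p candidates with the original distances in D (n_p between L and 2L)."""
    rankings = []
    for q in range(A_star.shape[0]):
        order = np.argsort(-A_star[q])            # Step4.1: descending diffused similarity
        top, rest = order[:n_p], order[n_p:]
        top = top[np.argsort(D[q, top])]          # Steps 4.2-4.3: re-sort the top n_p by raw distance
        rankings.append(np.concatenate([top, rest]))
    return np.array(rankings)                     # row q = retrieval order for query image q
```

For L = 100 returned images, n_p would be chosen between 100 and 200, e.g. rerank(A_star, D, n_p=150).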
The specific embodiments of the present invention have been explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by those of ordinary skill in the art, various changes can also be made without departing from the concept of the present invention.

Claims (8)

1. An image retrieval method based on multi-feature fusion and diffusion process reordering, characterized in that the method comprises:
Step1, image feature extraction;
Step2, normalizing and fusing the image features extracted in step Step1;
Step3, optimizing the feature distances of the fused image features obtained in step Step2 by means of a diffusion process;
Step4, reordering the optimized results of step Step3 and retrieving according to the reordered results.
2. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 1, characterized in that the step Step1 is specifically:
Step1.1, extracting the color feature F_Color of each image in the image library;
Step1.2, extracting the LDP feature F_LDP of each image in the image library;
Step1.3, extracting the SIFT bag-of-visual-words feature F_BoF of each image in the image library.
3. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 2, characterized in that the step Step1.1 is specifically:
the R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively, producing Q_R × Q_G × Q_B new color bins; all pixels of the image are traversed by formula (1), the number of occurrences of each value c is counted, and the color feature F_Color = [F_Color(c)] of dimension Q_R × Q_G × Q_B is obtained;
in the formula, H(i, j) denotes the value in the interval [0, Q_R × Q_G × Q_B − 1] to which the single-channel values of each pixel are mapped, the image size is m × n, and c = 0, 1, ..., Q_R × Q_G × Q_B − 1.
4. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 2, characterized in that the step Step1.2 is specifically:
with each pixel of the image as the center, its 8-neighborhood is convolved with the 3 × 3 Kirsch operators to produce 8 directional response values; the positions corresponding to the k largest directional responses are set to 1 and the others to 0, producing an 8-bit binary code that is converted to a decimal number as the LDP value of the current central element; for a specific k the LDP value takes C(8, k) distinct values h, so each pixel produces one LDP value; all pixel LDP values of the image are then traversed, the number of occurrences of each value is counted by formula (2), and the LDP feature F_LDP of dimension C(8, k) is obtained;
in the formula, LDP_k(i, j) denotes the LDP value of image pixel (i, j) when the k largest directional responses are selected, and the image size is m × n.
5. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 2, characterized in that the step Step1.3 is specifically:
the image is divided into uniform blocks and the SIFT feature of the center pixel of each block is extracted; all image-block central elements are clustered with the K-means clustering method, producing K cluster centers, each corresponding to one visual word; the distance from each block of an image to each cluster center is computed and the block is assigned the visual-word index of its nearest cluster center; all blocks of each image are traversed and the number of occurrences of each visual word is counted by formula (3), forming the K-dimensional SIFT bag-of-visual-words feature F_BoF = [F_BoF(v)]:
in the formula, I(g) denotes the index assigned to the g-th image block, N_patch denotes the total number of image blocks, v = 1, 2, ..., K, and the image size is m × n.
6. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 1, characterized in that the step Step2 is specifically: the image features extracted in Step1 are normalized and fused using formula (4):
where F denotes the final fused feature, F_Color(ii) denotes the ii-th component of the color feature F_Color, F_LDP(jj) the jj-th component of the LDP feature F_LDP, and F_BoF(kk) the kk-th component of the SIFT bag-of-visual-words feature F_BoF; Q_R × Q_G × Q_B is the dimension of the color feature, C(8, k) the dimension of the LDP feature, and K the dimension of the SIFT bag-of-visual-words feature; the R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively.
7. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 1, characterized in that the step Step3 is specifically:
Step3.1, after step Step2 an image feature of dimension Q_R × Q_G × Q_B + C(8, k) + K can be extracted for each image, and the feature distance d_{Ii,Ij} between image Ii and image Ij is computed from the q-th dimensional components of the features of the Ii-th and Ij-th images, 1 ≤ Ii ≤ N, 1 ≤ Ij ≤ N, N denoting the total number of images in the image library; let D = [d_{Ii,Ij}] be the resulting feature distance matrix: a smaller d_{Ii,Ij} in D indicates greater similarity, D is symmetric, and the main diagonal elements of D are 0; Q_R × Q_G × Q_B is the dimension of the color feature, C(8, k) the dimension of the LDP feature, and K the dimension of the SIFT bag-of-visual-words feature; the R, G, B color channels of the image are quantized into Q_R, Q_G, Q_B levels respectively;
Step3.2, the distance matrix D is normalized with formula (5) into an affinity matrix A, so that the values in the affinity matrix A lie between 0 and 1 and a larger value indicates greater similarity;
where the scaling term in the formula denotes the k_n-th largest value of row Ii in D;
Step3.3, the diffusion process is initialized as W_0 = P_kNN, where P_kNN is an N × N matrix computed by formula (6); P_kNN is then row-normalized so that the values of each row sum to between 0 and 1;
where a_{Ii,Ij} is the element in row Ii and column Ij of the affinity matrix A, and the scaling term denotes the k_n-th largest value of row Ii in A;
Step3.4, the transfer matrix is defined as T = P_kNN;
Step3.5, the diffusion-process matrix W is updated by W_{t+1} = T W_t T^T;
Step3.6, the element orderings of each row of W_t and W_{t+1} before and after the update are compared, the number of ordering changes r_i of each row is computed, and their average is obtained;
Step3.7, a threshold ε is set; when the average ordering change satisfies the threshold condition, the update of Step3.5 is stopped and the final diffusion process W is obtained, denoted by the matrix A*.
8. The image retrieval method based on multi-feature fusion and diffusion process reordering according to claim 7, characterized in that the step Step4 is specifically:
Step4.1, each row of A* is sorted in descending order and the corresponding column indices are recorded;
Step4.2, the values at the first N_p positions are replaced with the corresponding values in the matrix D;
Step4.3, the values at the first N_p positions are sorted in descending order again, producing a new ordering;
Step4.4, from the ordering of Step4.3 the similarity ranking between each query image and the other images in the database is obtained, which completes the image retrieval;
wherein N_p is set to be larger than the number of images L that the user's retrieval needs to return, and not exceeding 2L.
CN201810244844.9A 2018-03-23 2018-03-23 Image retrieval method based on multi-feature fusion and diffusion process reordering Active CN108536772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810244844.9A CN108536772B (en) 2018-03-23 2018-03-23 Image retrieval method based on multi-feature fusion and diffusion process reordering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810244844.9A CN108536772B (en) 2018-03-23 2018-03-23 Image retrieval method based on multi-feature fusion and diffusion process reordering

Publications (2)

Publication Number Publication Date
CN108536772A (en) 2018-09-14
CN108536772B (en) 2020-08-14

Family

ID=63485068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810244844.9A Active CN108536772B (en) 2018-03-23 2018-03-23 Image retrieval method based on multi-feature fusion and diffusion process reordering

Country Status (1)

Country Link
CN (1) CN108536772B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102324102A (en) * 2011-10-08 2012-01-18 北京航空航天大学 Method for automatically filling structure information and texture information of hole area of image scene
US20160042252A1 (en) * 2014-08-05 2016-02-11 Sri International Multi-Dimensional Realization of Visual Content of an Image Collection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fan Yang, et al.: "Re-ranking by Multi-feature Fusion with Diffusion for Image Retrieval", IEEE Computer Society *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816037A (en) * 2019-01-31 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for extracting a feature map of an image
CN111508409A (en) * 2019-01-31 2020-08-07 联咏科技股份有限公司 Driving device of display panel and operation method thereof

Also Published As

Publication number Publication date
CN108536772B (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN106126581B (en) Cartographical sketching image search method based on deep learning
CN108920720A (en) The large-scale image search method accelerated based on depth Hash and GPU
CN110059206A (en) A kind of extensive hashing image search method based on depth representative learning
CN104361096B (en) The image search method of a kind of feature based rich region set
CN101551823A (en) Comprehensive multi-feature image retrieval method
CN106909887A (en) A kind of action identification method based on CNN and SVM
CN103955703A (en) Medical image disease classification method based on naive Bayes
CN107085607A (en) A kind of image characteristic point matching method
CN110175249A (en) A kind of search method and system of similar pictures
CN104036012A (en) Dictionary learning method, visual word bag characteristic extracting method and retrieval system
CN103955952A (en) Extraction and description method for garment image color features
CN108154158B (en) Building image segmentation method for augmented reality application
CN107918761A (en) A kind of single sample face recognition method based on multiple manifold kernel discriminant analysis
CN106682681A (en) Recognition algorithm automatic improvement method based on relevance feedback
CN104850859A (en) Multi-scale analysis based image feature bag constructing method
CN106649665A (en) Object-level depth feature aggregation method for image retrieval
CN107577994A (en) A kind of pedestrian based on deep learning, the identification of vehicle auxiliary product and search method
Xing et al. Oracle bone inscription detection: a survey of oracle bone inscription detection based on deep learning algorithm
CN108536772A (en) A kind of image search method to be reordered based on multiple features fusion and diffusion process
CN111125396B (en) Image retrieval method of single-model multi-branch structure
CN110188864B (en) Small sample learning method based on distribution representation and distribution measurement
CN108647726A (en) A kind of image clustering method
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN108897847A (en) Multi-GPU Density Peak Clustering Method Based on Locality Sensitive Hashing
Dong et al. Color space quantization-based clustering for image retrieval

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant