CN109299305A - Spatial image retrieval system and retrieval method based on multi-feature fusion - Google Patents

Spatial image retrieval system and retrieval method based on multi-feature fusion (Download PDF)

Info

Publication number
CN109299305A
CN109299305A (application CN201811273146.8A)
Authority
CN
China
Prior art keywords
image
module
fusion
feature
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811273146.8A
Other languages
Chinese (zh)
Inventor
王鑫
路翰霖
王春枝
王毅超
吴盼
蔡文成
周方禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Technology filed Critical Hubei University of Technology
Priority to CN201811273146.8A priority Critical patent/CN109299305A/en
Publication of CN109299305A publication Critical patent/CN109299305A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention belongs to the field of image retrieval technology and discloses a spatial image retrieval system and retrieval method based on multi-feature fusion. The retrieval system based on multi-feature fusion comprises an input module, a main control module, an image detection module, a feature extraction module, a similarity measurement module, a feature fusion module, a matching module and a display module. Through the feature extraction module, the invention uses a large gradient-operator matrix, divides the gradient range into more intervals, and omits the sliding computation of HOG cells, which greatly reduces the amount of computation and increases overall speed by roughly a factor of four, making the method well suited to applications with strict real-time requirements. Meanwhile, the matching module classifies the fused feature map of a pair of heterologous image blocks rather than a concatenated feature vector, which is conducive to improving network performance. The proposed deep-learning-based heterologous image matching method is superior to other methods not only in matching performance but also in training efficiency.

Description

Spatial image retrieval system and retrieval method based on multi-feature fusion
Technical field
The invention belongs to the field of image retrieval technology, and more particularly relates to a spatial image retrieval system and retrieval method based on multi-feature fusion.
Background art
The development of image retrieval has proceeded from the simple to the complex and from the rudimentary to the advanced, evolving from early text-based queries to content-based image retrieval. At the same time, as the understanding and recognition of images has deepened, retrieval based on image semantics has been proposed; it makes full use of the semantic information of images and improves the capability of image retrieval systems. In addition, to address the problem of the semantic gap, feedback-based retrieval techniques have been proposed that use human-computer interaction to improve the system and the accuracy of search results. Finally, with the development of artificial intelligence and information technology, intelligent knowledge-based retrieval systems have become the direction of development in the field of information retrieval. Knowledge-based retrieval combines techniques based on visual features with techniques based on textual semantics; by building a knowledge base, it automatically extracts semantics and image features and fully takes into account the influence of user characteristics on the retrieval system, which is the inevitable direction for building efficient, practical and fast image retrieval systems. However, conventional image retrieval involves a large amount of computation; at the same time, the feature vectors extracted by convolutional neural networks lose a large amount of the spatial information of the image, so the final image-block matching accuracy is not high.
In summary, the problems of the existing technology are as follows:
The amount of computation in conventional image retrieval is large; at the same time, the feature vectors extracted by convolutional neural networks lose a large amount of the spatial information of the image, so the final image-block matching accuracy is not high.
Existing methods for detecting and recognizing shape similarity include probability-statistics algorithms, least-mean-square-error methods on eigenvalues, and weighted-average algorithms over necessary conditions of geometric appearance features. Although they achieve a certain level of efficiency, they also have shortcomings: the relationship between the algorithm and the resolution of the matched images is not intuitive; the algorithms are complex, which leads to a large amount of data processing and high operating cost; and treating all features uniformly in the similarity analysis lets changes in important geometric features of the figure distort the overall similarity, so stability and accuracy suffer a certain deviation. In the prior art, predicting the quality of a whole image from local quality scores also gives poor quality-metric results.
Summary of the invention
In view of the problems in the existing technology, the present invention provides a spatial image retrieval system and retrieval method based on multi-feature fusion.
The invention is realized as follows. A spatial image retrieval method based on multi-feature fusion comprises:
calculating the similarity between the related images and the query image using a similarity program; specifically, an appropriate threshold is set according to the aspect ratio of the minimum bounding rectangle of a figure and used for filtering; a threshold is set according to the minimum ratio of each side length to the perimeter in the source figure, and the anomalous (singular) parts of the target figure are removed; the number of edges of the target figure is reduced so that it has the same number of edges as the source figure; and the Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source figure and the target figure are obtained;
carrying out multi-scale feature fusion according to the similarity between images using a fusion program, to obtain the final similarity between the query image and each candidate image; specifically:
an objective video quality evaluation model OM is chosen; by comparing the original reference video with the distorted video, the predicted score of every frame of the distorted video is calculated, the acquired frame-level scores are recorded as a vector X, and the total number of frames of the video is recorded as N;
with a sliding window of length winLen, the acquired frame-level quality scores are filtered, that is, the frame-level score of the n-th frame after processing is the mean of the frame-level scores of frames [n-winLen+1, n]; the frame-level scores after sliding-window processing are recorded as a vector WX;
WX is sorted in ascending order and the sorted result is recorded as WX'; the average of the worst p% of frames is taken as the quality metric score of the entire video sequence, i.e. after sorting, the mean of the smallest p% of frames is the final measurement result (a sketch of this sliding-window pooling follows).
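The sliding-window filtering and worst-p% pooling described above can be sketched minimally as follows, assuming the frame-level scores X have already been produced by the objective model OM; the function name and the default values of winLen and p are illustrative choices, not taken from the patent.

```python
import numpy as np

def temporal_pool(X, winLen=5, p=10.0):
    """Sliding-window mean followed by worst-p% pooling of frame-level scores.

    X      : 1-D array of per-frame quality scores (length N).
    winLen : window length; frame n is replaced by the mean of frames
             [n - winLen + 1, n] (clipped at the start of the video).
    p      : percentage of the worst (lowest) windowed scores to average.
    """
    X = np.asarray(X, dtype=float)
    N = len(X)

    # Sliding-window mean: WX(n) = mean of X over [n - winLen + 1, n].
    WX = np.array([X[max(0, n - winLen + 1):n + 1].mean() for n in range(N)])

    # Sort ascending (WX') and average the worst p% of frames.
    WX_sorted = np.sort(WX)
    k = max(1, int(np.ceil(N * p / 100.0)))   # number of worst frames kept
    return WX_sorted[:k].mean()               # final quality score

# Example: pool 100 synthetic frame scores.
scores = np.random.default_rng(0).uniform(30, 90, size=100)
print(temporal_pool(scores, winLen=8, p=10.0))
```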
Further, the method comprises:
First, the feature matrices P_E and Q_E of the source figure P and the target figure Q are built in the counterclockwise direction:
P_E = [P_1^T  P_2^T  …  P_{2N-1}^T  P_{2N}^T];
Q_E = [Q_1^T  Q_2^T  …  Q_{2N-1}^T  Q_{2N}^T];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are
d(x, y) = sqrt( Σ_i (x_i − y_i)² ),  sim(x, y) = (x · y) / (‖x‖ · ‖y‖);
On the basis of d(x, y) and sim(x, y), two matrices D and S are defined such that D_ij = d(P_i, Q_j) and S_ij = sim(P_i, Q_j);
The extreme values of D and S are then found by letting
Eu_e = min{ D_ij }, 1 ≤ i, j ≤ 2N;  Sim_e = max{ S_ij }, 1 ≤ i, j ≤ 2N;
Then the feature matrices of figures P and Q are constructed again in the clockwise direction, the above calculation is repeated, and the corresponding extreme values between the most similar vectors of the two feature matrices, Eu_c and Sim_c, are found;
Finally let Eu = min{ Eu_e, Eu_c };
Sim = min{ Sim_e, Sim_c };
Eu and Sim are the Euclidean distance and the maximum similarity coefficient of the most similar corresponding vectors of the two figures P and Q (see the sketch below).
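A minimal sketch of the basic feature-matrix comparison described above, assuming the columns of P_E and Q_E are already-constructed feature vectors of equal dimension; the function and variable names are illustrative.

```python
import numpy as np

def basic_similarity(PE, QE):
    """Minimum Euclidean distance and maximum cosine similarity between
    the column vectors of two feature matrices PE and QE (shape d x 2N)."""
    # Pairwise Euclidean distances D[i, j] = d(P_i, Q_j).
    diff = PE[:, :, None] - QE[:, None, :]
    D = np.sqrt((diff ** 2).sum(axis=0))
    # Pairwise cosine similarities S[i, j] = sim(P_i, Q_j).
    norms = np.linalg.norm(PE, axis=0)[:, None] * np.linalg.norm(QE, axis=0)[None, :]
    S = (PE.T @ QE) / norms
    return D.min(), S.max()   # Eu_e, Sim_e

# Example with random stand-in feature matrices (dimension d=4, 2N=10 columns).
rng = np.random.default_rng(0)
PE, QE = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
Eu_e, Sim_e = basic_similarity(PE, QE)
# The same routine is run on the clockwise-ordered matrices to get Eu_c and Sim_c,
# and the final Eu / Sim are taken over the two runs as described above.
```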
After obtaining, from the feature matrices of the source figure and the target figure, the Euclidean distance and the maximum similarity coefficient of the most similar vectors, an enhancement of the calculated result is also carried out, comprising:
deforming the initial vector one or more times: on the basis of constructing the initial vector from the sequence of adjacent interior angles, geometric features of the figure are added, and the ratios of adjacent interior angles taken in order are used as a new initial vector; the initial vector is further subjected to one or more nonlinear transformations, for example a square-root transformation;
carrying out the similarity calculation several times on the deformed initial vectors and finally taking a weighted average; the evaluation formulas for the Euclidean distance Eu and the similarity coefficient Sim are
Eu(P, Q) = Σ_{i=1}^{n} k_i · Eu_i  and  Sim(P, Q) = Σ_{i=1}^{n} k_i · Sim_i,
where n is the number of vector deformations, k_i are the weight coefficients, Eu_i and Sim_i are the Euclidean distance and similarity coefficient after the i-th deformation, and Eu(P, Q) is the Euclidean-distance evaluation; here n = 4 and k_i = 0.25 (a sketch of this weighted evaluation follows).
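The weighted-average evaluation can be sketched as follows, assuming the per-deformation values Eu_i and Sim_i have already been computed by repeating the basic comparison on each deformed vector; the uniform weights (k_i = 0.25 for n = 4) follow the values stated above.

```python
def weighted_evaluation(Eu_list, Sim_list, weights=None):
    """Weighted average of per-deformation distances and similarities.

    Eu_list, Sim_list : values obtained after each of the n deformations.
    weights           : k_i coefficients; defaults to uniform 1/n (0.25 for n=4).
    """
    n = len(Eu_list)
    if weights is None:
        weights = [1.0 / n] * n
    Eu = sum(k * e for k, e in zip(weights, Eu_list))
    Sim = sum(k * s for k, s in zip(weights, Sim_list))
    return Eu, Sim
```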
Multi-scale feature fusion is carried out according to the similarity between images using the fusion program to obtain the final similarity between the query image and each candidate image;
All the frame-level scores calculated by the OM model are processed with the sliding window in turn, that is:
WX(n) = (1 / winLen) · Σ_{t = n − winLen + 1}^{n} X(t),
where winLen is the window length of the sliding-window filter and is a parameter to be tuned, X(t) is the quality score of frame t, and WX(n) is the quality score of the n-th frame after sliding-window processing;
The predicted frame-level scores are fused using the temporal-information fusion method based on inter-frame correlation and worst time slots, giving the final predicted score
OM_winPooling = (1 / ⌈N · p%⌉) · Σ_{t=1}^{⌈N · p%⌉} WX'(t),
where p% is a parameter to be tuned, N is the total number of frames of the video, WX'(t) is the t-th frame-level score after sorting in ascending order, and OM_winPooling is the final quality assessment result of the video.
Further, the spatial image retrieval method based on multi-feature fusion comprises:
Step 1: inputting retrieval image information through the input module using a keyboard;
Step 2: detecting, by the main control module through the image detection module using a detection program, the corresponding image information according to the input retrieval information;
Step 3: extracting, through the feature extraction module using an extraction program, the original image features of the related images and the query image among the detected images, the original image features including the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF);
Step 4: calculating the similarity between the related images and the query image through the similarity measurement module using the similarity program; carrying out multi-scale feature fusion according to the similarity between images through the feature fusion module using the fusion program, to obtain the final similarity between the query image and each candidate image;
Step 5: retrieving the target image according to the final similarity through the matching module using a matching program;
Step 6: showing the retrieved target image on a display through the display module.
Further, the extraction method of the feature extraction module is as follows:
(1) setting the size of the gradient-operator matrix;
(2) calculating the gradient of each pixel of the gradient-operator matrix;
(3) determining the gradient interval to which the gradient of each pixel belongs;
(4) calculating the gradient length of each pixel from its gradient;
(5) calculating the block eigenvalue;
The step of calculating the gradient of each pixel of the gradient-operator matrix includes:
calculating the initial gray level G_0(x, y) of each pixel;
applying a Gamma transformation to the initial gray level G_0(x, y) to obtain the optimized gray level G(x, y);
calculating, from the optimized gray level G(x, y) of each pixel and the gradient operator G_o of each pixel in the X and Y directions, the gradients d_x and d_y of each pixel in the X and Y directions;
Gradient of each pixel in the X direction:
d_x = 3·G(x+3, y) + 2·G(x+2, y) + G(x+1, y) − 3·G(x−3, y) − 2·G(x−2, y) − G(x−1, y),
where G(x+1, y), G(x+2, y) and G(x+3, y) denote the optimized gray levels of the first, second and third pixels after the center pixel in the horizontal direction, and G(x−1, y), G(x−2, y) and G(x−3, y) denote the optimized gray levels of the first, second and third pixels before the center pixel in the horizontal direction;
Gradient of each pixel in the Y direction:
d_y = 3·G(x, y+3) + 2·G(x, y+2) + G(x, y+1) − G(x, y−1) − 2·G(x, y−2) − 3·G(x, y−3),
where G(x, y+1), G(x, y+2) and G(x, y+3) denote the optimized gray levels of the first, second and third pixels after the center pixel in the vertical direction, and G(x, y−1), G(x, y−2) and G(x, y−3) denote the optimized gray levels of the first, second and third pixels before the center pixel in the vertical direction. A sketch of this gradient computation is given below.
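The following is a minimal sketch of the Gamma correction and the extended 7-tap gradient operator described above, using the coefficients (3, 2, 1) on each side of the center pixel; the gamma value of 0.5 and the zero border are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def pixel_gradients(gray, gamma=0.5):
    """Gamma-correct an image and compute the extended X/Y gradients.

    gray : 2-D array of initial gray levels G0(x, y), indexed [y, x].
    Returns (dx, dy); a 3-pixel border is left at zero because the operator
    reaches three pixels away from the center.
    """
    g = np.power(gray.astype(float) / 255.0, gamma)   # optimized gray level G(x, y)
    dx = np.zeros_like(g)
    dy = np.zeros_like(g)
    # dx = 3*G(x+3,y) + 2*G(x+2,y) + G(x+1,y) - 3*G(x-3,y) - 2*G(x-2,y) - G(x-1,y)
    dx[:, 3:-3] = (3 * g[:, 6:] + 2 * g[:, 5:-1] + g[:, 4:-2]
                   - 3 * g[:, :-6] - 2 * g[:, 1:-5] - g[:, 2:-4])
    # dy = 3*G(x,y+3) + 2*G(x,y+2) + G(x,y+1) - G(x,y-1) - 2*G(x,y-2) - 3*G(x,y-3)
    dy[3:-3, :] = (3 * g[6:, :] + 2 * g[5:-1, :] + g[4:-2, :]
                   - g[2:-4, :] - 2 * g[1:-5, :] - 3 * g[:-6, :])
    return dx, dy

# Example on a random 32x32 gray image.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
dx, dy = pixel_gradients(img)
```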
Further, the matching process of the matching module includes:
1) using the VIS-NIR data set, making the data set of heterologous images to be matched, obtaining one training set and eight test sets;
2) preprocessing all the heterologous images to be matched to obtain preprocessed heterologous images;
3) obtaining image-block feature maps: concatenating the image block A and the image block B of each pair of preprocessed heterologous images side by side, extracting features from the concatenated input with an improved VGG network to obtain the feature map of the input image, then splitting the resulting feature map into left and right halves, obtaining a feature map V corresponding to image block A and a feature map N corresponding to image block B;
4) feature-map fusion: taking the difference between the feature map V and the feature map N obtained in step 3), normalizing the difference, and obtaining the fused feature map;
5) training the image matching network: performing binary classification on the fused feature map obtained in step 4) with a fully connected layer and a cross-entropy loss function, obtaining the weights of the matching network;
6) predicting the matching probability: loading the matching-network weights trained in step 5) into the model, reading all the test-set data in turn, and obtaining the matched/unmatched prediction values output by the softmax classifier;
The feature map V and the feature map N obtained in step 3) are also passed through global average pooling respectively, obtaining a feature vector v corresponding to image A and a feature vector n corresponding to image B;
According to the obtained feature vectors v and n, a contrastive loss function is used to maximize the average Euclidean distance between the feature vectors of non-matching image blocks and to minimize the average Euclidean distance between the feature vectors of matching image blocks;
The calculation of the contrastive loss comprises the following steps:
A. denoting the feature vectors of the feature map V and the feature map N after global average pooling as v and n respectively, and computing the average Euclidean distance D(n, v) of the feature vectors, where k is the dimension of the feature vectors;
B. using the contrastive loss function of formula (1) to maximize the average Euclidean distance between the feature vectors of non-matching image blocks and to minimize the average Euclidean distance between the feature vectors of matching image blocks,
where y is the true label of the input data, Q is a constant, e is the natural constant, and L(y, n, v) is the contrastive loss function. A sketch of this contrastive branch is given below.
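Because the exact expressions for D(n, v) and the contrastive loss of formula (1) appear only as equation images in the original, the sketch below assumes a common contrastive-loss form that also uses a constant Q and the natural constant e; the label convention (y = 1 for a matching pair) and the choice of dividing the Euclidean norm by the dimension k are likewise assumptions, not the patent's stated definitions.

```python
import numpy as np

def average_euclidean_distance(n, v):
    """Average Euclidean distance between two k-dimensional feature vectors
    (assumed form: Euclidean norm divided by the dimension k)."""
    k = len(v)
    return np.linalg.norm(np.asarray(n) - np.asarray(v)) / k

def contrastive_loss(y, n, v, Q=100.0):
    """One common contrastive-loss form (Chopra et al., 2005) with constant Q,
    used here only as a stand-in for the patent's formula (1).
    y = 1 for a matching pair, y = 0 for a non-matching pair (assumed)."""
    D = average_euclidean_distance(n, v)
    match_term = y * (2.0 / Q) * D ** 2                          # pulls matching pairs together
    mismatch_term = (1 - y) * 2.0 * Q * np.exp(-2.77 * D / Q)    # pushes non-matching pairs apart
    return match_term + mismatch_term

# Example: feature vectors after global average pooling of maps V and N.
v = np.random.default_rng(1).normal(size=512)
n = v + 0.05 * np.random.default_rng(2).normal(size=512)
print(contrastive_loss(1, n, v))   # small for a matching pair
```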
Another object of the present invention is to provide a computer program for spatial image retrieval based on multi-feature fusion, the computer program implementing the spatial image retrieval method based on multi-feature fusion.
Another object of the present invention is to provide a terminal that at least carries a controller implementing the spatial image retrieval method based on multi-feature fusion.
Another object of the present invention is to provide a computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to execute the spatial image retrieval method based on multi-feature fusion.
Another object of the present invention is to provide a spatial image retrieval system based on multi-feature fusion that implements the spatial image retrieval method based on multi-feature fusion, the system comprising:
an input module, connected to the main control module, for inputting retrieval image information through a keyboard;
a main control module, connected to the input module, the image detection module, the feature extraction module, the similarity measurement module, the feature fusion module, the matching module and the display module, for controlling the normal operation of each module through a single-chip microcomputer;
an image detection module, connected to the main control module, for detecting the corresponding image information according to the input retrieval information through a detection program;
a feature extraction module, connected to the main control module, for extracting, through an extraction program, the original image features of the related images and the query image among the detected images, the original image features including the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF);
a similarity measurement module, connected to the main control module, for calculating the similarity between the related images and the query image through a similarity program;
a feature fusion module, connected to the main control module, for carrying out multi-scale feature fusion according to the similarity between images through a fusion program to obtain the final similarity between the query image and each candidate image;
a matching module, connected to the main control module, for retrieving the target image according to the final similarity through a matching program;
a display module, connected to the main control module, for showing the retrieved target image through a display.
Another object of the present invention is to provide a medical examination device that at least carries the spatial image retrieval system based on multi-feature fusion.
The advantages and positive effects of the present invention are as follows:
Through the feature extraction module, the present invention uses a large gradient-operator matrix, divides the gradient range into more intervals, and omits the sliding computation of HOG cells, which greatly reduces the amount of computation and increases overall speed by roughly a factor of four, making the method well suited to applications with strict real-time requirements. Meanwhile, the matching module concatenates the heterologous image blocks and feeds them into the network as a whole, which not only helps the information of the heterologous image blocks to merge and thereby improves the accuracy of the network, but also keeps the network structure simpler. In order to retain more of the information in the input data, the present invention classifies the fused feature map of the heterologous image blocks rather than a concatenated feature vector, which is conducive to improving network performance. The proposed deep-learning-based heterologous image matching method is superior to other methods not only in matching performance but also in training efficiency.
The polygonal contour similarity detection method provided by the invention improves the machine's ability to resolve shape similarity; the detection results have strong stability and reliability; the detection time is short, the computation is efficient, and the implementation cost is low. The invention inspects only the edges of a figure, which reduces the amount of data processing. By constructing the feature matrix of a figure, choosing suitable decision criteria, and applying several enhancing nonlinear transformations to the elements of the feature matrix, the similarity measure is built from the extreme values and the weighted average of multiple criteria, so the algorithm is efficient and has strong stability.
The fusion method provided by the invention improves on the percentile fusion method; its complexity is low and it is easy to implement. It is mainly suited to objective video quality evaluation algorithms based on frame-level quality computation; by taking the correlation between frames into account and averaging the data of each frame with a sliding window, the estimation accuracy is greatly improved.
Brief description of the drawings
Fig. 1 is a flow chart of the spatial image retrieval method based on multi-feature fusion provided by an embodiment of the present invention.
Fig. 2 is a structural block diagram of the spatial image retrieval system based on multi-feature fusion provided by an embodiment of the present invention.
In the figures: 1, input module; 2, main control module; 3, image detection module; 4, feature extraction module; 5, similarity measurement module; 6, feature fusion module; 7, matching module; 8, display module.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the embodiments. It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The application principle of the invention is further described below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the spatial image retrieval method based on multi-feature fusion provided by an embodiment of the present invention includes the following steps:
S101: retrieval image information is input through the input module using a keyboard;
S102: the main control module detects the corresponding image information according to the input retrieval information through the image detection module using a detection program;
S103: the original image features of the related images and the query image among the detected images are extracted through the feature extraction module using an extraction program; the original image features include the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF);
S104: the similarity between the related images and the query image is calculated through the similarity measurement module using the similarity program; multi-scale feature fusion is carried out according to the similarity between images through the feature fusion module using the fusion program, obtaining the final similarity between the query image and each candidate image;
S105: the target image is retrieved according to the final similarity through the matching module using a matching program;
S106: the retrieved target image is shown on a display through the display module.
As shown in Fig. 2, the spatial image retrieval system based on multi-feature fusion provided by an embodiment of the present invention comprises: an input module 1, a main control module 2, an image detection module 3, a feature extraction module 4, a similarity measurement module 5, a feature fusion module 6, a matching module 7 and a display module 8.
The input module 1 is connected to the main control module 2 and is used for inputting retrieval image information through a keyboard.
The main control module 2 is connected to the input module 1, the image detection module 3, the feature extraction module 4, the similarity measurement module 5, the feature fusion module 6, the matching module 7 and the display module 8, and is used for controlling the normal operation of each module through a single-chip microcomputer.
The image detection module 3 is connected to the main control module 2 and is used for detecting the corresponding image information according to the input retrieval information through a detection program.
The feature extraction module 4 is connected to the main control module 2 and is used for extracting, through an extraction program, the original image features of the related images and the query image among the detected images; the original image features include the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF).
The similarity measurement module 5 is connected to the main control module 2 and is used for calculating the similarity between the related images and the query image through a similarity program.
The feature fusion module 6 is connected to the main control module 2 and is used for carrying out multi-scale feature fusion according to the similarity between images through a fusion program to obtain the final similarity between the query image and each candidate image.
The matching module 7 is connected to the main control module 2 and is used for retrieving the target image according to the final similarity through a matching program.
The display module 8 is connected to the main control module 2 and is used for showing the retrieved target image through a display.
The extraction method of the feature extraction module 4 provided by the invention is as follows:
(1) setting the size of the gradient-operator matrix;
(2) calculating the gradient of each pixel of the gradient-operator matrix;
(3) determining the gradient interval to which the gradient of each pixel belongs;
(4) calculating the gradient length of each pixel from its gradient;
(5) calculating the block eigenvalue.
The step of calculating the gradient of each pixel of the gradient-operator matrix provided by the invention includes:
calculating the initial gray level G_0(x, y) of each pixel;
applying a Gamma transformation to the initial gray level G_0(x, y) to obtain the optimized gray level G(x, y);
calculating, from the optimized gray level G(x, y) of each pixel and the gradient operator G_o of each pixel in the X and Y directions, the gradients d_x and d_y of each pixel in the X and Y directions;
Gradient of each pixel in the X direction provided by the invention:
d_x = 3·G(x+3, y) + 2·G(x+2, y) + G(x+1, y) − 3·G(x−3, y) − 2·G(x−2, y) − G(x−1, y),
where G(x+1, y), G(x+2, y) and G(x+3, y) denote the optimized gray levels of the first, second and third pixels after the center pixel in the horizontal direction, and G(x−1, y), G(x−2, y) and G(x−3, y) denote the optimized gray levels of the first, second and third pixels before the center pixel in the horizontal direction;
Gradient of each pixel in the Y direction:
d_y = 3·G(x, y+3) + 2·G(x, y+2) + G(x, y+1) − G(x, y−1) − 2·G(x, y−2) − 3·G(x, y−3),
where G(x, y+1), G(x, y+2) and G(x, y+3) denote the optimized gray levels of the first, second and third pixels after the center pixel in the vertical direction, and G(x, y−1), G(x, y−2) and G(x, y−3) denote the optimized gray levels of the first, second and third pixels before the center pixel in the vertical direction. A sketch of steps (3) to (5) is given below.
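Steps (3) to (5) (gradient interval, gradient length and block eigenvalue) are not defined in detail here, so the following sketch is only one assumed reading: the orientation of each gradient is quantized into a larger number of intervals than the usual 9 of HOG, the gradient magnitudes are accumulated per interval over fixed non-overlapping blocks (no sliding cells), and the resulting per-block histogram stands in for the block eigenvalue.

```python
import numpy as np

def block_descriptor(dx, dy, n_bins=18, block=16):
    """Assumed reading of steps (3)-(5): per-block orientation histograms
    weighted by gradient magnitude, with no sliding cells."""
    mag = np.hypot(dx, dy)                                   # gradient length
    ang = np.mod(np.arctan2(dy, dx), np.pi)                  # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    H, W = dx.shape
    feats = []
    for y in range(0, H - block + 1, block):                 # non-overlapping blocks
        for x in range(0, W - block + 1, block):
            b = bins[y:y + block, x:x + block].ravel()
            m = mag[y:y + block, x:x + block].ravel()
            hist = np.bincount(b, weights=m, minlength=n_bins)
            feats.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(feats)

# Example with random stand-in gradient fields.
rng = np.random.default_rng(0)
dx, dy = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
print(block_descriptor(dx, dy).shape)   # 16 blocks x 18 bins = (288,)
```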
The matching process of the matching module 7 provided by the invention is as follows:
1) using the VIS-NIR data set, making the data set of heterologous images to be matched, obtaining one training set and eight test sets;
2) preprocessing all the heterologous images to be matched to obtain preprocessed heterologous images;
3) obtaining image-block feature maps: concatenating the image block A and the image block B of each pair of preprocessed heterologous images side by side, extracting features from the concatenated input with an improved VGG network to obtain the feature map of the input image, then splitting the resulting feature map into left and right halves, obtaining a feature map V corresponding to image block A and a feature map N corresponding to image block B;
4) feature-map fusion: taking the difference between the feature map V and the feature map N obtained in step 3), normalizing the difference, and obtaining the fused feature map;
5) training the image matching network: performing binary classification on the fused feature map obtained in step 4) with a fully connected layer and a cross-entropy loss function, obtaining the weights of the matching network;
6) predicting the matching probability: loading the matching-network weights trained in step 5) into the model, reading all the test-set data in turn, and obtaining the matched/unmatched prediction values output by the softmax classifier.
The feature map V and the feature map N obtained in step 3) according to the invention are also passed through global average pooling respectively, obtaining a feature vector v corresponding to image A and a feature vector n corresponding to image B;
meanwhile, according to the obtained feature vectors v and n, a contrastive loss function is used to maximize the average Euclidean distance between the feature vectors of non-matching image blocks and to minimize the average Euclidean distance between the feature vectors of matching image blocks.
The calculation of the contrastive loss provided by the invention comprises the following steps:
A. denoting the feature vectors of the feature map V and the feature map N after global average pooling as v and n respectively, and computing the average Euclidean distance D(n, v) of the feature vectors, where k is the dimension of the feature vectors;
B. using the contrastive loss function of formula (1) to maximize the average Euclidean distance between the feature vectors of non-matching image blocks and to minimize the average Euclidean distance between the feature vectors of matching image blocks,
where y is the true label of the input data, Q is a constant, e is the natural constant, and L(y, n, v) is the contrastive loss function. A sketch of the fusion and classification head of steps 3) to 5) is given below.
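To make the fusion and classification head concrete, here is a small numpy sketch of steps 3) to 5): the VGG backbone is abstracted away as a given feature map of the concatenated pair, and the weight shapes, the normalization choice and the softmax head are illustrative assumptions rather than the patent's exact configuration.

```python
import numpy as np

def fuse_and_classify(feat_map, W, b):
    """Split the feature map of a left-right concatenated pair, fuse by
    difference plus normalization, and classify match / non-match.

    feat_map : array (C, H, 2*W_half) from the shared backbone.
    W, b     : fully connected layer parameters, W of shape (2, C*H*W_half).
    """
    half = feat_map.shape[2] // 2
    V, N = feat_map[:, :, :half], feat_map[:, :, half:]    # maps for blocks A and B

    diff = V - N                                            # feature-map difference
    fused = (diff - diff.mean()) / (diff.std() + 1e-8)      # normalized fused map

    logits = W @ fused.ravel() + b                          # fully connected layer
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                  # softmax: [p_nonmatch, p_match]

# Example with random stand-ins for the backbone output and FC weights.
rng = np.random.default_rng(0)
fm = rng.normal(size=(64, 8, 16))                           # C=64, H=8, width 16 (two 8-wide halves)
W = rng.normal(size=(2, 64 * 8 * 8)) * 0.01
b = np.zeros(2)
print(fuse_and_classify(fm, W, b))
```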
The invention is further described below with reference to a concrete analysis.
The spatial image retrieval method based on multi-feature fusion provided by an embodiment of the present invention comprises:
calculating the similarity between the related images and the query image using the similarity program; specifically, an appropriate threshold is set according to the aspect ratio of the minimum bounding rectangle of a figure and used for filtering; a threshold is set according to the minimum ratio of each side length to the perimeter in the source figure, and the anomalous (singular) parts of the target figure are removed; the number of edges of the target figure is reduced so that it has the same number of edges as the source figure; and the Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source figure and the target figure are obtained (a sketch of this pre-filtering is given after this overview);
carrying out multi-scale feature fusion according to the similarity between images using the fusion program, to obtain the final similarity between the query image and each candidate image; specifically:
an objective video quality evaluation model OM is chosen; by comparing the original reference video with the distorted video, the predicted score of every frame of the distorted video is calculated, the acquired frame-level scores are recorded as a vector X, and the total number of frames of the video is recorded as N;
with a sliding window of length winLen, the acquired frame-level quality scores are filtered, that is, the frame-level score of the n-th frame after processing is the mean of the frame-level scores of frames [n-winLen+1, n]; the frame-level scores after sliding-window processing are recorded as a vector WX;
WX is sorted in ascending order and the sorted result is recorded as WX'; the average of the worst p% of frames is taken as the quality metric score of the entire video sequence, i.e. after sorting, the mean of the smallest p% of frames is the final measurement result.
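The pre-filtering mentioned at the start of this overview can be sketched as follows. The axis-aligned bounding box is used as a simple stand-in for the minimum bounding rectangle, and the relative tolerance and edge-length fraction are illustrative parameters; the patent's exact threshold-setting rules are not reproduced.

```python
import numpy as np

def prefilter_polygons(source, candidates, ratio_tol=0.2, min_edge_frac=0.02):
    """Coarse filtering before the feature-matrix comparison.

    source, candidates : polygons as (K, 2) arrays of vertices.
    ratio_tol     : allowed relative deviation of the bounding-box aspect ratio.
    min_edge_frac : edges shorter than this fraction of the perimeter are
                    treated as anomalous and removed.
    """
    def aspect_ratio(poly):
        w, h = poly.max(axis=0) - poly.min(axis=0)   # axis-aligned box as a stand-in
        return max(w, h) / max(min(w, h), 1e-9)

    def drop_short_edges(poly):
        edges = np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1)
        keep = edges / edges.sum() >= min_edge_frac  # remove anomalous short edges
        return poly[keep]

    ref = aspect_ratio(source)
    kept = []
    for poly in candidates:
        if abs(aspect_ratio(poly) - ref) <= ratio_tol * ref:
            kept.append(drop_short_edges(poly))
    return kept

# Example: a unit square as the source and two candidate quadrilaterals.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
cands = [square * 2.0, np.array([[0, 0], [3, 0], [3, 0.5], [0, 0.5]], float)]
print(len(prefilter_polygons(square, cands)))   # the elongated candidate is dropped
```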
The method includes:
First, the feature matrices P_E and Q_E of the source figure P and the target figure Q are built in the counterclockwise direction:
P_E = [P_1^T  P_2^T  …  P_{2N-1}^T  P_{2N}^T];
Q_E = [Q_1^T  Q_2^T  …  Q_{2N-1}^T  Q_{2N}^T];
The Euclidean distance formula d(x, y) and the included-angle cosine formula sim(x, y) are
d(x, y) = sqrt( Σ_i (x_i − y_i)² ),  sim(x, y) = (x · y) / (‖x‖ · ‖y‖);
On the basis of d(x, y) and sim(x, y), two matrices D and S are defined such that D_ij = d(P_i, Q_j) and S_ij = sim(P_i, Q_j);
The extreme values of D and S are then found by letting
Eu_e = min{ D_ij }, 1 ≤ i, j ≤ 2N;  Sim_e = max{ S_ij }, 1 ≤ i, j ≤ 2N;
Then the feature matrices of figures P and Q are constructed again in the clockwise direction, the above calculation is repeated, and the corresponding extreme values between the most similar vectors of the two feature matrices, Eu_c and Sim_c, are found;
Finally let Eu = min{ Eu_e, Eu_c };
Sim = min{ Sim_e, Sim_c };
Eu and Sim are the Euclidean distance and the maximum similarity coefficient of the most similar corresponding vectors of the two figures P and Q.
After obtaining, from the feature matrices of the source figure and the target figure, the Euclidean distance and the maximum similarity coefficient of the most similar vectors, an enhancement of the calculated result is also carried out, comprising:
deforming the initial vector one or more times: on the basis of constructing the initial vector from the sequence of adjacent interior angles, geometric features of the figure are added, and the ratios of adjacent interior angles taken in order are used as a new initial vector; the initial vector is further subjected to one or more nonlinear transformations, for example a square-root transformation;
carrying out the similarity calculation several times on the deformed initial vectors and finally taking a weighted average; the evaluation formulas for the Euclidean distance Eu and the similarity coefficient Sim are
Eu(P, Q) = Σ_{i=1}^{n} k_i · Eu_i  and  Sim(P, Q) = Σ_{i=1}^{n} k_i · Sim_i,
where n is the number of vector deformations, k_i are the weight coefficients, Eu_i and Sim_i are the Euclidean distance and similarity coefficient after the i-th deformation, and Eu(P, Q) is the Euclidean-distance evaluation; here n = 4 and k_i = 0.25.
Multi-scale feature fusion is carried out according to the similarity between images using the fusion program to obtain the final similarity between the query image and each candidate image;
All the frame-level scores calculated by the OM model are processed with the sliding window in turn, that is:
WX(n) = (1 / winLen) · Σ_{t = n − winLen + 1}^{n} X(t),
where winLen is the window length of the sliding-window filter and is a parameter to be tuned, X(t) is the quality score of frame t, and WX(n) is the quality score of the n-th frame after sliding-window processing;
The predicted frame-level scores are fused using the temporal-information fusion method based on inter-frame correlation and worst time slots, giving the final predicted score
OM_winPooling = (1 / ⌈N · p%⌉) · Σ_{t=1}^{⌈N · p%⌉} WX'(t),
where p% is a parameter to be tuned, N is the total number of frames of the video, WX'(t) is the t-th frame-level score after sorting in ascending order, and OM_winPooling is the final quality assessment result of the video.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware or any combination thereof. When implemented in whole or in part in the form of a computer program product, the computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions described in the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one web site, computer, server or data center to another web site, computer, server or data center by wired means (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or by wireless means (such as infrared, radio or microwave). The computer-readable storage medium may be any usable medium that the computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).
The foregoing is merely a description of the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A spatial image retrieval method based on multi-feature fusion, characterized in that the spatial image retrieval method based on multi-feature fusion comprises:
calculating the similarity between the related images and the query image using a similarity program; specifically, setting an appropriate threshold according to the aspect ratio of the minimum bounding rectangle of a figure and filtering with it; setting a threshold according to the minimum ratio of each side length to the perimeter in the source figure and removing the anomalous (singular) parts of the target figure; reducing the number of edges of the target figure so that it has the same number of edges as the source figure; and obtaining the Euclidean distance and the maximum similarity coefficient of the most similar vectors in the feature matrices of the source figure and the target figure;
carrying out multi-scale feature fusion according to the similarity between images using a fusion program, to obtain the final similarity between the query image and each candidate image; specifically:
choosing an objective video quality evaluation model OM; by comparing the original reference video with the distorted video, calculating the predicted score of every frame of the distorted video, recording the acquired frame-level scores as a vector X and the total number of frames of the video as N;
filtering the acquired frame-level quality scores with a sliding window of length winLen, that is, the frame-level score of the n-th frame after processing is the mean of the frame-level scores of frames [n-winLen+1, n], and recording the frame-level scores after sliding-window processing as a vector WX;
sorting WX in ascending order and recording the sorted result as WX'; taking the average of the worst p% of frames as the quality metric score of the entire video sequence, i.e. after sorting, the mean of the smallest p% of frames is the final measurement result.
2. The spatial image retrieval method based on multi-feature fusion according to claim 1, characterized in that
multi-scale feature fusion is carried out according to the similarity between images using the fusion program to obtain the final similarity between the query image and each candidate image;
all the frame-level scores calculated by the OM model are processed with the sliding window in turn, that is:
WX(n) = (1 / winLen) · Σ_{t = n − winLen + 1}^{n} X(t),
where winLen is the window length of the sliding-window filter and is a parameter to be tuned, X(t) is the quality score of frame t, and WX(n) is the quality score of the n-th frame after sliding-window processing;
the predicted frame-level scores are fused using the temporal-information fusion method based on inter-frame correlation and worst time slots, giving the final predicted score
OM_winPooling = (1 / ⌈N · p%⌉) · Σ_{t=1}^{⌈N · p%⌉} WX'(t),
where p% is a parameter to be tuned, N is the total number of frames of the video, WX'(t) is the t-th frame-level score after sorting in ascending order, and OM_winPooling is the final quality assessment result of the video.
3. The spatial image retrieval method based on multi-feature fusion according to claim 1, characterized in that the spatial image retrieval method based on multi-feature fusion comprises:
Step 1: inputting retrieval image information through the input module using a keyboard;
Step 2: detecting, by the main control module through the image detection module using a detection program, the corresponding image information according to the input retrieval information;
Step 3: extracting, through the feature extraction module using an extraction program, the original image features of the related images and the query image among the detected images, the original image features including the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF);
Step 4: calculating the similarity between the related images and the query image through the similarity measurement module using the similarity program; carrying out multi-scale feature fusion according to the similarity between images through the feature fusion module using the fusion program, to obtain the final similarity between the query image and each candidate image;
Step 5: retrieving the target image according to the final similarity through the matching module using a matching program;
Step 6: showing the retrieved target image on a display through the display module.
4. The spatial image retrieval method based on multi-feature fusion according to claim 3, characterized in that the extraction method of the feature extraction module is as follows:
(1) setting the size of the gradient-operator matrix;
(2) calculating the gradient of each pixel of the gradient-operator matrix;
(3) determining the gradient interval to which the gradient of each pixel belongs;
(4) calculating the gradient length of each pixel from its gradient;
(5) calculating the block eigenvalue.
5. The spatial image retrieval method based on multi-feature fusion according to claim 3, characterized in that the matching process of the matching module comprises:
1) using the VIS-NIR data set, making the data set of heterologous images to be matched, obtaining one training set and eight test sets;
2) preprocessing all the heterologous images to be matched to obtain preprocessed heterologous images;
3) obtaining image-block feature maps: concatenating the image block A and the image block B of each pair of preprocessed heterologous images side by side, extracting features from the concatenated input with an improved VGG network to obtain the feature map of the input image, then splitting the resulting feature map into left and right halves, obtaining a feature map V corresponding to image block A and a feature map N corresponding to image block B;
4) feature-map fusion: taking the difference between the feature map V and the feature map N obtained in step 3), normalizing the difference, and obtaining the fused feature map;
5) training the image matching network: performing binary classification on the fused feature map obtained in step 4) with a fully connected layer and a cross-entropy loss function, obtaining the weights of the matching network;
6) predicting the matching probability: loading the matching-network weights trained in step 5) into the model, reading all the test-set data in turn, and obtaining the matched/unmatched prediction values output by the softmax classifier;
the feature map V and the feature map N obtained in step 3) are also passed through global average pooling respectively, obtaining a feature vector v corresponding to image A and a feature vector n corresponding to image B;
according to the obtained feature vectors v and n, a contrastive loss function is used to maximize the average Euclidean distance between the feature vectors of non-matching image blocks and to minimize the average Euclidean distance between the feature vectors of matching image blocks.
6. A computer program for spatial image retrieval based on multi-feature fusion, characterized in that the computer program implements the spatial image retrieval method based on multi-feature fusion according to any one of claims 1 to 5.
7. A terminal, characterized in that the terminal at least carries a controller implementing the spatial image retrieval method based on multi-feature fusion according to any one of claims 1 to 5.
8. A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to execute the spatial image retrieval method based on multi-feature fusion according to any one of claims 1 to 5.
9. A spatial image retrieval system based on multi-feature fusion implementing the spatial image retrieval method based on multi-feature fusion according to claim 1, characterized in that the spatial image retrieval system based on multi-feature fusion comprises:
an input module, connected to the main control module, for inputting retrieval image information through a keyboard;
a main control module, connected to the input module, the image detection module, the feature extraction module, the similarity measurement module, the feature fusion module, the matching module and the display module, for controlling the normal operation of each module through a single-chip microcomputer;
an image detection module, connected to the main control module, for detecting the corresponding image information according to the input retrieval information through a detection program;
a feature extraction module, connected to the main control module, for extracting, through an extraction program, the original image features of the related images and the query image among the detected images, the original image features including the color-enhanced Laplacian (CLOG) feature and the fast robust feature (SURF);
a similarity measurement module, connected to the main control module, for calculating the similarity between the related images and the query image through a similarity program;
a feature fusion module, connected to the main control module, for carrying out multi-scale feature fusion according to the similarity between images through a fusion program to obtain the final similarity between the query image and each candidate image;
a matching module, connected to the main control module, for retrieving the target image according to the final similarity through a matching program;
a display module, connected to the main control module, for showing the retrieved target image through a display.
10. A medical examination device at least carrying the spatial image retrieval system based on multi-feature fusion according to claim 9.
CN201811273146.8A 2018-10-30 2018-10-30 Spatial image retrieval system and retrieval method based on multi-feature fusion Pending CN109299305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811273146.8A CN109299305A (en) Spatial image retrieval system and retrieval method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811273146.8A CN109299305A (en) Spatial image retrieval system and retrieval method based on multi-feature fusion

Publications (1)

Publication Number Publication Date
CN109299305A true CN109299305A (en) 2019-02-01

Family

ID=65158936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811273146.8A Pending CN109299305A (en) Spatial image retrieval system and retrieval method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN109299305A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084295A (en) * 2019-04-19 2019-08-02 广东石油化工学院 Control method and control system are surrounded in a kind of grouping of multi-agent system
CN110321451A (en) * 2019-04-25 2019-10-11 吉林大学 Image retrieval algorithm based on Distribution Entropy gain loss function
CN111078940A (en) * 2019-12-16 2020-04-28 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer storage medium and electronic equipment
CN111159456A (en) * 2019-12-30 2020-05-15 云南大学 Multi-scale clothing retrieval method and system based on deep learning and traditional features
CN111191612A (en) * 2019-12-31 2020-05-22 深圳云天励飞技术有限公司 Video image matching method and device, terminal equipment and readable storage medium
CN111653088A (en) * 2020-04-21 2020-09-11 长安大学 Vehicle driving quantity prediction model construction method, prediction method and system
CN113129330A (en) * 2020-01-14 2021-07-16 北京地平线机器人技术研发有限公司 Track prediction method and device for movable equipment
CN114485668A (en) * 2022-01-17 2022-05-13 上海卫星工程研究所 Optical double-star positioning multi-moving-target association method and system


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104142978A (en) * 2014-07-14 2014-11-12 重庆邮电大学 Image retrieval system and image retrieval method based on multi-feature and sparse representation
CN105354866A (en) * 2015-10-21 2016-02-24 郑州航空工业管理学院 Polygon contour similarity detection method
CN105979266A (en) * 2016-05-06 2016-09-28 西安电子科技大学 Interframe relevance and time slot worst based time domain information fusion method
CN108182442A (en) * 2017-12-29 2018-06-19 惠州华阳通用电子有限公司 A kind of image characteristic extracting method
CN108537264A (en) * 2018-03-30 2018-09-14 西安电子科技大学 Heterologous image matching method based on deep learning
CN108563767A (en) * 2018-04-19 2018-09-21 深圳市商汤科技有限公司 Image search method and device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084295A (en) * 2019-04-19 2019-08-02 广东石油化工学院 Control method and control system are surrounded in a kind of grouping of multi-agent system
CN110321451B (en) * 2019-04-25 2022-08-05 吉林大学 Image retrieval algorithm based on distribution entropy gain loss function
CN110321451A (en) * 2019-04-25 2019-10-11 吉林大学 Image retrieval algorithm based on Distribution Entropy gain loss function
CN111078940A (en) * 2019-12-16 2020-04-28 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer storage medium and electronic equipment
CN111078940B (en) * 2019-12-16 2023-05-23 腾讯科技(深圳)有限公司 Image processing method, device, computer storage medium and electronic equipment
CN111159456B (en) * 2019-12-30 2022-09-06 云南大学 Multi-scale clothing retrieval method and system based on deep learning and traditional features
CN111159456A (en) * 2019-12-30 2020-05-15 云南大学 Multi-scale clothing retrieval method and system based on deep learning and traditional features
CN111191612A (en) * 2019-12-31 2020-05-22 深圳云天励飞技术有限公司 Video image matching method and device, terminal equipment and readable storage medium
CN113129330A (en) * 2020-01-14 2021-07-16 北京地平线机器人技术研发有限公司 Track prediction method and device for movable equipment
CN113129330B (en) * 2020-01-14 2024-05-10 北京地平线机器人技术研发有限公司 Track prediction method and device for movable equipment
CN111653088A (en) * 2020-04-21 2020-09-11 长安大学 Vehicle driving quantity prediction model construction method, prediction method and system
CN114485668A (en) * 2022-01-17 2022-05-13 上海卫星工程研究所 Optical double-star positioning multi-moving-target association method and system
CN114485668B (en) * 2022-01-17 2023-09-22 上海卫星工程研究所 Optical double-star positioning multi-moving-object association method and system

Similar Documents

Publication Publication Date Title
CN109299305A (en) Spatial image retrieval system and retrieval method based on multi-feature fusion
Shi et al. An image mosaic method based on convolutional neural network semantic features extraction
Liu et al. RGB-D joint modelling with scene geometric information for indoor semantic segmentation
Yin et al. An optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images
Feng et al. Bag of visual words model with deep spatial features for geographical scene classification
CN108805102A (en) A kind of video caption detection and recognition methods and system based on deep learning
Tan et al. Automobile Component Recognition Based on Deep Learning Network with Coarse‐Fine‐Grained Feature Fusion
Wang et al. Insulator defect detection based on improved you-only-look-once v4 in complex scenarios
CN105844299B (en) A kind of image classification method based on bag of words
Nan et al. Infrared object image instance segmentation based on improved mask-RCNN
Liu et al. Moving object detection based on improved ViBe algorithm
Yan et al. Alpha matting with image pixel correlation
CN110503110A (en) Feature matching method and device
Xiu et al. Double discriminative face super-resolution network with facial landmark heatmaps
Shi et al. Real-time saliency detection for greyscale and colour images
Zheng et al. Research on Target Detection Algorithm of Bank Card Number Recognition
Li et al. A method of inpainting moles and acne on the high‐resolution face photos
Li et al. Nonlocal variational model for saliency detection
Ma et al. Salient object detection via light-weight multi-path refinement networks
Han et al. Effective search space reduction for human pose estimation with Viterbi recurrence algorithm
Jin et al. A vehicle detection algorithm in complex traffic scenes
Arsirii et al. Architectural objects recognition technique in augmented reality technologies based on creating a specialized markers base
Wang et al. Image Semantic Segmentation Algorithm Based on Self-learning Super-Pixel Feature Extraction
Zhong et al. Sequence recognition of natural scene house number based on convolutional neural network
Wang et al. A Document Image Quality Assessment Method Based on Feature Fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190201