CN107506429A - An image re-ranking method based on salient regions and similarity fusion

An image re-ranking method based on salient regions and similarity fusion

Info

Publication number
CN107506429A
CN107506429A (application CN201710721851.9A)
Authority
CN
China
Prior art keywords
image
similarity
salient region
region
result
Prior art date
Legal status
Pending
Application number
CN201710721851.9A
Other languages
Chinese (zh)
Inventor
刘宏哲 (LIU Hongzhe)
赵小艳 (ZHAO Xiaoyan)
张子帅 (ZHANG Zishuai)
Current Assignee
Beijing Union University
Original Assignee
Beijing Union University
Priority date
Filing date
Publication date
Application filed by Beijing Union University
Priority to CN201710721851.9A
Publication of CN107506429A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval using metadata automatically derived from the content
    • G06F16/5838 Retrieval using metadata automatically derived from the content, using colour
    • G06F16/5862 Retrieval using metadata automatically derived from the content, using texture

Abstract

The present invention discloses an image re-ranking method based on salient regions and similarity fusion. The contrast between foreground and background is used to extract the salient region of each image, which is then normalized. For the image features within the salient region, an iterative computation is carried out by the similarity-fusion method, repeatedly comparing and ranking the similarity between images; irrelevant images are discarded and the remaining images are ranked toward maximum likelihood. Compared with conventional methods, noise is eliminated effectively, and salient regions of different sizes and shapes are normalized so that feature comparison within the salient region can be carried out flexibly, greatly improving both focus and accuracy. The invention has the advantages of concise computation, a well-defined target region, and efficient image feature extraction; the re-ranked results agree closely with the actual visual ordering, making the method suitable for accurate re-ranking after an initial retrieval.

Description

An image re-ranking method based on salient regions and similarity fusion
Technical field
The invention belongs to the field of digital image processing, and more particularly relates to an image re-ranking method based on salient regions and similarity fusion.
Technical background
Image re-ranking is a fundamental research problem of digital image processing and computer vision. With the rapid development of multimedia and network technology, the popularization of image and video capture devices, and the falling cost of storage, and particularly with the quiet arrival of the Web 2.0 era, large image and video sharing websites such as Flickr, Facebook, YouTube and Youku have sprung up like bamboo shoots after a spring rain, and millions of images and videos are produced and shared on the network every day. A corresponding research hotspot on the Internet is the processing of massive data. Text search technology is very mature and largely meets the needs of users, but in some cases users would rather obtain results they are satisfied with, presented in a suitable order for browsing. For these image resources, however, the results obtained by many search engines are often unsatisfactory, because the engines ignore visual content as a ranking criterion. In the current field of image re-ranking, researchers are striving to add various features to achieve a more intelligent rearrangement of query results, among which visual features are the most intuitive and most needed content element for people. According to investigations, about 75% of the external information humans obtain from nature comes from visual perception. An image is a concise, intuitive and visually expressive form of multimedia information; it vividly describes all kinds of objectively existing things in nature and is the most versatile information carrier in human daily life. Visual activity, that is, viewing images, is our most direct way of contacting the outside world, and vision has therefore become a very important part of the field of image retrieval.
In many cases the actual search results cannot meet the user's expectation: the retrieval results are arranged in disorder, leaving the user to blindly search through them for the desired results using vision. In this situation, visual re-ranking on the basis of the retrieval results becomes particularly important, so that results closer to what is wanted from the original image are ranked first, which can greatly increase user satisfaction. In recent years image re-ranking has received attention in the fields of image retrieval, data analysis, education and Internet commerce, and has become one of the hot topics of research in these fields.
The present invention provides an improved method for the initial results of image retrieval, enabling results more relevant to the query image to be ranked first.
Summary of the invention
Traditional image feature extraction carries a large amount of information with weak focus; useful and useless information among the image features cannot be well distinguished or treated differently. The present invention seeks a method that can accurately extract the core information: a salient region is first delimited, the image features within the salient region serve as the information source, and several iterations of the similarity-fusion method then allow similarity ranking between images to be carried out well. This is the image re-ranking method based on salient regions and similarity fusion, which achieves accurate re-ranking.
To achieve the above object, the present invention adopts the following technical scheme:
An image re-ranking method based on salient regions and similarity fusion comprises the following steps:
Step 1: The original image is retrieved in a search engine, giving an initial retrieval result in which disordered images related to the original image appear. A grayscale saliency map is computed for the original image and for the images of the initial retrieval result, giving the significance C_{i,j} of each salient region; the final saliency value P_k of each image is compared with a threshold T, and the regions whose contrast exceeds 10% of the maximum are extracted as the salient region;
Step 2: The shape and size of the salient regions of the original image and of the images of the initial retrieval result are normalized, to facilitate feature extraction from the salient region;
Step 3: Color and texture features are extracted from each normalized salient region;
Step 4: The image features of the original image and of the initial retrieval result are compared and ranked with the similarity-fusion method.
Preferably, in step 1, the significance of a region is obtained from the distance between the mean feature vector of the pixels of an image region and the feature vector of its neighborhood.
Preferably, step 4 comprises the following steps:
4.1: Each extracted feature serves as a basic element; using the Mahalanobis distance, each feature element is computed into the matrix S = [S_ij]_{N×N}, where a matrix-form similarity comparison is carried out, so that results more similar to the salient region of the original image are ranked at the front;
4.2: The top 90% of the ranking obtained in the previous step is taken and the similarity-fusion comparison is carried out again; the comparison is iterated several times, giving a result in which images of higher similarity are more accurately ranked at the front.
Brief description of the drawings
Fig. 1 illustrates the extraction of the grayscale saliency map;
Fig. 2 is the flow chart of the present invention.
Embodiment
As shown in Fig. 2, the present invention provides an image re-ranking method based on salient regions and similarity fusion, which specifically comprises the following steps:
Step 1: The original image is retrieved in a search engine, giving an initial retrieval result in which disordered images related to the original image appear; salient-region extraction is carried out on the original image and on the images of the initial retrieval result.
Step 1-1: The significance of a region is obtained from the distance between the mean feature vector of the pixels of the image region and the feature vector of its neighborhood. The concrete procedure is shown in Fig. 1, where region A is a region of fixed size and region B is a region of variable (multi-scale) size containing A. Traversing the entire image from left to right and from top to bottom, the feature distance between A and B is computed and mapped into the range 0-255, giving a grayscale saliency map. Here three scales are chosen for B; at each scale the feature distance between A and B is taken as the significance, giving three grayscale saliency maps. Averaging the gray values of corresponding pixels (indexed by i, j) of the three maps yields a saliency map of the same size as the original image.
At a particular scale of B, the significance of image region A is:
C_{i,j} = D( (1/N_1)·Σ_{p∈A} v_p , (1/N_2)·Σ_{q∈B} v_q )   (1-1)
where N_1 and N_2 are the numbers of pixels in regions A and B respectively, v_p and v_q are the feature vectors of the pixels of regions A and B, and D is the Euclidean distance. For an image of width w pixels and height H pixels, the variation range of the scale W_B of region B is:
w/2 ≥ W_B ≥ w/8   (1-2)
Here it is assumed that w is less than H. The scale is restricted to this range because, when the scale of B is too large, many non-salient regions would be judged salient, while when the scale of B is too small, the salient region obtained consists mainly of the edges of objects. Three scales W_B are chosen within the range of formula (1-2); the final saliency map M is the sum of the significances over the different scale sizes S, as in the formula:
m_{i,j} = Σ_S C_{i,j}   (1-3)
where m_{i,j} is the final saliency value of the corresponding pixel of the image.
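The multi-scale center-surround computation of formulas (1-1) to (1-3) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the scale set, the inner half-width `a`, and the function name `grayscale_saliency` are choices made for the sketch, and a real CIELab feature image would replace the toy input.

```python
import numpy as np

def grayscale_saliency(feat, sizes=(8, 16, 32), a=3):
    """Sketch of step 1-1: multi-scale center-surround saliency.

    feat  : H x W x C feature image (e.g. CIELab channels).
    sizes : assumed surround half-widths W_B (three scales).
    a     : half-width of the fixed inner region A.
    """
    H, W, C = feat.shape
    sal = np.zeros((H, W))
    for wb in sizes:                                  # one map per scale
        for i in range(H):
            for j in range(W):
                # mean feature vector of the inner region A
                A = feat[max(0, i - a):i + a + 1, max(0, j - a):j + a + 1]
                # mean feature vector of the surround region B containing A
                B = feat[max(0, i - wb):i + wb + 1, max(0, j - wb):j + wb + 1]
                vA = A.reshape(-1, C).mean(axis=0)
                vB = B.reshape(-1, C).mean(axis=0)
                sal[i, j] += np.linalg.norm(vA - vB)  # Euclidean distance D
    sal /= len(sizes)                                 # average the scale maps
    rng = sal.max() - sal.min()
    # map the feature distances into the 0-255 grayscale range
    return (sal - sal.min()) / rng * 255 if rng > 0 else sal
```

A bright patch on a flat background then scores higher than the background itself, which is the behaviour the step relies on.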
Step 1-2: The image is over-segmented with the K-means method, the K initial points being chosen automatically by a hill-climbing algorithm in the three-dimensional CIELab histogram of the image. After the K initial points are obtained, clustering yields the segmented regions r_k (k = 1, 2, …, K). Using the saliency map M obtained above, the saliency values of the pixels of each segmented region are averaged to give P_k, as in the formula:
P_k = (1/|r_k|)·Σ_{(i,j)∈r_k} m_{i,j}   (1-4)
where |r_k| is the pixel count of each segmented region. A threshold T is set; regions whose saliency value P_k is below T are removed, and what remains is exactly the salient region of the image. The choice of T is empirical, and is typically 10% of the maximum significance of the grayscale saliency map.
Step 2: The shape and size of the salient regions are normalized, which makes comparison after feature extraction from the salient region more convenient. The detailed process is: coordinate centering, x-shearing normalization, scaling normalization, and rotation normalization; the corresponding affine transform decomposes into a translation, a shear, a scaling, and a rotation.
Moments are one method of describing a region. The (p+q)-order moment of a region f(x, y) is defined as:
m_pq = Σ_x Σ_y x^p y^q f(x, y),  p, q = 0, 1, 2, …   (2-1)
and the central moments we can define are:
μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q f(x, y)   (2-2)
where x̄ = m_10/m_00 and ȳ = m_01/m_00 (for example, m_10 is the geometric moment with p = 1, q = 0).
The covariance matrix M of the image is defined as:
M = (1/m_00) · [ μ_20  μ_11 ; μ_11  μ_02 ]   (2-3)
(for example, μ_20 is the central moment with p = 2, q = 0).
γ_1 and γ_2 are the eigenvalues of M, and [e_1x e_1y]^T, [e_2x e_2y]^T the corresponding eigenvectors, computed as follows:
γ_{1,2} = ( (μ_20 + μ_02) ± √( (μ_20 − μ_02)² + 4·μ_11² ) ) / (2·m_00)   (2-4)
From these one obtains:
f̂(x, y) = f( R · S · H · (x − x̄, y − ȳ)^T )   (2-5)
where f̂ represents the final result after the three normalization steps of translation, scale normalization and rotation normalization, and the factors, read from right to left, are coordinate centering, stretching in the x and y directions (the shear H), scaling normalization (S), and rotation normalization (R).
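The moment-based normalization of step 2 can be illustrated on a point set. This is a hedged sketch under simplifying assumptions: the region is treated as a binary point cloud (f(x, y) = 1), only centering, rotation and scaling are shown, and the shear step is omitted; the function name is invented for the illustration.

```python
import numpy as np

def normalize_region(points):
    """Sketch of step 2: moment-based normalization of a salient region.

    points : N x 2 array of (x, y) pixel coordinates of one region.
    Returns coordinates after translation (centering), rotation and
    scaling normalization, so regions of different size, orientation
    and elongation become directly comparable.
    """
    # centroid from the geometric moments: (m10/m00, m01/m00)
    centroid = points.mean(axis=0)
    centered = points - centroid                 # coordinate centering
    # covariance matrix built from the central moments mu_pq / m00
    M = centered.T @ centered / len(points)
    vals, vecs = np.linalg.eigh(M)               # gamma_1, gamma_2 and e_1, e_2
    rotated = centered @ vecs                    # rotation normalization
    scale = np.sqrt(vals.clip(min=1e-12))
    return rotated / scale                       # scaling normalization
```

After normalization the point cloud has zero mean and unit covariance, which is what makes subsequent feature comparison between regions consistent.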
Step 3: Feature extraction from the salient region, including color and texture features;
Step 3-1: The color feature is extracted with the color layout method. In the color layout descriptor, the image is partitioned into 8 × 8 blocks, the mean color of each block is taken to form a matrix of color means, a two-dimensional discrete cosine transform is applied to it, and the low-frequency components are kept as the color feature. The extraction method of the color layout descriptor is as follows:
(1) The entire image is divided into 8 × 8 blocks, and the mean of all pixels of the three RGB color channels is computed in each block and taken as the representative (dominant) color of the block.
(2) The color means of the blocks are subjected to the discrete cosine transform (DCT), giving a matrix of DCT coefficients. The DCT is a separable transform and is the basis of the international still-image compression standard JPEG. Because the high-frequency components of most images are small, the coefficients of the high-frequency components are often near zero; moreover, the human eye is less sensitive to distortion of the high-frequency components, so they can be quantized coarsely. In general retrieval, therefore, part of the DCT coefficients can be used as the feature vector.
(3) The DCT coefficient matrix is zigzag-scanned and quantized, giving the DCT coefficients.
(4) For each of the R, G and B channels, 4 low-frequency components are taken from the DCT coefficients, giving 12 parameters that together form the color feature vector of the image.
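The four numbered steps above can be sketched as follows. `dct2` is a hand-rolled orthonormal 2-D DCT-II; keeping only the first four zigzag positions and skipping the quantization step are simplifications made for the sketch, and both function names are assumptions.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block via its basis matrix."""
    N = block.shape[0]
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2 / N)
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def color_layout(img):
    """Sketch of step 3-1: a simplified color layout descriptor.

    img : H x W x 3 RGB image.  The image is partitioned into an 8x8
    grid of blocks, each block replaced by its mean color, each 8x8
    channel plane transformed with the DCT, and 4 low-frequency
    coefficients per channel kept, giving 12 parameters.
    """
    H, W, _ = img.shape
    grid = np.zeros((8, 8, 3))
    ys = np.linspace(0, H, 9, dtype=int)
    xs = np.linspace(0, W, 9, dtype=int)
    for i in range(8):
        for j in range(8):
            grid[i, j] = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean(axis=(0, 1))
    zigzag = [(0, 0), (0, 1), (1, 0), (2, 0)]   # first 4 zigzag positions
    feat = []
    for c in range(3):                          # R, G, B channels
        coef = dct2(grid[:, :, c])
        feat.extend(coef[i, j] for i, j in zigzag)
    return np.array(feat)                       # 12-dim color feature vector
```

On a uniform image only the DC coefficient of each channel is nonzero, matching the intuition in step (2) that most of the energy sits in the low frequencies.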
Step 3-2: The texture features are extracted with the wavelet method. For an arbitrary function f(x) ∈ L²(R), its continuous wavelet transform is defined as:
W_f(a, b) = |a|^{-1/2} ∫ f(x) ψ*((x − b)/a) dx   (3-1)
where ψ is the chosen window (mother wavelet) function, a determines the window width, b is the window center, and a, b ∈ R with a ≠ 0. In actual numerical computation and practical application, a and b are usually taken as:
a = a_0^m,  b = n·b_0·a_0^m,  m, n ∈ Z   (3-2)
The discrete form of the wavelet transform is then:
W_f(m, n) = a_0^{-m/2} ∫ f(x) ψ*(a_0^{-m} x − n·b_0) dx   (3-3)
where a_0 > 1 and b_0 > 0 may be chosen freely (taking a_0 = 2, b_0 = 1 gives the dyadic wavelets ψ_{m,n}(x) = 2^{-m/2} ψ(2^{-m} x − n)), and ψ_{m,n} is the window function in discrete form. The texture features are extracted with the wavelet transform; the principles of the color layout, of the wavelet transform, and of the extraction of the other features are not repeated here.
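As an illustration of the discrete dyadic case (a_0 = 2, b_0 = 1), texture features can be taken as sub-band energies of a Haar decomposition. The Haar filter, the mean-absolute-value statistic, and the function name are assumptions made for this sketch; the patent does not fix a particular mother wavelet.

```python
import numpy as np

def haar_texture_features(img, levels=2):
    """Sketch of step 3-2: texture features as sub-band energies of a
    2-D Haar wavelet decomposition.

    img : 2-D grayscale region (even side lengths assumed).
    Returns the mean absolute value of the LH, HL and HH detail
    sub-bands at each level, plus the final LL approximation.
    """
    feats = []
    a = img.astype(float)
    for _ in range(levels):
        # pairwise Haar analysis along columns, then along rows
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
        LL = (lo[0::2] + lo[1::2]) / 2.0
        LH = (lo[0::2] - lo[1::2]) / 2.0
        HL = (hi[0::2] + hi[1::2]) / 2.0
        HH = (hi[0::2] - hi[1::2]) / 2.0
        feats += [np.abs(LH).mean(), np.abs(HL).mean(), np.abs(HH).mean()]
        a = LL                                  # recurse on the approximation
    feats.append(np.abs(a).mean())
    return np.array(feats)
```

A textureless (constant) region yields zero detail energy at every level, while edges and texture show up as nonzero LH/HL/HH energies.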
Step 4: The image features of the original image and of the initial retrieval result are compared and ranked with the similarity-fusion method;
Step 4-1: With an image-similarity calculation method, the salient-region image features extracted above are assembled into a single feature vector representing each image. For the query, using the image information, the initial result returns the images {I_1, I_2, …, I_N}; the matrix S = [S_ij]_{N×N} represents the similarities between these images obtained from the visual features, where S_ij is the similarity score of image j with image i, and I is the query-image result.
According to the distribution of the multi-modal image features, an image-similarity calculation method and a parameter-optimization method are used (for clarity: "multi-modal" means multiple influencing elements, here multiple features). An improved Mahalanobis distance is employed, calculated as:
W_{k,ij} = exp( −(x_{k,i} − x_{k,j})^T M_k (x_{k,i} − x_{k,j}) )   (4-1)
where W_{k,ij} is the similarity between the i-th and j-th images under the k-th modality, x_{k,i} and x_{k,j} are the features of the i-th and j-th images under the k-th modality, and M_k is the symmetric positive-semidefinite matrix corresponding to the k-th modality. Decomposing the matrix M_k simplifies the Mahalanobis distance and makes it easy to compute, specifically:
M_k = A_k^T A_k   (4-2)
which gives:
W_{k,ij} = exp( −||A_k(x_{k,i} − x_{k,j})||² )   (4-3)
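Formula (4-3) can be sketched directly; the function name `modal_similarity` and the use of the identity metric in the example are illustrative assumptions:

```python
import numpy as np

def modal_similarity(X, A):
    """Sketch of step 4-1: similarity matrix W_k under one modality via
    the decomposed Mahalanobis metric M_k = A_k^T A_k (formula (4-3)).

    X : N x d feature matrix, one row per image under modality k.
    A : factor of the metric; A = I gives a plain Gaussian of the
        Euclidean distance.
    """
    Y = X @ A.T                      # map features into the metric space
    diff = Y[:, None, :] - Y[None, :, :]
    sq = (diff ** 2).sum(axis=2)     # ||A (x_i - x_j)||^2 for all pairs
    return np.exp(-sq)               # W_{k,ij}
```

The resulting matrix is symmetric with unit diagonal, and closer image pairs receive higher similarity scores, which is what the matrix-form comparison in step 4.1 relies on.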
The Bayesian reranking process can be expressed as:
γ* = argmax_γ p(γ | ρ, γ̄) = argmax_γ p(γ̄ | γ, ρ) · p(γ | ρ)   (4-4)
The score list γ is regarded as a random variable; given the sample set ρ and the score list γ̄ of the initial image ranking, the optimal ranking score list γ* with maximum a posteriori probability is obtained, where p(γ̄ | γ, ρ) is the conditional likelihood and p(γ | ρ) is the prior. Under the visual-consistency assumption, two images that look essentially similar should occupy adjacent positions in the returned result list. The likelihood p(γ̄ | γ, ρ) may be defined as:
p(γ̄ | γ, ρ) = (1/Z_1) · exp( −Dist(γ, γ̄)/ε )   (4-5)
where Z_1 = Σ_γ exp(−Dist(γ, γ̄)/ε), Dist(γ, γ̄) is a penalty term measuring the difference of the two score lists, and ε is the penalty factor.
The conditional probability p(γ | ρ) can be defined as:
p(γ | ρ) = (1/Z_2) · exp( −R(γ, ρ) )   (4-6)
where Z_2 = Σ_γ exp(−R(γ, ρ)) and R(γ, ρ) is a regularization term expressing that the ranking scores of similar images should be close, i.e. the consistency of the image ranking scores.
Combining the two gives:
γ* = argmin_γ { Dist(γ, γ̄)/ε + R(γ, ρ) }   (4-7)
which is exactly the objective function of re-ranking the image search results.
For the objective function of the reranking problem, the Laplacian regularization of the graph may be defined as:
R(γ, ρ) = Σ_k α_k^τ · Σ_{i,j} W_{k,ij} ( γ_i/√d_{k,ii} − γ_j/√d_{k,jj} )²,  subject to Σ_k α_k = 1   (4-8)
where γ_i represents the score of the i-th image, d_{k,ii} represents the sum of the i-th row of the similarity matrix W_k under the k-th modality, α = [α_1, α_2, …, α_K] are the weights of the modalities, and τ is the adjustment parameter corresponding to the modality weights. Using the idea of alternating optimization, γ, A_k and α are updated iteratively. The objective function can then be converted into a pairwise update: two weights α_i and α_j are randomly selected from the K weights and updated while the remaining K − 2 weights are fixed. Since Σ_k α_k = 1, the value of α_i + α_j always remains constant.
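A hedged reading of the regularized objective: with the modality weights held fixed, minimizing Dist(γ, γ̄)/ε + R(γ, ρ) with the normalized graph Laplacian is a quadratic problem with a closed-form solution, sketched below. The patent instead alternates updates of γ, A_k and α; the closed form here is only an illustration of the γ-step, and the function name and the choice eps = ε are assumptions.

```python
import numpy as np

def rerank_scores(W_list, alpha, gamma_bar, eps=1.0):
    """Sketch of the gamma update in step 4: graph-Laplacian smoothing
    of the initial score list gamma_bar.

    Minimizes ||gamma - gamma_bar||^2 / eps + sum_k alpha_k g^T L_k g,
    where L_k = I - D_k^{-1/2} W_k D_k^{-1/2} is the normalized
    Laplacian of the modality similarity matrix W_k.
    """
    N = len(gamma_bar)
    L = np.zeros((N, N))
    for W, a in zip(W_list, alpha):
        d = W.sum(axis=1)                        # row sums d_{k,ii}
        Dm = np.diag(1.0 / np.sqrt(d))
        L += a * (np.eye(N) - Dm @ W @ Dm)       # weighted Laplacian
    # stationarity: (I + eps * L) gamma = gamma_bar
    return np.linalg.solve(np.eye(N) + eps * L, gamma_bar)
```

Similar images are pulled toward each other's scores, while an image with no similar neighbors keeps its initial score, which matches the consistency role of R(γ, ρ).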
Step 4-2: According to the fused similarity of each image, the ranking is iterated several times. For each image of the initial retrieval result, the fused similarity Σ_{k∈N} S(i, k), N = {1, 2, …, n} (n being the number of images), is computed, and the images are ranked by comparing the fused similarities of the original image and of the retrieval-result images. The top 90% of the previous ranking is then taken and ranked again with a new similarity matrix; the ranking procedure is repeated, and terminates after a specified number of steps. In each cycle, images unrelated to the original image fall to the rear of the ranking; the last 10% of the images of each round are regarded as noise unrelated to the original image and are not processed again, and this procedure is repeated for the images ranked at the front.
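The iterate-and-discard loop of step 4-2 can be sketched with a plain Euclidean ranking standing in for the fused similarity; the function name, the distance choice, and the step count are assumptions made for the sketch:

```python
import numpy as np

def iterative_rerank(features, query_feat, steps=3, keep=0.9):
    """Sketch of step 4-2: repeatedly rank images by similarity to the
    query and discard the bottom 10% of each round as noise.

    features   : N x d feature matrix of the initial retrieval result.
    query_feat : d-vector of the original (query) image.
    Returns indices of the surviving images, most similar first.
    """
    order = np.arange(len(features))
    for _ in range(steps):
        if len(order) < 2:
            break
        dist = np.linalg.norm(features[order] - query_feat, axis=1)
        order = order[np.argsort(dist)]                 # most similar first
        order = order[: max(1, int(keep * len(order)))] # drop the last 10%
    return order
```

Discarded images are never reconsidered, mirroring the description that the bottom images of each round are treated as noise and not reprocessed.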
Embodiment 1
An image re-ranking method based on salient regions and similarity fusion, comprising:
1. An original image is retrieved in a search engine such as Baidu, giving an initial retrieval result; a number of disordered images related to the original image now appear.
2. A grayscale saliency map is computed for each image, giving the significance C_{i,j} of each salient region; the final saliency value P_k of each image is compared with the threshold T, and regions whose contrast exceeds 10% of the maximum are extracted as the salient region.
3. Each salient region contained in the images and in the original image is transformed to the same size and shape through translation, shear, scaling and rotation, so that the subsequent feature extraction is consistent and the fused similarities can be compared.
4. Color and texture features are extracted from each normalized salient region.
5. Each extracted feature serves as a basic element; using the Mahalanobis distance, each feature element is computed into the matrix S = [S_ij]_{N×N}, where a matrix-form similarity comparison is carried out, so that results more similar to the salient region of the original image are ranked at the front.
6. The top 90% of the ranking obtained in the previous step is taken and the similarity-fusion comparison is carried out again; after several iterations of comparison, images of higher similarity are more accurately ranked at the front.

Claims (5)

  1. An image re-ranking method based on salient regions and similarity fusion, characterised by comprising the following steps:
    Step 1: The original image is retrieved in a search engine, giving an initial retrieval result in which disordered images related to the original image appear; a grayscale saliency map is computed for the original image and for the images of the initial retrieval result, giving the significance C_{i,j} of each salient region; the final saliency value P_k of each image is compared with a threshold T, and the regions whose contrast exceeds 10% of the maximum are extracted as the salient region;
    Step 2: The shape and size of the salient regions of the original image and of the images of the initial retrieval result are normalized, to facilitate feature extraction from the salient region;
    Step 3: Color and texture features are extracted from each normalized salient region;
    Step 4: The image features of the original image and of the initial retrieval result are compared and ranked with the similarity-fusion method.
  2. The image re-ranking method based on salient regions and similarity fusion of claim 1, characterised in that, in step 1, the significance of a region is obtained from the distance between the mean feature vector of the pixels of an image region and the feature vector of its neighborhood.
  3. The image re-ranking method based on salient regions and similarity fusion of claim 1, characterised in that step 4 comprises the following steps:
    4.1: Each extracted feature serves as a basic element; using the Mahalanobis distance, each feature element is computed into the matrix S = [S_ij]_{N×N}, where a matrix-form similarity comparison is carried out, so that results more similar to the salient region of the original image are ranked at the front;
    4.2: The top 90% of the ranking obtained in the previous step is taken and the similarity-fusion comparison is carried out again; after several iterations of comparison, images of higher similarity are more accurately ranked at the front.
CN201710721851.9A 2017-08-22 2017-08-22 An image re-ranking method based on salient regions and similarity fusion Pending CN107506429A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710721851.9A CN107506429A (en) 2017-08-22 2017-08-22 An image re-ranking method based on salient regions and similarity fusion


Publications (1)

Publication Number Publication Date
CN107506429A true CN107506429A (en) 2017-12-22

Family

ID=60692536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710721851.9A Pending CN107506429A (en) 2017-08-22 2017-08-22 A kind of image rearrangement sequence method integrated based on marking area and similitude

Country Status (1)

Country Link
CN (1) CN107506429A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080001950A1 (en) * 2006-06-30 2008-01-03 Microsoft Corporation Producing animated scenes from still images
CN102214298A (en) * 2011-06-20 2011-10-12 复旦大学 Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism
CN104090972A (en) * 2014-07-18 2014-10-08 北京师范大学 Image feature extraction and similarity measurement method used for three-dimensional city model retrieval
CN104598908A (en) * 2014-09-26 2015-05-06 浙江理工大学 Method for recognizing diseases of crop leaves
CN105426529A (en) * 2015-12-15 2016-03-23 中南大学 Image retrieval method and system based on user search intention positioning
CN106708943A (en) * 2016-11-22 2017-05-24 安徽睿极智能科技有限公司 Image retrieval reordering method and system based on arrangement fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
林彬 (LIN Bin): "Research on adaptive algorithms for image re-ranking and improvement of the greedy selection method", China Master's Theses Full-text Database, Information Science and Technology *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination