Summary of the invention
The objective of the invention is to overcome the deficiency of existing methods, which add banner advertisements to the network images a user browses without any targeting toward that user, by proposing a banner advertisement adding method guided by the user and by the content of the network images the user browses.
To achieve the above objective, the present invention adopts the following technical scheme:
A method of adding a banner advertisement into a network image browsed by a user comprises the following steps:
First, the visual similar image retrieval unit 20 is executed on the network image unit 10 browsed by the user, where the network image unit 10 comprises an image picture 100, an image text 110 and user ID information 120; the visual similar image retrieval unit 20 performs similar-image determination on the image picture 100 in the network image unit 10 browsed by the user and obtains a user-ID-associated image visual similarity ranking 220. The user interest descriptor ranking unit 30 is then executed according to the user-ID-associated image visual similarity ranking 220, and user interest information is obtained according to different time constraint information. Next, the advertisement ranking and selection unit 40 is executed, that is, the advertisements in the banner advertisement library are ranked and selected according to their relevance to the result of the user interest descriptor ranking. Next, the advertisement position selection and linking unit 50 is executed: according to the result of the preceding advertisement ranking, the visual similarity between each banner advertisement and the present image is calculated, the position at which the banner advertisement is inserted is determined, and a hyperlink describing the more detailed content of this advertisement is added at the corresponding advertisement insertion position. Finally, the result is displayed in the inserted-advertisement result display unit 60.
In the above scheme, the visual similar image retrieval unit 20 comprises the following concrete steps. First, the visual feature extraction step 101 is performed on the image picture 100 to extract the color, texture and edge features of the image; next, the visual feature quantization step 102 is performed, in which the corresponding color feature, texture feature and edge feature are each quantized with the K-means clustering method. While the visual feature extraction step 101 and the visual feature quantization step 102 are carried out, indexes are built for the image texts in the downloaded network images and image texts 200, generating a network image text information index database 201 based on user ID information; the downloaded images in 200 are then put through the visual feature extraction step 101 and the feature quantization step 102 in turn, yielding a network image visual feature index database 202 based on user ID information. Then the TF-IDF based visual similarity measurement step 210 is carried out between the quantized visual features obtained in the feature quantization step 102 and the image visual feature indexes of the same user in the network image visual feature index database 202 based on user ID information; the similarity calculation yields a visual similarity score between each image in the database 202 and the user's image picture 100. Finally, the above visual similarity scores are sorted to obtain the user-ID-associated image visual similarity ranking 220.
In the step of extracting the color, texture and edge features of the image, the color feature is extracted by dividing the original image into 5x5 = 25 equal-size image blocks and extracting a 9-dimensional color moment feature from each block, so that the dimension of the color feature is 225. The texture feature is extracted with the hierarchical (scalable) wavelet packet texture description method, using 'DB2' as the basis function of the wavelet packet transform and dividing the image into 2x2 equal-size blocks plus one centered block of the same size, so that the dimension of the texture feature is 170. The edge feature is a 128-dimensional edge distribution histogram with 16 directions and 8 quantization levels of the gradient.
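As one concrete illustration of the color feature described above, the sketch below divides an image into 5x5 blocks and computes three moments per color channel in each block; the choice of moments (mean, standard deviation, third-order moment) is an assumption the text does not spell out.

```python
# Minimal sketch of the 225-dimensional color moment feature (5x5 blocks x 9 moments),
# assuming an RGB image stored as a NumPy array with values in [0, 1].
import numpy as np

def color_moment_feature(image):
    """image: HxWx3 float array; returns a 225-dimensional feature vector."""
    h, w, _ = image.shape
    feats = []
    for r in range(5):
        for c in range(5):
            block = image[r * h // 5:(r + 1) * h // 5, c * w // 5:(c + 1) * w // 5]
            for ch in range(3):
                pixels = block[:, :, ch].ravel()
                mean = pixels.mean()
                std = pixels.std()
                # third-order moment; np.cbrt keeps the sign of the skew
                skew = np.cbrt(((pixels - mean) ** 3).mean())
                feats.extend([mean, std, skew])
    return np.asarray(feats)  # shape (225,)
```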
The user interest descriptor ranking unit 30 comprises the following concrete steps. According to the user-ID-associated image visual similarity ranking 220, the extract-user-image-text step is executed to obtain the text information 310 of the images visually similar to the image currently browsed; next, step 320 is executed to obtain user interest information according to different time constraint information, where step 320 comprises a global-time-constraint user interest acquisition method 321 and a recent-time-constraint user interest acquisition method 322, one of which is selected (a sketch of the two options is given after this paragraph). Finally, the visual-similarity-weighted user interest ranking step 330 is executed; this step describes the user's interest according to the text information of the relevant images from step 321 or step 322.
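A minimal sketch of the two time-constraint options of step 320, assuming each relevant image carries a browsing timestamp (a hypothetical field); method 321 keeps all relevant images, while method 322 keeps only those within a recent window whose length is also an assumption.

```python
# Method 321 (global time constraint) versus method 322 (recent time constraint).
from datetime import datetime, timedelta

def select_relevant_images(ranked_images, mode="global", window=timedelta(days=7)):
    """ranked_images: list of dicts like {'text': [...], 'score': S(i), 'time': datetime}."""
    if mode == "global":                       # method 321: use all relevant images
        return ranked_images
    cutoff = datetime.now() - window           # method 322: keep only recent images
    return [img for img in ranked_images if img["time"] >= cutoff]
```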
Compared with existing network icon adding methods, the beneficial effect of the method provided in the present invention for adding a banner advertisement into the network image browsed by the user is that the added advertisement is targeted to the user.
Embodiment
Fig. 1 gives a schematic diagram of the general steps of the method of adding a banner advertisement into the image browsed by the user in the present invention. It comprises the network image unit 10 browsed by the Internet user; the visual similar image retrieval unit 20 executed on the network image unit 10 browsed by the user; the user interest descriptor ranking unit 30 executed next; then the advertisement ranking and selection unit 40; then the advertisement position selection and linking unit 50; finally, the result is displayed in the inserted-advertisement result display unit 60.
In the present invention, the network image unit 10 browsed by the user comprises the image picture 100, the image text 110 and the user ID information 120. The visual similar image retrieval unit 20 of the present invention performs similar-image determination on the image picture 100 in the network image unit 10 browsed by the user. Fig. 2 gives, as an example, the flow block diagram of the visual similar image detection performed on the network image unit 10 browsed by the user. First, the visual feature extraction step 101 is performed on the image picture 100 to extract the color, texture and edge features of the image. The color feature is extracted by dividing the original image into 5x5 = 25 equal-size image blocks and extracting a 9-dimensional color moment feature from each block, so that the dimension of the color feature is 225. The texture feature is described with the hierarchical (scalable) wavelet packet texture description method, using 'DB2' as the basis function of the wavelet packet transform and dividing the image into 2x2 equal-size blocks plus one centered block of the same size, so that the dimension of the texture feature is 170 (for the related technique see the published paper: X. Qian, G. Liu, D. Guo, Z. Li, Z. Wang, and H. Wang, "Object Categorization using Hierarchical Wavelet Packet Texture Descriptors," in Proc. ISM 2009, pp. 44-51). The edge feature is a 128-dimensional edge distribution histogram with 16 directions and 8 quantization levels of the gradient.
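The following sketch illustrates one possible form of the 128-dimensional edge distribution histogram (16 directions x 8 gradient-magnitude levels); the gradient operator and the binning details are assumptions, since the text fixes only the dimensions.

```python
# Minimal sketch of a 128-bin edge distribution histogram for a gray-level image.
import numpy as np

def edge_histogram(gray):
    """gray: HxW float array in [0, 1]; returns a normalized 128-dimensional histogram."""
    gy, gx = np.gradient(gray)                               # simple finite-difference gradient
    magnitude = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)            # edge direction in [0, 2*pi)
    dir_bin = np.minimum((angle / (2 * np.pi) * 16).astype(int), 15)
    mag_max = magnitude.max() + 1e-12
    mag_bin = np.minimum((magnitude / mag_max * 8).astype(int), 7)
    hist = np.zeros((16, 8))
    np.add.at(hist, (dir_bin.ravel(), mag_bin.ravel()), 1)   # accumulate direction x magnitude bins
    return (hist / hist.sum()).ravel()                       # shape (128,)
```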
Next, the visual feature quantization step 102 is performed: after feature extraction, the corresponding color moment feature, wavelet packet texture feature and edge feature are each quantized with the K-means clustering method, with codebook sizes of 50000, 10000 and 50000, respectively. The codebook sizes can be changed as required in practice; the present invention suggests codebook sizes above 10000.
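A minimal sketch of the quantization step 102, here using scikit-learn's MiniBatchKMeans as one possible K-means implementation; only the codebook sizes follow the embodiment, while the handling of training samples is an assumption.

```python
# Build a visual-word codebook per feature type and map descriptors to word indices.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(features, n_words):
    """features: (num_samples, dim) array of descriptors pooled from many images."""
    km = MiniBatchKMeans(n_clusters=n_words, batch_size=10000, n_init=3)
    km.fit(features)
    return km

def quantize(codebook, feature_vector):
    """Map one descriptor to its visual-word index."""
    return int(codebook.predict(feature_vector.reshape(1, -1))[0])

# Example codebook sizes from the embodiment:
# color_codebook   = build_codebook(all_color_feats, 50000)
# texture_codebook = build_codebook(all_texture_feats, 10000)
# edge_codebook    = build_codebook(all_edge_feats, 50000)
```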
In the present invention, the similar images come from the downloaded network images and image texts unit 200 (this unit contains image data downloaded from websites such as Bing, Flickr and Google, together with the label text information of every image). Indexes are first built for the image texts in unit 200, generating the network image text information index database 201 based on user ID information. The images in unit 200 are then put through the visual feature extraction step 101 and the feature quantization step 102 in turn to obtain the network image visual feature index database 202 based on user ID information. Then the TF-IDF based visual similarity measurement step 210 is carried out between the quantized visual features from step 102 and the image visual feature indexes of the same user in the network image visual feature index database 202 based on user ID information; the similarity calculation yields the visual similarity score between each image in 202 and the user's image picture 100. Suppose the number of images for the current user is N; then the visual similarity score of any image i is S(i), i = 1~N, S(i) ∈ [0, 1]. The similarity calculation is carried out with the TF-IDF criterion (the TF-IDF method is a known method in this field). Finally, the above visual similarity scores are sorted (the images are arranged from the highest score to the lowest) to obtain the user-ID-associated image visual similarity ranking 220.
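The following sketch shows one way the TF-IDF based similarity of step 210 can be computed over quantized visual words; the cosine measure and the particular IDF form are assumptions, as the text fixes only the TF-IDF criterion.

```python
# Bag-of-visual-words TF-IDF similarity between the query image and each indexed image.
import math
from collections import Counter

def tfidf_vector(word_ids, doc_freq, num_images):
    """word_ids: list of quantized visual words for one image; doc_freq: word -> #images containing it."""
    counts = Counter(word_ids)
    total = sum(counts.values())
    return {w: (c / total) * math.log(1 + num_images / (1 + doc_freq.get(w, 0)))
            for w, c in counts.items()}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_scores(query_words, indexed_images, doc_freq, num_images):
    """indexed_images: {image_id: list of visual-word ids}; returns {image_id: S(i) in [0, 1]}."""
    q = tfidf_vector(query_words, doc_freq, num_images)
    return {img: cosine(q, tfidf_vector(words, doc_freq, num_images))
            for img, words in indexed_images.items()}
```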
In the user interest descriptor ranking unit 30 of Fig. 1, the user's interests are ranked. The concrete steps of the user interest ranking method are shown in Fig. 3. According to the similar image ranking result 220 of unit 20, the extract-user-image-text step is carried out to obtain the text information 310 of the images visually similar to the image currently browsed by the user; next, step 320 is executed to obtain user interest information according to different time constraint information. Step 320 comprises one of the global-time-constraint user interest acquisition method 321 and the recent-time-constraint user interest acquisition method 322. Method 321 uses the text information of all images relevant to the user for interest acquisition; method 322 restricts the user's interest to the current time period. Finally, the visual-similarity-weighted user interest ranking step 330 is executed, which describes the user's interest according to the text information of the relevant images from step 321 or step 322. Suppose the number of relevant images is M, the visual similarity score of image i is S(i), i = 1~M, S(i) ∈ [0, 1], and image i contains Z_i descriptive text words. Suppose these images contain K distinct words altogether, denoted t_1~t_K; word t_k occurs c times, and the similarity scores of the images in which these occurrences appear are s_1~s_c. The corresponding user interest degree I_k is then:

I_k = s_1 + s_2 + ... + s_c

The final user interest description adopts the normalized interest degree:

I_k' = I_k / (I_1 + I_2 + ... + I_K), I_k' ∈ [0, 1]
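A minimal sketch of step 330 following the formulas above; whether repeated occurrences of a word inside one image contribute separately is not fixed by the text, so here each word is counted once per image.

```python
# Visual-similarity-weighted interest degrees I_k, normalized over all words.
def interest_degrees(relevant_images):
    """relevant_images: list of dicts {'text': [words...], 'score': S(i) in [0, 1]}."""
    raw = {}
    for img in relevant_images:
        for word in set(img["text"]):          # each word counted once per image
            raw[word] = raw.get(word, 0.0) + img["score"]
    total = sum(raw.values()) or 1.0
    return {word: value / total for word, value in raw.items()}  # normalized interest degrees

# Usage together with the time-constraint selection of step 320:
# interests = interest_degrees(select_relevant_images(ranked_images, mode="recent"))
# ranked_terms = sorted(interests.items(), key=lambda kv: kv[1], reverse=True)
```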
In the advertisement ranking and advertisement selection unit 40 of Fig. 1, the advertisements in the banner advertisement library are ranked and selected according to their relevance to the result of the user interest descriptor ranking obtained in unit 30. The similarity measurement method used in the advertisement matching adopts the method of the existing open literature (T. Mei, X.-S. Hua, and S. Li, "Contextual in-image advertising," in Proc. ACM Multimedia, Vancouver, Canada, 2008, pp. 439-448). After the calculation, the relevance score U(a_i) between each advertisement a_i and the user is obtained.
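The relevance score U(a_i) itself is computed with the method of Mei et al. cited above; the sketch below is only a simple stand-in showing where the score fits in the pipeline, summing the user's normalized interest degrees over advertisement keywords. The keyword field and the banner library structure are hypothetical.

```python
# Stand-in for the user relevance score U(a_i); not the Mei et al. matching method itself.
def user_relevance(ad_keywords, interests):
    """ad_keywords: list of words describing advertisement a_i; interests: output of interest_degrees()."""
    return sum(interests.get(word, 0.0) for word in set(ad_keywords))

# U = {ad_id: user_relevance(keywords, interests) for ad_id, keywords in banner_library.items()}
```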
In the advertisement position selection and linking unit 50 of Fig. 1, the visual similarity between each banner advertisement and the present image is calculated according to the advertisement ranking result obtained in unit 40. In the similarity calculation, the color correlation of the image is taken as the measurement criterion. The insertion-position selection method is executed as follows: the image is first divided into 5x5 equal-size blocks, and each block is then scored for texture complexity and content importance so as to find the position P(x, y, z) most suitable for adding the icon, where x and y denote the coordinates and z denotes the corresponding number of color channels (z = 3 for a color image, z = 1 for a gray-level image). The concrete method may adopt the method published in the existing literature (T. Mei, X.-S. Hua, and S. Li, "Contextual in-image advertising," in Proc. ACM Multimedia, Vancouver, Canada, 2008, pp. 439-448). After the insertion position of the advertisement is determined, the color difference between the advertisement icon and the corresponding insertion position is taken as the visual similarity measurement criterion, and after the calculation the visual similarity score V(a_i) between each advertisement a_i and the image picture 100 currently browsed by the user is obtained:

V(a_i) = exp(-D(a_i))

where D(a_i) denotes the visual difference between advertisement a_i and the local region at the most suitable insertion position P(x, y, z) of the image picture currently browsed by the user.
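Since the exact expression of D(a_i) is not given here, the sketch below assumes it to be the Euclidean distance between the mean colors of the advertisement icon and of the block at the chosen insertion position; this is only one plausible color-difference measure.

```python
# V(a_i) = exp(-D(a_i)) with an assumed mean-color Euclidean distance as D(a_i).
import numpy as np

def visual_similarity(ad_icon, insertion_block):
    """ad_icon, insertion_block: HxWxz float arrays in [0, 1] (z = 3 for color, 1 for gray)."""
    d = np.linalg.norm(ad_icon.reshape(-1, ad_icon.shape[-1]).mean(axis=0)
                       - insertion_block.reshape(-1, insertion_block.shape[-1]).mean(axis=0))
    return float(np.exp(-d))    # V(a_i) lies in (0, 1]
```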
In addition, the similarity T(a_i) between advertisement a_i and the text 110 of the network image currently browsed by the user is also taken into account in the measurement, where the computation method of T(a_i) is identical to that of U(a_i) and is not repeated here.
The final advertisement selection score F(a_i) is the weighted sum of the user relevance score U(a_i), the visual similarity score V(a_i) and the user text relevance T(a_i):

F(a_i) = α*U(a_i) + β*V(a_i) + γ*T(a_i)

where α, β, γ ∈ [0, 1] are weighting coefficients with

α + β + γ = 1

and in this embodiment

α = 0.7, β = 0.1, γ = 0.2
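A minimal sketch of the final weighted score with the coefficients of this embodiment; the dictionaries U, V and T are assumed to map each advertisement identifier to its previously computed scores.

```python
# F(a_i) = alpha*U(a_i) + beta*V(a_i) + gamma*T(a_i) with the embodiment's weights.
def final_scores(U, V, T, alpha=0.7, beta=0.1, gamma=0.2):
    return {ad: alpha * U[ad] + beta * V[ad] + gamma * T[ad] for ad in U}

# scores = final_scores(U, V, T)
# best_ad = max(scores, key=scores.get)   # advertisement chosen for insertion in unit 50
```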
Then the advertisement with the highest score is picked out as the final advertisement to be inserted, and a hyperlink describing the more detailed content related to this advertisement is added at the corresponding advertisement insertion position.
In the present invention, several candidate advertisements may be selected in the advertisement ranking and advertisement selection unit 40 as the input of the advertisement position selection and linking unit 50; in this way, the complexity of the system processing can be effectively reduced.
The final image with the inserted advertisement is displayed in unit 60.