CN104992347A - Video matching advertisement method and device - Google Patents

Video matching advertisement method and device

Info

Publication number
CN104992347A
CN104992347A (application CN201510338003.0A)
Authority
CN
China
Prior art keywords
sample
video
segmented word
advertisement
vector
Prior art date
Legal status
Granted
Application number
CN201510338003.0A
Other languages
Chinese (zh)
Other versions
CN104992347B (en)
Inventor
童明
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201510338003.0A
Publication of CN104992347A
Application granted
Publication of CN104992347B
Legal status: Active


Abstract

An embodiment of the invention provides a video-advertisement matching method and device. The method comprises the following steps: the video description of a video to be matched with an advertisement is obtained, and the advertisement description of a candidate advertisement is obtained from an advertisement library; word segmentation is performed on the video description and the advertisement description; the resulting video description segmented words and advertisement description segmented words are input into a pre-built video-advertisement matching degree prediction model, which obtains their distributed feature vectors; the distributed feature vectors are input into a multilayer convolutional neural network in the model, to obtain a matching value between the video and the candidate advertisement; and, if the matching value is greater than a preset matching degree threshold, the video matches the candidate advertisement. Applying this embodiment avoids the defect that a video cannot be matched with relevant advertisements, and improves the recall rate of video-advertisement matching.

Description

Method and device for matching advertisements to videos
Technical field
The present invention relates to Internet and pattern recognition technology, and in particular to a method and device for matching advertisements to videos.
Background technology
At present, online advertising has developed rapidly as one of the most profitable business models on the Internet. In the advertisement delivery process, advertisements matched to different videos need to be delivered to users; that is, personalized advertisement delivery is performed for different videos, which can significantly improve the economic benefit of merchants.
In the prior art, advertisements are matched to videos by semantic matching based on word overlap between the advertisement description and the video description.
However, when editing a video, a video website usually only adds a description of the program content, whereas most advertisement descriptions focus on the product information presented in the advertisement. When no word in the video description overlaps with a relevant advertisement description, that advertisement will not be delivered for that video. For example, if the video description of video A contains "i Phone" while the description of advertisement a contains "iphone" but not "i Phone", video A will not be matched with advertisement a. As a result, a great many videos cannot be matched with relevant advertisements, and the recall rate (or recall ratio) of video-advertisement matching is low.
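A minimal sketch of the word-overlap matching described above, showing how it fails on the "i Phone"/"iphone" example; the function name and the tokenization by whitespace are illustrative, not from the patent:

```python
def word_overlap_match(video_desc: str, ad_desc: str) -> bool:
    """Match iff the two descriptions share at least one word (prior-art scheme)."""
    video_words = set(video_desc.lower().split())
    ad_words = set(ad_desc.lower().split())
    return len(video_words & ad_words) > 0

# "i Phone" and "iphone" share no token, so the ad is never delivered:
print(word_overlap_match("unboxing the new i Phone", "buy an iphone today"))  # False
# Only an exact surface-form overlap matches:
print(word_overlap_match("apple iphone review", "best iphone deal"))  # True
```

This brittleness under surface-form variation is the defect the distributed-feature model below is meant to avoid.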
Summary of the invention
The object of the embodiments of the present invention is to provide a method and device for matching advertisements to videos, so as to improve the recall rate of video-advertisement matching.
To achieve the above object, an embodiment of the invention discloses a method of matching advertisements to videos, the method comprising:
obtaining the video description of a video to be matched with an advertisement, and obtaining the advertisement description of a candidate advertisement from an advertisement library;
performing word segmentation on the video description and the advertisement description according to a preset rule, to obtain video description segmented words and advertisement description segmented words;
inputting the video description segmented words and the advertisement description segmented words into a pre-built video-advertisement matching degree prediction model;
the model obtaining, according to a correspondence between segmented words and distributed feature vectors, the distributed feature vectors of the video description segmented words and the advertisement description segmented words; the correspondence being obtained by training on video descriptions, advertisement descriptions and an external corpus;
the model inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into a multilayer convolutional neural network in the model, to obtain the matching value between the video to be matched and the candidate advertisement; the multilayer convolutional neural network being trained according to the lift of the advertisement click-through rate;
if the matching value is greater than a preset matching degree threshold, determining that the video to be matched matches the candidate advertisement.
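A hypothetical end-to-end sketch of the claimed flow: segment both descriptions, look up distributed feature vectors, score the pair with the matching model, and compare against the preset threshold. `segment`, `EMBEDDINGS`, `DummyModel` and the threshold value are all illustrative stand-ins, not from the patent:

```python
MATCH_THRESHOLD = 0.5  # preset matching degree threshold (illustrative value)

def segment(text):
    # Stand-in for the preset word segmentation rule (e.g. a Chinese tokenizer).
    return text.split()

EMBEDDINGS = {}  # segmented word -> distributed feature vector, trained beforehand

def matches(video_desc, ad_desc, model):
    # Look up the distributed feature vectors of both descriptions' words.
    video_vecs = [EMBEDDINGS.get(w) for w in segment(video_desc)]
    ad_vecs = [EMBEDDINGS.get(w) for w in segment(ad_desc)]
    score = model.predict(video_vecs, ad_vecs)  # matching value from the CNN
    return score > MATCH_THRESHOLD

class DummyModel:
    """Placeholder for the multilayer convolutional neural network."""
    def predict(self, video_vecs, ad_vecs):
        return 0.8  # fixed matching value, for illustration only

print(matches("new phone review", "phone sale", DummyModel()))  # True
```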
Preferably, the step of inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model to obtain the matching value between the video to be matched and the candidate advertisement comprises:
i1: a one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution on the input distributed feature vectors of the video description segmented words and the advertisement description segmented words, obtains their distributed-feature one-dimensional expansion vectors, and outputs them to a first max pooling layer;
i2: the first max pooling layer compresses the input one-dimensional expansion vectors by a down-sampling algorithm, obtains a first max pooling layer two-dimensional vector, and outputs it to a first two-dimensional convolutional neural network layer;
i3: the first two-dimensional convolutional neural network layer performs a two-dimensional convolution on the input two-dimensional vector, obtains multiple two-dimensional vectors of the same dimension as the input vector, applies an activation function to each element of these vectors to obtain the same number of computed two-dimensional vectors, and outputs them to the next connected intermediate max pooling layer;
i4: the next intermediate max pooling layer compresses the input computed two-dimensional vectors by a down-sampling algorithm, obtains intermediate max pooling layer two-dimensional vectors, and outputs them to the next connected intermediate two-dimensional convolutional layer;
i5: the next intermediate two-dimensional convolutional layer performs a two-dimensional convolution on the input intermediate max pooling layer two-dimensional vectors, obtains multiple intermediate two-dimensional vectors of the same dimension as the input, applies the activation function to each element to obtain the same number of computed two-dimensional vectors, and judges whether the computed two-dimensional vectors are 1 × 1 vectors; if so, step i6 is performed; otherwise, they are output to the next connected intermediate max pooling layer and step i4 is returned to;
i6: a one-dimensional target vector is generated from the elements of all the vectors obtained;
i7: a preset algorithm is applied to the target vector to obtain the matching value between the video description and the advertisement description.
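An illustrative numpy sketch of the alternating convolution → activation → max-pooling loop in the steps above, run until the feature map shrinks to 1 × 1, followed by a sigmoid as the "preset algorithm". A single feature map with random kernels stands in for the patent's multiple feature maps and trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(x, k):
    """'Same'-size 2-D convolution: the output keeps the input's dimensions."""
    h, w = x.shape
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def max_pool2(x):
    """2x2 max pooling: the down-sampling compression of steps i2/i4."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def forward(feature_map):
    # First 2-D convolution + ReLU activation (step i3).
    x = np.maximum(0.0, conv2d_same(feature_map, rng.standard_normal((3, 3))))
    while x.shape != (1, 1):  # loop of steps i4-i5, until the map is 1x1
        x = max_pool2(x)
        x = np.maximum(0.0, conv2d_same(x, rng.standard_normal((3, 3))))
    target = x.ravel()  # step i6: one-dimensional target vector
    w = rng.standard_normal(target.shape)
    return 1.0 / (1.0 + np.exp(-w @ target))  # step i7: sigmoid matching value

score = forward(rng.standard_normal((8, 8)))
print(0.0 <= score <= 1.0)  # True: the matching value lies in [0, 1]
```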
Preferably, the correspondence between segmented words and distributed feature vectors is obtained by unsupervised training on the video descriptions, the advertisement descriptions and the external corpus.
Preferably, the unsupervised training process comprises:
a: obtaining a passage of text from the video descriptions, the advertisement descriptions or the external corpus;
b: performing word segmentation on the text to obtain N description segmented words;
c: mapping the N description segmented words to N one-dimensional continuous feature vectors of length m;
d: computing a weighted average of the first N−1 one-dimensional continuous feature vectors to obtain a prediction vector;
e: using the prediction vector to predict the N-th description segmented word, and judging whether the error rate between the predicted word and the actual N-th word is below a preset prediction threshold:
if so, the distributed-feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description segmented words; if not, the N one-dimensional continuous feature vectors are adjusted by back-propagation to obtain new vectors of length m, and steps d and e are repeated.
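A rough sketch of steps a–e above, which resemble a CBOW-style embedding objective: average the first N−1 word vectors into a prediction vector, then nudge the vectors until the prediction error falls below a threshold. The words, vector length m, learning rate and target vector are all illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["the", "new", "phone", "launch", "event"]  # N segmented words
m = 4                                               # vector length m
vecs = {w: 0.1 * rng.standard_normal(m) for w in words}
target = rng.standard_normal(m)  # stand-in for the N-th word's true vector

converged = False
for _ in range(500):
    pred = np.mean([vecs[w] for w in words[:-1]], axis=0)  # step d: averaged prediction
    error = pred - target                                  # step e: prediction error
    if np.linalg.norm(error) < 1e-3:
        converged = True
        break
    for w in words[:-1]:  # back-propagation-style adjustment of the N-1 vectors
        vecs[w] -= 0.5 * error / (len(words) - 1)

print(converged)  # True: the prediction error falls below the preset threshold
```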
Preferably, the step of training the multilayer convolutional neural network according to the lift of the advertisement click-through rate comprises:
f: obtaining, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of a sample video–advertisement pair, together with the advertisement click-through-rate lift L of the pair;
g: performing word segmentation on the sample video description and the sample advertisement description according to the preset rule, to obtain sample video description segmented words and sample advertisement description segmented words;
h: performing distributed-feature training on the sample video description segmented words and the sample advertisement description segmented words, to obtain the sample distributed feature vectors of each, which serve as the input of the multilayer convolutional neural network;
i: the multilayer convolutional neural network training on the input sample distributed feature vectors, to obtain the sample matching value L′ between the sample video description and the sample advertisement description;
j: judging whether the error between L′ and L is below a preset sample training error threshold:
if so, the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each layer of the network are determined; if not, the neuron weights ω in each layer are adjusted by back-propagation, and steps f to j are repeated.
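A minimal sketch of training steps f–j: the network's output L′ is compared with the click-through-rate lift label L, and the weights are adjusted by back-propagation until the error falls below the preset threshold. A single sigmoid neuron stands in for the full multilayer CNN; the feature, label, learning rate and threshold values are illustrative:

```python
import math

def predict(w, x):
    """Stand-in for the CNN's sample matching value L'."""
    return 1.0 / (1.0 + math.exp(-w * x))

x, L = 1.0, 0.8   # sample feature and its click-through-rate lift label L
w = 0.0           # neuron weight omega, to be learned
threshold = 1e-4  # preset sample training error threshold

for _ in range(100000):
    L_prime = predict(w, x)               # step i: forward pass
    if abs(L_prime - L) < threshold:      # step j: error check
        break
    grad = (L_prime - L) * L_prime * (1.0 - L_prime) * x  # sigmoid gradient
    w -= 10.0 * grad                      # back-propagation weight adjustment

print(abs(predict(w, x) - L) < threshold)  # True
```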
Preferably, step i comprises:
i1: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution on the input sample distributed feature vectors of the sample video description segmented words and sample advertisement description segmented words, obtains their sample distributed-feature one-dimensional expansion vectors, and outputs them to the first max pooling layer;
i2: the first max pooling layer compresses the input sample one-dimensional expansion vectors by a down-sampling algorithm, obtains a first max pooling layer sample two-dimensional vector, and outputs it to the first two-dimensional convolutional neural network layer;
i3: the first two-dimensional convolutional neural network layer performs a two-dimensional convolution on the input sample two-dimensional vector, obtains multiple sample two-dimensional vectors of the same dimension as the input vector, applies an activation function to each element of these vectors to obtain the same number of computed sample two-dimensional vectors, and outputs them to the next connected intermediate max pooling layer;
i4: the next intermediate max pooling layer compresses the input computed sample two-dimensional vectors by a down-sampling algorithm, obtains intermediate max pooling layer sample two-dimensional vectors, and outputs them to the next connected intermediate two-dimensional convolutional layer;
i5: the next intermediate two-dimensional convolutional layer performs a two-dimensional convolution on the input intermediate sample two-dimensional vectors, obtains multiple intermediate sample two-dimensional vectors of the same dimension as the input, applies the activation function to each element to obtain the same number of computed sample two-dimensional vectors, and judges whether the computed sample two-dimensional vectors are 1 × 1 vectors; if so, step i6 is performed; otherwise, they are output to the next connected intermediate max pooling layer and step i4 is returned to;
i6: a one-dimensional sample target vector is generated from the elements of all the vectors obtained;
i7: a preset algorithm is applied to the sample target vector to obtain the sample matching value L′ between the sample video description and the sample advertisement description.
Preferably, the lift of the advertisement click-through rate is:
lift = ctr(ad, video) / max(ctr_ad, ctr_video)
where ctr(ad, video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
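A worked example of the lift formula, with made-up click-through rates:

```python
def lift(ctr_ad_video, ctr_ad, ctr_video):
    """lift = ctr(ad, video) / max(ctr_ad, ctr_video)."""
    return ctr_ad_video / max(ctr_ad, ctr_video)

# The ad gets a 4% CTR on this video, against a 2% average for the ad over all
# videos and 1% for all ads on this video: a lift of 2.0 signals a good pair.
print(lift(0.04, 0.02, 0.01))  # 2.0
```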
Preferably, the activation function is the ReLU function:
y_j = max(0, Σ_{i=1}^{n} ω_ij · x_i)
where x_i is an input of the two-dimensional convolutional neural network layer, y_j is an output of that layer, and ω_ij, an element of the neuron weight vector of the layer in which the activation function resides, is the weight connecting input i and output j.
Preferably, the preset algorithm is:
y = 1 / (1 + e^(−ω·x))
where x is the input vector, y is the video-advertisement matching value, and ω is the neuron weight vector of the layer; ω and x have the same dimension.
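The two formulas above, written out directly in pure Python for illustration: a ReLU over a weighted sum, and the sigmoid that maps the target vector to a matching value in (0, 1):

```python
import math

def relu_unit(weights, xs):
    """y_j = max(0, sum_i w_ij * x_i) -- the ReLU activation."""
    return max(0.0, sum(w * x for w, x in zip(weights, xs)))

def matching_value(w, x):
    """y = 1 / (1 + exp(-w . x)); w and x have the same dimension."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-dot))

print(relu_unit([1.0, -2.0], [3.0, 1.0]))      # 1.0 (weighted sum is 1, positive)
print(matching_value([0.0, 0.0], [5.0, 5.0]))  # 0.5 (zero dot product)
```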
To achieve the above object, an embodiment of the invention discloses a device for matching advertisements to videos, the device comprising:
a video/advertisement description obtaining module, configured to obtain the video description of a video to be matched with an advertisement, and obtain the advertisement description of a candidate advertisement from an advertisement library;
a video/advertisement description word segmentation module, configured to perform word segmentation on the video description and the advertisement description according to a preset rule, to obtain video description segmented words and advertisement description segmented words;
a matching degree prediction model input module, configured to input the video description segmented words and the advertisement description segmented words into a pre-built video-advertisement matching degree prediction model;
a distributed feature obtaining module, configured to obtain, in the matching degree prediction model and according to the correspondence between segmented words and distributed feature vectors, the distributed feature vectors of the video description segmented words and the advertisement description segmented words; the correspondence being obtained by training on video descriptions, advertisement descriptions and an external corpus;
a matching degree prediction model output module, configured to input, in the matching degree prediction model, the distributed feature vectors of the video description segmented words and the advertisement description segmented words into a multilayer convolutional neural network in the model, to obtain the matching value between the video to be matched and the candidate advertisement; the multilayer convolutional neural network being trained according to the lift of the advertisement click-through rate;
a matching judgment module, configured to determine, if the matching value is greater than a preset matching degree threshold, that the video to be matched matches the candidate advertisement.
Preferably, the matching degree prediction model output module comprises:
a distributed feature vector input submodule, configured to input the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the one-dimensional convolutional neural network layer of the multilayer convolutional neural network in the model;
a one-dimensional convolutional layer processing submodule, configured to perform a one-dimensional convolution on the distributed feature vectors input to the one-dimensional convolutional layer, obtain the distributed-feature one-dimensional expansion vectors of the video description segmented words and the advertisement description segmented words, and output them to the first max pooling layer;
a first max pooling layer processing submodule, configured to compress the one-dimensional expansion vectors input to the first max pooling layer by a down-sampling algorithm, obtain a first max pooling layer two-dimensional vector, and output it to the first two-dimensional convolutional layer;
a first two-dimensional convolutional layer processing submodule, configured to perform a two-dimensional convolution on the two-dimensional vector input to the first two-dimensional convolutional layer, obtain multiple two-dimensional vectors of the same dimension as the input vector, apply an activation function to each element of these vectors to obtain the same number of computed two-dimensional vectors, and output them to the next connected intermediate max pooling layer;
a next intermediate max pooling layer processing submodule, configured to compress the computed two-dimensional vectors input to the next intermediate max pooling layer by a down-sampling algorithm, obtain intermediate max pooling layer two-dimensional vectors, and output them to the next connected intermediate two-dimensional convolutional layer;
a next intermediate two-dimensional convolutional layer processing submodule, configured to perform a two-dimensional convolution on the intermediate max pooling layer two-dimensional vectors input to the next intermediate two-dimensional convolutional layer, obtain multiple intermediate two-dimensional vectors of the same dimension as the input, apply the activation function to each element to obtain the same number of computed two-dimensional vectors, and judge whether the computed two-dimensional vectors are 1 × 1 vectors; if so, trigger the target vector generation submodule; otherwise, output them to the next connected intermediate max pooling layer and trigger the next intermediate max pooling layer processing submodule and the next intermediate two-dimensional convolutional layer processing submodule in sequence;
a target vector generation submodule, configured to generate a one-dimensional target vector from the elements of all the vectors obtained;
a matching value obtaining submodule, configured to apply a preset algorithm to the target vector to obtain the matching value between the video description and the advertisement description.
Preferably, the correspondence between segmented words and distributed feature vectors is obtained by an unsupervised training module through unsupervised training on the video descriptions, the advertisement descriptions and the external corpus.
Preferably, the unsupervised training module comprises:
a text obtaining submodule, configured to obtain a passage of text from the video descriptions, the advertisement descriptions or the external corpus;
a text word segmentation submodule, configured to perform word segmentation on the text to obtain N description segmented words;
a feature vector mapping submodule, configured to map the N description segmented words to N one-dimensional continuous feature vectors of length m;
a prediction vector obtaining submodule, configured to compute a weighted average of the first N−1 one-dimensional continuous feature vectors to obtain a prediction vector;
a prediction error judging submodule, configured to judge, when the prediction vector is used to predict the N-th description segmented word, whether the error rate between the predicted word and the actual N-th word is below a preset prediction threshold:
if so, the distributed-feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description segmented words; if not, the N one-dimensional continuous feature vectors are adjusted by back-propagation to obtain new vectors of length m, and the prediction vector obtaining submodule and the prediction error judging submodule are triggered in sequence.
Preferably, the multilayer convolutional neural network is trained by a neural network training module according to the lift of the advertisement click-through rate;
the neural network training module comprising:
a sample input submodule, configured to obtain, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of a sample video–advertisement pair, together with the advertisement click-through-rate lift L of the pair;
a sample word segmentation submodule, configured to perform word segmentation on the sample video description and the sample advertisement description according to the preset rule, to obtain sample video description segmented words and sample advertisement description segmented words;
a sample distributed feature vector obtaining submodule, configured to perform distributed-feature training on the sample video description segmented words and the sample advertisement description segmented words, to obtain the sample distributed feature vectors of each, which serve as the input of the multilayer convolutional neural network;
a sample matching degree prediction submodule, configured to have the multilayer convolutional neural network train on the input sample distributed feature vectors, to obtain the sample matching value L′ between the sample video description and the sample advertisement description;
a sample matching degree judging submodule, configured to judge whether the error between L′ and L is below a preset sample training error threshold:
if so, the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each layer of the network are determined; if not, the neuron weights ω in each layer are adjusted by back-propagation, and the submodules from the sample input submodule through the sample matching degree judging submodule are triggered in sequence.
Preferably, the sample matching degree prediction submodule comprises:
a sample distributed-feature one-dimensional expansion unit, configured to have the one-dimensional convolutional layer of the multilayer convolutional neural network perform a one-dimensional convolution on the input sample distributed feature vectors, obtain the sample distributed-feature one-dimensional expansion vectors of the sample video description segmented words and sample advertisement description segmented words, and output them to the sample first max pooling layer;
a sample first max pooling layer processing unit, configured to compress the sample one-dimensional expansion vectors input to the sample first max pooling layer by a down-sampling algorithm, obtain a first pooling layer sample two-dimensional vector, and output it to the sample first two-dimensional convolutional layer;
a sample first two-dimensional convolutional layer processing unit, configured to perform a two-dimensional convolution on the sample two-dimensional vector input to the sample first two-dimensional convolutional layer, obtain multiple sample two-dimensional vectors of the same dimension as the input, apply an activation function to each element of these vectors to obtain the same number of computed sample two-dimensional vectors, and output them to the next connected sample intermediate max pooling layer;
a sample next intermediate max pooling layer processing unit, configured to compress the computed sample two-dimensional vectors input to the sample next intermediate max pooling layer by a down-sampling algorithm, obtain sample intermediate max pooling layer two-dimensional vectors, and output them to the next connected sample intermediate two-dimensional convolutional layer;
a sample next intermediate two-dimensional convolutional layer processing unit, configured to perform a two-dimensional convolution on the sample two-dimensional vectors input to the sample next intermediate two-dimensional convolutional layer, obtain multiple intermediate sample two-dimensional vectors of the same dimension as the input, apply the activation function to each element to obtain the same number of computed intermediate sample two-dimensional vectors, and judge whether the computed sample two-dimensional vectors are 1 × 1 vectors; if so, trigger the sample target vector generation unit; otherwise, output them to the next connected sample intermediate max pooling layer and trigger the sample next intermediate max pooling layer processing unit and sample next intermediate two-dimensional convolutional layer processing unit in sequence;
a sample target vector generation unit, configured to generate a one-dimensional sample target vector from the elements of all the vectors obtained;
a sample matching value obtaining unit, configured to apply a preset algorithm to the sample target vector to obtain the sample matching value L′ between the sample video description and the sample advertisement description.
Preferably, the lift of the advertisement click-through rate is:
lift = ctr(ad, video) / max(ctr_ad, ctr_video)
where ctr(ad, video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
Preferably, the activation function is the ReLU function:
y_j = max(0, Σ_{i=1}^{n} ω_ij · x_i)
where x_i is an input of the two-dimensional convolutional neural network layer, y_j is an output of that layer, and ω_ij, an element of the neuron weight vector of the layer in which the activation function resides, is the weight connecting input i and output j.
Preferably, the preset algorithm is:
y = 1 / (1 + e^(−ω·x))
where x is the input vector, y is the video-advertisement matching value, and ω is the neuron weight vector of the layer; ω and x have the same dimension.
The method and device for matching advertisements to videos provided by the embodiments of the present invention can obtain the distributed features of the video description of the video to be matched and of the advertisement descriptions in the advertisement library, input the obtained distributed feature vectors into a pre-built video-advertisement matching degree prediction model, and output the matching value between the video to be matched and the candidate advertisement; if the matching value is judged to be greater than a preset matching degree threshold, the video matches the candidate advertisement, and the candidate advertisement is then delivered for the video. In this way, the matching degree between the video and the candidate advertisement is judged at the level of the distributed features of the video description and the advertisement description, avoiding the defect of word-overlap matching that relevant advertisements cannot be matched to a video, so that advertisements matched to a video can be delivered more effectively, thereby improving the recall rate of video-advertisement matching.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a video-advertisement matching method provided by an embodiment of the present invention;
Fig. 2 is a refined schematic flowchart of step S105 of the embodiment shown in Fig. 1;
Fig. 3 is a schematic flowchart of training the correspondence between segmented words and distributed feature vectors, provided in the embodiment shown in Fig. 1;
Fig. 4 is a schematic flowchart of training the multilayer convolutional neural network, provided in the embodiment shown in Fig. 1;
Fig. 5 is a refined schematic flowchart of step S404 of the embodiment shown in Fig. 4;
Fig. 6 is a schematic diagram of the training principle of the correspondence between segmented words and distributed feature vectors in the embodiment shown in Fig. 3;
Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a video-advertisement matching apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention provide a video-advertisement matching method and device. The method comprises: obtaining the video description of the video awaiting advertisement matching, and obtaining the advertisement description of a candidate advertisement from an advertisement library; performing word segmentation on the video description and the advertisement description according to preset rules, to obtain video description segmented words and advertisement description segmented words; inputting the video description segmented words and the advertisement description segmented words into a pre-established video-advertisement matching degree prediction model; the model obtaining, according to the correspondence between segmented words and distributed feature vectors, the distributed feature vectors of the video description segmented words and the advertisement description segmented words, the correspondence between segmented words and distributed feature vectors being obtained by training on video descriptions, advertisement descriptions and an external corpus; the model inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model, to obtain the matching value between the video awaiting advertisement matching and the candidate advertisement, the multilayer convolutional neural network being obtained by training according to the lift of the advertisement click-through rate; and, if the matching value is greater than a preset matching degree threshold, determining that the video awaiting advertisement matching and the candidate advertisement match.
It should also be noted that the learning rule adopted in the process of establishing the video-advertisement matching degree prediction model is the lift of the advertisement click-through rate (CTR). CTR lift is adopted as the learning rule because at present there is no manually labelled matching data between videos and advertisements, so labelled training samples are lacking. In the present invention, the CTR lift is taken as an approximation of the matching degree between a video and an advertisement; that is, the larger the lift value, the higher the matching degree between the video and the advertisement, and conversely, the smaller the lift value, the lower the matching degree.
The click-through-rate lift is defined as follows:

lift = \frac{ctr(ad, video)}{\max(ctr_{ad}, ctr_{video})}

Wherein, ctr(ad, video) is the click-through rate of the targeted advertisement on the target video, ctr_ad is the average click-through rate of the targeted advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
To explain the definition of the click-through-rate lift: the maximum of ctr_ad and ctr_video is taken in the denominator in order to avoid misjudging a lift in click-through rate caused by malicious clicks, so as to keep the result accurate and stable. For example, suppose malicious clicks by a user raise ctr(ad, video) from 3×10^3 to 3×10^6, while ctr_ad is 3.4×10^3 and ctr_video also rises to 3.2×10^6. The lift computed by the definition above is then 3×10^6 / 3.2×10^6 = 0.9375. As can be seen, the click-through-rate lift is almost 1, showing that the click-through rate has not risen significantly; that is, the advertisement does not match the video awaiting advertisement matching, and the influence of the manual manipulation is thereby avoided.
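The lift computation and the malicious-click example above can be sketched as follows; the function name is hypothetical, and the figures are those of the illustration:

```python
def ctr_lift(ctr_ad_video: float, ctr_ad: float, ctr_video: float) -> float:
    """Click-through-rate lift: the ad's CTR on the target video divided
    by the larger of the two average CTRs, which damps malicious clicks
    that inflate numerator and denominator together."""
    return ctr_ad_video / max(ctr_ad, ctr_video)

# Figures from the malicious-click example: both ctr(ad, video) and
# ctr_video are inflated to the 10^6 range, so the lift stays near 1.
lift = ctr_lift(3e6, 3.4e3, 3.2e6)
print(round(lift, 4))  # 0.9375
```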
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Below, first the method for the video matching advertisement that the embodiment of the present invention provides is described in detail.
The method of the video matching advertisement of the embodiment of the present invention, establish a video ads matching degree forecast model in advance, this model comprises two parts: 1, the corresponding relation of participle and distributed nature vector; 2, multilayer convolutional neural networks.Wherein, the corresponding relation of described participle and distributed nature vector, obtains by carrying out training to video presentation, advertisement description and outside language material; Described multilayer convolutional neural networks is that the lifting degree of foundation ad click rate carries out training acquisition.
The video-advertisement matching process and the process of establishing the video-advertisement matching degree prediction model are described in detail in turn below.
First, the video-advertisement matching process is described in detail.
See Fig. 1, which is a schematic flowchart of a video-advertisement matching method provided by an embodiment of the present invention.
Step S101: obtain the video description of the video awaiting advertisement matching, and obtain the advertisement description of a candidate advertisement from the advertisement library.
It is easy to understand that the editors of a video website can add a concise video description according to the content of a video programme; the video description usually covers information such as the programme title, programme type, host, director and leading performers. Similarly, the advertisement delivery operator gives a brief advertisement description of each advertisement provided by an advertiser; in general, the advertisement description covers information such as the name of the product the advertisement presents, the product type and the product spokesperson. Both the video description and the advertisement description are information that is easy to obtain.
Step S102: perform word segmentation on the video description and the advertisement description according to preset rules, to obtain video description segmented words and advertisement description segmented words.
It should be noted that the "word segmentation" mentioned here can be realized by common open-source software, such as the "jieba" Chinese word segmenter and the Language Technology Platform Cloud (LTP) of Harbin Institute of Technology; the main segmentation algorithms include CRF-based algorithms. Of course, the present invention does not limit the concrete algorithm, and any feasible algorithm is applicable to the present invention.
The preset rules mentioned here are as follows. First, the text description is split into words, and the stop words among the words obtained are filtered out; the stop words form a preset word set, mainly comprising words without concrete meaning and some very common words, such as "I", "this" and the like. Then the nouns, verbs and adjectives among the remaining words are selected as important words and extracted; concretely, the part of speech of each remaining word can be obtained automatically by the part-of-speech tagging function of the segmentation software. Taking "The Legend of Wu Meiniang", starring Fan Bingbing, as an example, the editors of the video website added a video description such as ""The Legend of Wu Meiniang", starring Fan Bingbing, ...". Segmenting this video description according to the preset rules yields: (Wu Meiniang, legend, Fan Bingbing, starring).
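The filtering part of the preset rules can be sketched as follows; the (word, tag) pairs stand in for the output of a part-of-speech-tagging segmenter such as jieba, and the stop-word set and tag names are illustrative assumptions, not the real ones:

```python
# A minimal sketch of the preset filtering rules, assuming a segmenter
# has already produced (word, pos) pairs. Stop words and tags are
# hypothetical placeholders.
STOP_WORDS = {"I", "this", "the", "of"}
IMPORTANT_POS = {"n", "v", "a"}  # noun, verb, adjective

def extract_important_words(tagged_tokens):
    """Drop stop words, then keep only nouns, verbs and adjectives."""
    return [w for w, pos in tagged_tokens
            if w not in STOP_WORDS and pos in IMPORTANT_POS]

tokens = [("Wu Meiniang", "n"), ("legend", "n"), ("this", "r"),
          ("Fan Bingbing", "n"), ("starring", "v")]
print(extract_important_words(tokens))
# ['Wu Meiniang', 'legend', 'Fan Bingbing', 'starring']
```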
Step S103: input the video description segmented words and the advertisement description segmented words into the pre-established video-advertisement matching degree prediction model.
It should be noted that the video-advertisement matching degree prediction model is established in advance. When the model is applied, its inputs are the video description segmented words and the advertisement description segmented words, and its output is the matching value between the video awaiting advertisement matching and the candidate advertisement.
Step S104: the video-advertisement matching degree prediction model obtains, according to the correspondence between segmented words and distributed feature vectors, the distributed feature vectors of the video description segmented words and the advertisement description segmented words; the correspondence between segmented words and distributed feature vectors is obtained by training on video descriptions, advertisement descriptions and an external corpus.
Step S105: the video-advertisement matching degree prediction model inputs the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model, to obtain the matching value between the video awaiting advertisement matching and the candidate advertisement; the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate.
Step S106: if the matching value is greater than the preset matching degree threshold, the video awaiting advertisement matching and the candidate advertisement match.
The step in step S105 of inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model, to obtain the matching value between the video awaiting advertisement matching and the candidate advertisement, is shown in Fig. 2 and Fig. 7; Fig. 2 is a refined schematic flowchart of step S105 of the embodiment shown in Fig. 1, and Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention. The step comprises:
Step S201: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the input distributed feature vectors of the video description segmented words and the advertisement description segmented words, obtains the distributed feature one-dimensional expansion vectors of the video description segmented words and the advertisement description segmented words, and outputs them to the first max-pooling layer.
It should be noted that the purpose of this step is to expand the input pairs of distributed feature vectors of the video description and the advertisement description into distributed feature one-dimensional expansion vectors; these are the input of the multilayer convolutional neural network model.
To give an example: suppose the video description segmented words are (Fan Bingbing, Wu Meiniang) and the advertisement description segmented words are (Fan Bingbing, game). Since the expansion mode adopted by the convolutional neural network is not a fully connected one, the distributed feature combinations of the video description segmented words and the advertisement description segmented words are: (Fan Bingbing, Fan Bingbing), (Fan Bingbing, Wu Meiniang), (Fan Bingbing, game), (Wu Meiniang, game). Each of these four combinations corresponds to one distributed feature one-dimensional expansion vector, and all the distributed feature one-dimensional expansion vectors form one convolutional network layer.
Step S202: the first max-pooling layer compresses the input distributed feature one-dimensional expansion vectors by a down-sampling algorithm, obtains the first max-pooling layer two-dimensional vector, and outputs it to the first two-dimensional convolutional neural network layer.
To explain briefly: the first max-pooling layer compresses the distributed feature one-dimensional expansion vectors obtained in step S201 and removes the influence of noise, obtaining the first max-pooling layer two-dimensional vector. Preferably, the first max-pooling layer realizes the data compression by down-sampling.
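Max-pooling down-sampling as described here can be sketched over a one-dimensional slice as follows; the window size and the input values are illustrative assumptions:

```python
def max_pool_1d(values, window=2):
    """Down-sample by keeping the maximum of each non-overlapping
    window; smaller responses (often noise) are discarded."""
    return [max(values[i:i + window])
            for i in range(0, len(values), window)]

# An illustrative feature vector compressed by a factor of 2.
print(max_pool_1d([0.1, 0.9, 0.4, 0.3, 0.7, 0.2]))  # [0.9, 0.4, 0.7]
```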
Step S203: the first two-dimensional convolutional neural network layer applies a two-dimensional convolution operation to the input first max-pooling layer two-dimensional vector, obtaining multiple two-dimensional convolutional neural network layer two-dimensional vectors with the same dimensions as the input vector; it calculates each element of these two-dimensional vectors with the activation function, obtains the same number of calculated two-dimensional vectors, and outputs them to the next, connected, intermediate max-pooling layer.
In step S203, a two-dimensional convolution operation is applied to the first max-pooling layer two-dimensional vector of step S202, obtaining multiple calculated two-dimensional vectors. Step S203 combines the vector elements of the first max-pooling layer two-dimensional vector in pairs, adding features used for predicting the matching degree between the video description and the advertisement description.
Step S204: the next intermediate max-pooling layer compresses the multiple input calculated two-dimensional vectors by a down-sampling algorithm, obtains an intermediate max-pooling layer two-dimensional vector, and outputs it to the next, connected, intermediate two-dimensional neural network layer.
Step S205: the next intermediate two-dimensional neural network layer applies a two-dimensional convolution operation to the input intermediate max-pooling layer two-dimensional vector, obtaining multiple intermediate two-dimensional convolutional neural network layer two-dimensional vectors with the same dimensions as the input vector; it calculates each element of these vectors with the activation function, obtains the same number of calculated two-dimensional vectors, and judges whether the calculated two-dimensional vectors obtained are 1×1 two-dimensional vectors: if so, step S206 is performed; otherwise they are output to the next, connected, intermediate max-pooling layer, and the process returns to step S204.
It should be noted that steps S204 and S205 form a repeated process; through continuous learning and training, multiple 1×1 two-dimensional vectors are finally obtained from the calculated two-dimensional vectors.
Step S206: generate a one-dimensional target vector from the elements of all the vectors obtained.
From the elements of the multiple 1×1 two-dimensional vectors finally obtained, a one-dimensional target vector is generated according to a preset target vector generation rule.
Step S207: apply the preset algorithm to the obtained target vector, to obtain the matching value between the video description and the advertisement description.
The preset algorithm mentioned here is:

y = \frac{1}{1 + e^{-\omega \cdot x}}

Wherein, x is the input vector, y is the video-advertisement matching value, and ω is the neuron weight vector of the neurons in the layer; ω and x have the same dimension.
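The preset algorithm is a logistic (sigmoid) function of the dot product ω·x; it can be sketched as follows, with illustrative weights and an illustrative target vector:

```python
import math

def matching_value(w, x):
    """Squash the dot product of the weight vector w and the target
    vector x into (0, 1) with the logistic function; w and x must
    have the same dimension."""
    assert len(w) == len(x)
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-dot))

# Illustrative weights and target vector; a positive dot product
# yields a matching value above 0.5.
y = matching_value([0.4, -0.2, 0.6], [1.0, 0.5, 0.8])
print(round(y, 3))  # 0.686
```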
Next, the process of establishing the video-advertisement matching degree prediction model is described in detail.
The process of establishing the video-advertisement matching degree prediction model comprises two training flows: 1. the training flow establishing the correspondence between segmented words and distributed feature vectors; 2. the training flow establishing the multilayer convolutional neural network. They are described in turn below.
First, the training flow establishing the correspondence between segmented words and distributed feature vectors is described in detail.
See Fig. 3 and Fig. 6. Fig. 3 is a schematic flowchart of training the correspondence between segmented words and distributed feature vectors, provided in the embodiment shown in Fig. 1; Fig. 6 is a schematic diagram of the training principle of the correspondence between segmented words and distributed feature vectors in the embodiment shown in Fig. 3.
" distributed nature " mentioned here refers to text datas such as video presentation, advertisement description and outside language materials, the vector characteristics that obtains after carrying out training according to certain learning rules, this vector characteristics can go to express these text datas from deeper aspect.For example, if obtained one to describe participle " Wu Meiniang ", this word is by after training, and the distributed nature vector obtaining distributed nature is assumed to be X=(0.5,03,0.3,0.6,0.8,0.9), can be understood as, the content of the text data expressed by vectorial X in vector space is " Wu Meiniang ".Certainly, the length of this distributed vectorial X can regulate, and these needs are determined according to actual conditions.
It should be noted that the correspondence between segmented words and distributed feature vectors is obtained by unsupervised training on the video descriptions, the advertisement descriptions and the external corpus.
The unsupervised training process comprises the following steps:
Step S301: obtain a text description from the video descriptions, the advertisement descriptions or the external corpus.
The external corpus is introduced mainly to avoid the problem of video descriptions and advertisement descriptions having no word overlap; since the corpus obtained by segmenting the video descriptions and advertisement descriptions is rather small, adding the external corpus makes the distributed features of the video descriptions and advertisement descriptions more general.
The "external corpus" mentioned here refers to text data crawled from related external websites such as Douban and Baidu Baike.
Step S302: perform word segmentation on the text description, to obtain N description segmented words.
According to the method of step S102, word segmentation is performed on the video descriptions, the advertisement descriptions and the external corpus, obtaining the description segmented words corresponding to these text data.
Step S303: map the N description segmented words to N one-dimensional continuous feature vectors of length m.
In order to obtain the distributed feature of each description segmented word, the distributed features need to be trained continuously according to the learning rules.
For example, as shown in Fig. 6, suppose a description corpus is given and 4 of its description segmented words have been obtained, labelled Word_n, Word_n+1, Word_n+2 and Word_n+3 respectively. According to a preset correspondence, non-mathematical quantities such as text data are transformed into mathematical quantities in a vector space, with vectors as the data-processing carrier; this yields the one-dimensional continuous feature vectors X_n, X_n+1, X_n+2 and X_n+3.
Step S304: apply a weighted averaging operation to the first N-1 one-dimensional continuous feature vectors of length m, to obtain a predicted vector.
Continuing the example of step S303, a mathematical operation, such as the weighted averaging of step S304, is applied to the one-dimensional continuous feature vectors X_n, X_n+1, X_n+2 and X_n+3; in this way a predicted vector, denoted X'_n+1, is obtained.
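The weighted averaging of step S304 can be sketched as follows; the context vectors are illustrative, and with equal weights it reduces to a plain element-wise average, as in CBOW-style training:

```python
def weighted_average(vectors, weights=None):
    """Combine context vectors of equal length m into one predicted
    vector by a (weighted) element-wise average; equal weights are
    assumed when none are given."""
    if weights is None:
        weights = [1.0 / len(vectors)] * len(vectors)
    m = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors))
            for i in range(m)]

# Three illustrative context vectors of length m = 2.
x_n, x_n1, x_n2 = [0.3, 0.6], [0.6, 0.0], [0.0, 0.3]
print([round(v, 6) for v in weighted_average([x_n, x_n1, x_n2])])
# [0.3, 0.3]
```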
Step S305: when the predicted vector is used to predict the Nth description segmented word, judge whether the prediction error rate between the predicted segmented word and the actual Nth description segmented word is lower than a preset prediction threshold: if so, perform step S306; if not, perform step S307.
Step S306: the distributed feature training ends; the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description segmented words.
Step S307: adjust the N one-dimensional continuous feature vectors using the back-propagation algorithm, to obtain N new one-dimensional continuous feature vectors of length m, and continue with steps S304 and S305.
It should be noted that the "back-propagation algorithm" mentioned here is a conventional algorithm for solving neural networks. In the concrete application of the algorithm, the weights of each layer need to be corrected step by step, from back to front, according to the degree of deviation between the predicted output and the real output, until the deviation between the output for the corrected input data and the real output is less than a predetermined threshold. In the embodiment of the present invention, the input data are the N one-dimensional continuous feature vectors and the output is the probability distribution over the N words; when the deviation of the probability distribution over the N words falls below the predetermined threshold, the N new one-dimensional continuous feature vectors are obtained.
Step S305 is explained in detail as follows. The predicted vector X'_n+1 obtained in step S304 is input into a classifier realized with the Softmax algorithm, which predicts the (n+1)th description segmented word of this corpus. The errors between the predicted description segmented words and the actual description segmented words are counted, yielding the description segmented word prediction error rate, and it is judged whether this error rate is lower than the preset prediction threshold. If so, the prediction of the (n+1)th description segmented word from this predicted vector is accurate, or stable; otherwise it is inaccurate, or unstable, and the 4 one-dimensional continuous feature vectors X_n, X_n+1, X_n+2, X_n+3 preceding the (n+1)th description segmented word need to be adjusted again. Some mathematical algorithm can be adopted to adjust the one-dimensional continuous feature vectors, for example the back-propagation algorithm; of course, another feasible mathematical algorithm can also be selected according to the actual prediction effect. This yields 4 new one-dimensional continuous feature vectors, denoted X_n', X_n+1', X_n+2', X_n+3'. Steps S304 and S305 are then continued until the prediction error rate between the predicted segmented words and the actual N description segmented words is lower than the preset prediction threshold, at which point the distributed feature training process ends. The 4 final one-dimensional continuous feature vectors obtained at that point are the distributed feature vectors of these 4 description segmented words, i.e. the mathematical quantities actually input into the video-advertisement matching degree prediction model.
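The Softmax classifier of step S305 turns a score per vocabulary word into a probability distribution and picks the most probable word; a minimal sketch, in which the tiny vocabulary, its 2-dimensional word vectors and the dot-product scoring are illustrative assumptions:

```python
import math

def softmax(scores):
    """Exponentiate and normalize scores into a probability
    distribution; subtracting the max keeps exp() numerically stable."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_word(predicted_vec, vocab):
    """Score each vocabulary vector by its dot product with the
    predicted vector and return the most probable word."""
    scores = [sum(a * b for a, b in zip(predicted_vec, vec))
              for vec in vocab.values()]
    probs = softmax(scores)
    words = list(vocab)
    return words[probs.index(max(probs))]

# Illustrative 2-dimensional word vectors, not trained ones.
vocab = {"Wu Meiniang": [0.9, 0.1], "game": [0.1, 0.9]}
print(predict_word([0.8, 0.2], vocab))  # Wu Meiniang
```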
Second, the training flow establishing the multilayer convolutional neural network is described in detail.
See Fig. 4 and Fig. 7. Fig. 4 is a schematic flowchart of training the multilayer convolutional neural network, provided in the embodiment shown in Fig. 1; Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention.
The step of training the multilayer convolutional neural network according to the lift of the advertisement click-through rate is introduced completely below; it comprises:
Step S401: obtain, from the training samples of the multilayer convolutional neural network, the sample video description and the sample advertisement description of one sample video-advertisement pair, and the advertisement click-through-rate lift L of that sample video-advertisement pair.
Step S402: perform word segmentation on the sample video description and the sample advertisement description according to the preset rules, to obtain sample video description segmented words and sample advertisement description segmented words.
Step S403: perform distributed feature training on the sample video description segmented words and the sample advertisement description segmented words, to obtain the sample distributed feature vectors of the sample video description segmented words and of the sample advertisement description segmented words respectively; these sample distributed feature vectors are the input of the multilayer convolutional neural network.
Step S404: the multilayer convolutional neural network trains on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words, to obtain the sample matching value L' between the sample video description and the sample advertisement description.
It should be noted that the calculated L' is in fact the advertisement click-through-rate lift obtained by the multilayer convolutional neural network model; in the embodiment of the present invention, the advertisement click-through-rate lift output by the model is defined as the video-advertisement matching value.
Step S405: judge whether the error between L' and L is lower than a preset sample training error threshold: if so, perform step S406; if not, perform step S407.
Step S406: the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each neural network layer are determined.
Step S407: adjust the neuron weights ω in each neural network layer using the back-propagation algorithm; then continue with steps S401 to S405.
It should be noted that, in the process of establishing the video-advertisement matching degree prediction model, the back-propagation algorithm progressively corrects the connection weights in the multilayer convolutional neural network from back to front, so that the training of the video-advertisement matching degree prediction model is accurate and stable.
Further, the above step S404 is expanded as follows; it comprises:
Step S501: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words, obtains the sample distributed feature one-dimensional expansion vectors of these segmented words, and outputs them to the first max-pooling layer.
Step S502: the first max-pooling layer compresses the input sample distributed feature one-dimensional expansion vectors by a down-sampling algorithm, obtains the first max-pooling layer sample two-dimensional vector, and outputs it to the first two-dimensional convolutional neural network layer.
Step S503: the first two-dimensional convolutional neural network layer applies a two-dimensional convolution operation to the input first max-pooling layer sample two-dimensional vector, obtaining multiple two-dimensional convolutional neural network layer sample two-dimensional vectors with the same dimensions as the input vector; it calculates each element of these sample two-dimensional vectors with the activation function, obtains the same number of calculated sample two-dimensional vectors, and outputs them to the next, connected, intermediate max-pooling layer.
It should be noted that the activation function is the ReLU function:

y_j = \max\left(0, \sum_{i=1}^{n} \omega_{ij} x_i\right)

Wherein, x_i is an input of the two-dimensional convolutional neural network layer, y_j is an output of that layer, and ω_ij is an element of the neuron weight vector of the neural network layer in which the activation function resides, namely the weight connecting input i to output j.
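The ReLU activation above can be sketched as follows for a single output y_j; the weights and inputs are illustrative:

```python
def relu_neuron(weights, inputs):
    """ReLU activation: the weighted sum of the inputs, clipped at
    zero, so negative responses are suppressed."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, s)

# Illustrative weights omega_ij connecting inputs x_i to output y_j.
print(relu_neuron([0.5, -1.0], [2.0, 1.0]))  # 0.0  (0.5*2 - 1 = 0)
print(relu_neuron([0.5, 1.0], [2.0, 1.0]))   # 2.0
```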
Step S504: the next intermediate max-pooling layer compresses the multiple input calculated sample two-dimensional vectors by a down-sampling algorithm, obtains an intermediate max-pooling layer sample two-dimensional vector, and outputs it to the next, connected, intermediate two-dimensional neural network layer.
Step S505: the next intermediate two-dimensional neural network layer applies a two-dimensional convolution operation to the input intermediate max-pooling layer sample two-dimensional vector, obtaining multiple intermediate two-dimensional convolutional neural network layer sample two-dimensional vectors with the same dimensions as the input vector; it calculates each element of these vectors with the activation function, obtains the same number of calculated sample two-dimensional vectors, and judges whether the calculated sample two-dimensional vectors obtained are 1×1 two-dimensional vectors: if so, step S506 is performed; otherwise they are output to the next, connected, intermediate max-pooling layer, and the process returns to step S504.
Step S506: generate a one-dimensional sample target vector from the elements of all the vectors obtained.
Step S507: apply the preset algorithm to the obtained sample target vector, to obtain the sample matching value L' between the sample video description and the sample advertisement description.
Next, the video-advertisement matching apparatus provided by the embodiments of the present invention is described in detail.
Fig. 8 is a schematic structural diagram of a video-advertisement matching apparatus provided by an embodiment of the present invention, corresponding to the video-advertisement matching method of Fig. 1. The apparatus comprises: a video-advertisement description obtaining module 801, a video-advertisement description word segmentation module 802, a video-advertisement matching degree prediction model input module 803, a video-advertisement distributed feature obtaining module 804, a video-advertisement matching degree prediction model output module 805 and a video-advertisement matching judgment module 806.
The video-advertisement description obtaining module 801 is used to obtain the video description of the video awaiting advertisement matching, and to obtain the advertisement description of a candidate advertisement from the advertisement library.
The video-advertisement description word segmentation module 802 is used to perform word segmentation on the video description and the advertisement description according to the preset rules, to obtain video description segmented words and advertisement description segmented words.
The video-advertisement matching degree prediction model input module 803 is used to input the video description segmented words and the advertisement description segmented words into the pre-established video-advertisement matching degree prediction model.
The video-advertisement distributed feature obtaining module 804 is used to obtain, in the video-advertisement matching degree prediction model and according to the correspondence between segmented words and distributed feature vectors, the distributed feature vectors of the video description segmented words and the advertisement description segmented words; this correspondence is obtained by training on video descriptions, advertisement descriptions and an external corpus.
The video-advertisement matching degree prediction model output module 805 is used to input, in the video-advertisement matching degree prediction model, the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model, to obtain the matching value between the video awaiting advertisement matching and the candidate advertisement; the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate.
The video-advertisement matching judgment module 806 is used to determine that the video awaiting advertisement matching and the candidate advertisement match if the matching value is greater than the preset matching degree threshold.
It should be noted that the correspondence between segmented words and distributed feature vectors is obtained by an unsupervised training module, which trains on video descriptions, advertisement descriptions and an external corpus using an unsupervised training method.
The unsupervised training module comprises:
a text description obtaining submodule, configured to obtain a passage of text from video descriptions, advertisement descriptions or an external corpus;
a text description word segmentation submodule, configured to perform word segmentation on the text description, obtaining N description segmented words;
a feature vector mapping submodule, configured to map the N description segmented words to N one-dimensional continuous feature vectors of length m;
a prediction vector obtaining submodule, configured to compute a weighted average of the first N-1 one-dimensional continuous feature vectors of length m, obtaining a prediction vector;
a segmented word prediction error rate judging submodule, configured to judge, when the prediction vector is used to predict the N-th description segmented word, whether the prediction error rate between the predicted segmented word and the actual N-th description segmented word is lower than a preset prediction threshold:
if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description segmented words; if not, a back-propagation algorithm is used to adjust the N one-dimensional continuous feature vectors to obtain N new one-dimensional continuous feature vectors of length m, and the prediction vector obtaining submodule and the segmented word prediction error rate judging submodule are triggered again in sequence.
The video-advertisement matching degree prediction model output module comprises:
a distributed feature vector input submodule, configured to input the distributed feature vectors of the video description segmented words and the advertisement description segmented words into a one-dimensional convolutional neural network layer of the multilayer convolutional neural network in the model;
a one-dimensional convolutional neural network layer processing submodule, configured to perform a one-dimensional convolution operation on the distributed feature vectors of the video description segmented words and advertisement description segmented words input into the one-dimensional convolutional neural network layer, obtain the distributed feature one-dimensional expansion vectors of the video description segmented words and the advertisement description segmented words, and output them to a first max-pooling layer;
a first max-pooling layer processing submodule, configured to compress the distributed feature one-dimensional expansion vectors input into the first max-pooling layer by a down-sampling algorithm, obtain a first max-pooling layer two-dimensional vector, and output it to a first two-dimensional convolutional neural network layer;
a first two-dimensional convolutional neural network layer processing submodule, configured to perform a two-dimensional convolution operation on the first max-pooling layer two-dimensional vector input into the first two-dimensional convolutional neural network layer, obtaining multiple two-dimensional convolutional neural network layer two-dimensional vectors of the same dimension as the input vector; apply the activation function to each element of those vectors, obtaining an equal number of calculated first two-dimensional vectors; and output them to the next intermediate max-pooling layer connected to it;
a next intermediate max-pooling layer processing submodule, configured to compress the multiple calculated two-dimensional vectors input into the next intermediate max-pooling layer by a down-sampling algorithm, obtain intermediate max-pooling layer two-dimensional vectors, and output them to the next intermediate two-dimensional neural network layer connected to it;
a next intermediate two-dimensional neural network layer processing submodule, configured to perform a two-dimensional convolution operation on the intermediate max-pooling layer two-dimensional vectors input into the next intermediate two-dimensional neural network layer, obtain multiple intermediate two-dimensional convolutional neural network layer two-dimensional vectors of the same dimension as the input vectors, apply the activation function to each element of those vectors to obtain an equal number of calculated two-dimensional vectors, and judge whether the multiple calculated two-dimensional vectors obtained are 1 × 1 two-dimensional vectors; if so, the target vector generating submodule is triggered; otherwise, the vectors are output to the next intermediate max-pooling layer connected to it, and the next intermediate max-pooling layer processing submodule and the next intermediate two-dimensional neural network layer processing submodule are triggered again in sequence;
a target vector generating submodule, configured to generate a one-dimensional target vector from the elements of all the vectors obtained;
a matching value obtaining submodule, configured to apply a preset algorithm to the target vector obtained, yielding the matching value between the video description and the advertisement description.
The preset algorithm mentioned here is:
y = 1 / (1 + e^(-ω·x))
wherein x is the input vector, y is the video-advertisement matching value, ω is the neuron weight vector in each neural network layer, and ω and x have the same dimension.
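The preset algorithm above is a logistic (sigmoid) output unit over the dot product of the weight and input vectors. A minimal sketch (function name hypothetical):

```python
import math

def matching_value(x, w):
    """Sigmoid output unit: y = 1 / (1 + exp(-w · x)); w and x have equal length."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-dot))
```

A zero dot product yields y = 0.5, and y always lies strictly between 0 and 1, which is what makes it usable as a matching value compared against a threshold.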
It should be noted that the multilayer convolutional neural network is trained by a neural network training module according to the click-through rate lift of advertisements.
The neural network training module comprises:
a sample input submodule, configured to obtain, from the training samples of the multilayer convolutional neural network, the sample video description, the sample advertisement description and the advertisement click-through rate lift L of a sample video-advertisement pair;
a sample video/advertisement description word segmentation submodule, configured to perform word segmentation on the sample video description and the sample advertisement description according to the preset rules, obtaining sample video description segmented words and sample advertisement description segmented words;
a sample distributed feature vector obtaining submodule, configured to perform distributed feature training on the sample video description segmented words and the sample advertisement description segmented words, obtaining the sample distributed feature vectors of the sample video description segmented words and of the sample advertisement description segmented words respectively, wherein the sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words are the input of the multilayer convolutional neural network;
a sample video-advertisement matching degree prediction submodule, configured to train the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words, obtaining a sample matching value L' between the sample video description and the sample advertisement description;
a sample matching degree judging submodule, configured to judge whether the error between L' and L is lower than a preset sample training error threshold:
if so, the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each neural network layer are determined; if not, a back-propagation algorithm is used to adjust the neuron weights ω in each neural network layer, and the submodules from the sample input submodule to the sample matching degree judging submodule are triggered again in sequence.
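The training loop above can be sketched in a heavily simplified form: the convolutional network is replaced by a single sigmoid unit over a fixed feature vector, and a plain gradient step stands in for back-propagation through the layers. Everything here (names, learning rate, the gradient rule) is an illustrative assumption, not the patent's procedure.

```python
import numpy as np

def train_match_model(samples, lr=0.5, tol=0.05, max_iter=5000):
    """Toy version of the described loop: predict L' for each (features, L) pair
    and keep adjusting weights until |L' - L| falls below the error threshold."""
    w = np.zeros(len(samples[0][0]))
    for _ in range(max_iter):
        worst = 0.0
        for x, L in samples:
            x = np.asarray(x, dtype=float)
            Lp = 1.0 / (1.0 + np.exp(-w @ x))  # sample matching value L'
            worst = max(worst, abs(Lp - L))
            w -= lr * (Lp - L) * x             # gradient step (stand-in for backprop)
        if worst < tol:                        # preset sample training error threshold
            break
    return w
```

With two separable toy samples labeled by their CTR lift, the loop drives every |L' - L| under the threshold within a few hundred passes.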
Further, the sample video-advertisement matching degree prediction submodule comprises:
a sample distributed feature one-dimensional expansion unit, configured for the one-dimensional convolutional neural network layer of the multilayer convolutional neural network to perform a one-dimensional convolution operation on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words, obtain the sample distributed feature one-dimensional expansion vectors of the sample video description segmented words and the sample advertisement description segmented words, and output them to a sample first max-pooling layer;
a sample first max-pooling layer processing unit, configured to compress the sample distributed feature one-dimensional expansion vectors input into the sample first max-pooling layer by a down-sampling algorithm, obtain a first pooling layer sample two-dimensional vector, and output it to a sample first two-dimensional convolutional neural network layer;
a sample first two-dimensional convolutional neural network layer processing unit, configured to perform a two-dimensional convolution operation on the first pooling layer sample two-dimensional vector input into the sample first two-dimensional convolutional neural network layer, obtaining multiple two-dimensional convolutional neural network layer sample two-dimensional vectors of the same dimension as the input vector; apply the activation function to each element of those vectors, obtaining an equal number of calculated sample two-dimensional vectors; and output them to the sample next intermediate max-pooling layer connected to it.
It should be noted that the activation function is the ReLU function:
y_j = max(0, Σ_{i=1..n} ω_ij · x_i)
wherein x_i is the input of the two-dimensional convolutional neural network layer, y_j is the output of the two-dimensional convolutional neural network layer, and ω_ij, an element of the neuron weight vector of the neural network layer in which the activation function resides, is the weight connecting input i and output j.
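As a small illustration, a single ReLU unit computing the formula above can be written as follows (function name hypothetical):

```python
def relu_neuron(x, w):
    """ReLU unit: y_j = max(0, sum_i w_ij * x_i) for one output j."""
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)))
```

Negative weighted sums are clamped to zero; positive ones pass through unchanged.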
a sample next intermediate max-pooling layer processing unit, configured to compress the multiple calculated sample two-dimensional vectors input into the sample next intermediate max-pooling layer by a down-sampling algorithm, obtain sample intermediate max-pooling layer two-dimensional vectors, and output them to the sample next intermediate two-dimensional neural network layer connected to it;
a sample next intermediate two-dimensional neural network layer processing unit, configured to perform a two-dimensional convolution operation on the sample intermediate max-pooling layer two-dimensional vectors input into the sample next intermediate two-dimensional neural network layer, obtain multiple intermediate two-dimensional convolutional neural network layer sample two-dimensional vectors of the same dimension as the input vectors, apply the activation function to each element of those vectors to obtain an equal number of calculated intermediate sample two-dimensional vectors, and judge whether the multiple calculated sample two-dimensional vectors obtained are 1 × 1 two-dimensional vectors; if so, the sample target vector generating unit is triggered; otherwise, the vectors are output to the sample next intermediate max-pooling layer connected to it, and the sample next intermediate max-pooling layer processing unit and the sample next intermediate two-dimensional neural network layer processing unit are triggered again in sequence;
a sample target vector generating unit, configured to generate a one-dimensional sample target vector from the elements of all the vectors obtained;
a sample matching value obtaining unit, configured to apply the preset algorithm to the sample target vector obtained, yielding the sample matching value L' between the sample video description and the sample advertisement description.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" and any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
Each embodiment in this specification is described in a progressive manner; identical or similar parts of the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the device embodiment is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the corresponding parts of the method embodiment.
The above are merely preferred embodiments of the present invention and are not intended to limit its protection scope. Any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention are all included in the protection scope of the present invention.

Claims (18)

1. A method for matching a video with an advertisement, characterized in that the method comprises:
obtaining the video description of a video to be matched with an advertisement, and obtaining the advertisement description of a candidate advertisement from an advertisement library;
performing word segmentation on the video description and the advertisement description according to preset rules, obtaining video description segmented words and advertisement description segmented words;
inputting the video description segmented words and the advertisement description segmented words into a pre-built video-advertisement matching degree prediction model;
the video-advertisement matching degree prediction model obtaining the distributed feature vectors of the video description segmented words and the advertisement description segmented words according to a correspondence between segmented words and distributed feature vectors, the correspondence between segmented words and distributed feature vectors being obtained by training on video descriptions, advertisement descriptions and an external corpus;
the video-advertisement matching degree prediction model inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into a multilayer convolutional neural network in the model, obtaining a matching value between the video to be matched with an advertisement and the candidate advertisement, the multilayer convolutional neural network being trained according to the click-through rate lift of advertisements;
if the matching value is greater than a preset matching degree threshold, determining that the video to be matched with an advertisement and the candidate advertisement match.
2. The method according to claim 1, characterized in that the step of inputting the distributed feature vectors of the video description segmented words and the advertisement description segmented words into the multilayer convolutional neural network in the model and obtaining the matching value between the video to be matched with an advertisement and the candidate advertisement comprises:
I1: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the input distributed feature vectors of the video description segmented words and the advertisement description segmented words to obtain the distributed feature one-dimensional expansion vectors of the video description segmented words and the advertisement description segmented words, and outputs them to a first max-pooling layer;
I2: the first max-pooling layer compresses the input distributed feature one-dimensional expansion vectors by a down-sampling algorithm to obtain a first max-pooling layer two-dimensional vector, and outputs it to a first two-dimensional convolutional neural network layer;
I3: the first two-dimensional convolutional neural network layer performs a two-dimensional convolution operation on the input first max-pooling layer two-dimensional vector to obtain multiple two-dimensional convolutional neural network layer two-dimensional vectors of the same dimension as the input vector, applies the activation function to each element of those vectors to obtain an equal number of calculated two-dimensional vectors, and outputs them to the next intermediate max-pooling layer connected to it;
I4: the next intermediate max-pooling layer compresses the input multiple calculated two-dimensional vectors by a down-sampling algorithm to obtain intermediate max-pooling layer two-dimensional vectors, and outputs them to the next intermediate two-dimensional neural network layer connected to it;
I5: the next intermediate two-dimensional neural network layer performs a two-dimensional convolution operation on the input intermediate max-pooling layer two-dimensional vectors to obtain multiple intermediate two-dimensional convolutional neural network layer two-dimensional vectors of the same dimension as the input vectors, applies the activation function to each element of those vectors to obtain an equal number of calculated two-dimensional vectors, and judges whether the multiple calculated two-dimensional vectors obtained are 1 × 1 two-dimensional vectors; if so, step I6 is performed; otherwise, they are output to the next intermediate max-pooling layer connected to it and step I4 is repeated;
I6: a one-dimensional target vector is generated from the elements of all the vectors obtained;
I7: a preset algorithm is applied to the target vector obtained, yielding the matching value between the video description and the advertisement description.
3. The method according to claim 1, characterized in that the correspondence between segmented words and distributed feature vectors is obtained by training on video descriptions, advertisement descriptions and an external corpus using an unsupervised training method.
4. The method according to claim 3, characterized in that the unsupervised training process comprises:
a: obtaining a passage of text from video descriptions, advertisement descriptions or an external corpus;
b: performing word segmentation on the text description, obtaining N description segmented words;
c: mapping the N description segmented words to N one-dimensional continuous feature vectors of length m;
d: computing a weighted average of the first N-1 one-dimensional continuous feature vectors of length m, obtaining a prediction vector;
e: when the prediction vector is used to predict the N-th description segmented word, judging whether the prediction error rate between the predicted segmented word and the actual N-th description segmented word is lower than a preset prediction threshold:
if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description segmented words; if not, a back-propagation algorithm is used to adjust the N one-dimensional continuous feature vectors to obtain N new one-dimensional continuous feature vectors of length m, and steps d and e are performed again.
5. The method according to claim 1, characterized in that the step of training the multilayer convolutional neural network according to the click-through rate lift of advertisements comprises:
f: obtaining, from the training samples of the multilayer convolutional neural network, the sample video description, the sample advertisement description and the advertisement click-through rate lift L of a sample video-advertisement pair;
g: performing word segmentation on the sample video description and the sample advertisement description according to the preset rules, obtaining sample video description segmented words and sample advertisement description segmented words;
h: performing distributed feature training on the sample video description segmented words and the sample advertisement description segmented words, obtaining the sample distributed feature vectors of the sample video description segmented words and of the sample advertisement description segmented words respectively, wherein the sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words are the input of the multilayer convolutional neural network;
i: training the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words, obtaining a sample matching value L' between the sample video description and the sample advertisement description;
j: judging whether the error between L' and L is lower than a preset sample training error threshold:
if so, the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each neural network layer are determined; if not, a back-propagation algorithm is used to adjust the neuron weights ω in each neural network layer, and steps f to j are performed again.
6. The method according to claim 5, characterized in that step i comprises:
i1: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the input sample distributed feature vectors of the sample video description segmented words and the sample advertisement description segmented words to obtain the sample distributed feature one-dimensional expansion vectors of the sample video description segmented words and the sample advertisement description segmented words, and outputs them to a first max-pooling layer;
i2: the first max-pooling layer compresses the input sample distributed feature one-dimensional expansion vectors by a down-sampling algorithm to obtain a first max-pooling layer sample two-dimensional vector, and outputs it to a first two-dimensional convolutional neural network layer;
i3: the first two-dimensional convolutional neural network layer performs a two-dimensional convolution operation on the input first max-pooling layer sample two-dimensional vector to obtain multiple two-dimensional convolutional neural network layer sample two-dimensional vectors of the same dimension as the input vector, applies the activation function to each element of those vectors to obtain an equal number of calculated sample two-dimensional vectors, and outputs them to the next intermediate max-pooling layer connected to it;
i4: the next intermediate max-pooling layer compresses the input multiple calculated sample two-dimensional vectors by a down-sampling algorithm to obtain intermediate max-pooling layer sample two-dimensional vectors, and outputs them to the next intermediate two-dimensional neural network layer connected to it;
i5: the next intermediate two-dimensional neural network layer performs a two-dimensional convolution operation on the input intermediate max-pooling layer sample two-dimensional vectors to obtain multiple intermediate two-dimensional convolutional neural network layer sample two-dimensional vectors of the same dimension as the input vectors, applies the activation function to each element of those vectors to obtain an equal number of calculated sample two-dimensional vectors, and judges whether the multiple calculated sample two-dimensional vectors obtained are 1 × 1 two-dimensional vectors; if so, step i6 is performed; otherwise, they are output to the next intermediate max-pooling layer connected to it and step i4 is repeated;
i6: a one-dimensional sample target vector is generated from the elements of all the vectors obtained;
i7: a preset algorithm is applied to the sample target vector obtained, yielding the sample matching value L' between the sample video description and the sample advertisement description.
7. The method according to claim 1, characterized in that the click-through rate lift of an advertisement is:
lift = ctr_(ad,video) / max(ctr_ad, ctr_video)
wherein ctr_(ad,video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
8. The method according to claim 2 or 6, characterized in that the activation function is the ReLU function:
y_j = max(0, Σ_{i=1..n} ω_ij · x_i)
wherein x_i is the input of the two-dimensional convolutional neural network layer, y_j is the output of the two-dimensional convolutional neural network layer, and ω_ij, an element of the neuron weight vector of the neural network layer in which the activation function resides, is the weight connecting input i and output j.
9. The method according to claim 2 or 6, characterized in that the preset algorithm is:
y = 1 / (1 + e^(-ω·x))
wherein x is the input vector, y is the video-advertisement matching value, ω is the neuron weight vector in each neural network layer, and ω and x have the same dimension.
10. A device for matching a video with an advertisement, characterized in that the device comprises:
a video/advertisement description obtaining module, configured to obtain the video description of a video to be matched with an advertisement, and to obtain the advertisement description of a candidate advertisement from an advertisement library;
a video/advertisement description word segmentation module, configured to perform word segmentation on the video description and the advertisement description according to preset rules, obtaining video description segmented words and advertisement description segmented words;
a video-advertisement matching degree prediction model input module, configured to input the video description segmented words and the advertisement description segmented words into a pre-built video-advertisement matching degree prediction model;
a video/advertisement distributed feature obtaining module, configured to obtain, within the video-advertisement matching degree prediction model, the distributed feature vectors of the video description segmented words and the advertisement description segmented words according to a correspondence between segmented words and distributed feature vectors, the correspondence between segmented words and distributed feature vectors being obtained by training on video descriptions, advertisement descriptions and an external corpus;
a video-advertisement matching degree prediction model output module, configured to input, within the video-advertisement matching degree prediction model, the distributed feature vectors of the video description segmented words and the advertisement description segmented words into a multilayer convolutional neural network in the model, obtaining a matching value between the video to be matched with an advertisement and the candidate advertisement, the multilayer convolutional neural network being trained according to the click-through rate lift of advertisements;
a video-advertisement matching judgment module, configured to determine that the video to be matched with an advertisement and the candidate advertisement match if the matching value is greater than a preset matching degree threshold.
11. devices according to claim 10, is characterized in that: described video ads matching degree forecast model output module, comprising:
Distributed nature vector input submodule, the distributed nature vector for described video presentation participle and described advertisement being described participle inputs to the one dimension convolutional neural networks layer in the multilayer convolutional neural networks in model;
One dimension convolutional neural networks layer process submodule, distributed nature vector for describing participle to the described video presentation participle and advertisement that input one dimension convolutional neural networks layer carries out one dimension convolution algorithm, obtain video and frequently retouch the distributed nature One-Dimensional Extended vector that participle is retouched in participle and advertisement frequently, export the first maximum pond layer to;
A first max-pooling layer processing submodule, configured to compress, via a down-sampling algorithm, the distributed-feature one-dimensional extended vector input to the first max-pooling layer, obtaining a first max-pooling layer two-dimensional vector, and to output it to the first two-dimensional convolutional neural network layer;
A first two-dimensional convolutional neural network layer processing submodule, configured to apply a two-dimensional convolution operation to the first max-pooling layer two-dimensional vector input to the first two-dimensional convolutional neural network layer, obtaining multiple two-dimensional vectors of the same dimension as the input vector; to apply an activation function to each element of those vectors, obtaining an equal number of computed two-dimensional vectors; and to output them to the next connected intermediate max-pooling layer;
A next intermediate max-pooling layer processing submodule, configured to compress, via a down-sampling algorithm, the computed two-dimensional vectors input to the next intermediate max-pooling layer, obtaining intermediate max-pooling layer two-dimensional vectors, and to output them to the next connected intermediate two-dimensional convolutional neural network layer;
A next intermediate two-dimensional convolutional neural network layer processing submodule, configured to apply a two-dimensional convolution operation to the intermediate max-pooling layer two-dimensional vectors input to the next intermediate two-dimensional convolutional neural network layer, obtaining multiple intermediate two-dimensional vectors of the same dimension as the input; to apply an activation function to each element of those vectors, obtaining an equal number of computed two-dimensional vectors; and to judge whether the computed two-dimensional vectors are 1 × 1 vectors: if so, the target vector generation submodule is triggered; otherwise, the vectors are output to the next connected intermediate max-pooling layer, and the next intermediate max-pooling layer processing submodule and the next intermediate two-dimensional convolutional neural network layer processing submodule are triggered in sequence;
A target vector generation submodule, configured to generate a one-dimensional target vector from the elements of all the vectors obtained;
A matching value obtaining submodule, configured to apply a preset algorithm to the obtained target vector, obtaining the matching value between the video description and the advertisement description.
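As a concrete illustration of the data flow these submodules describe, here is a minimal NumPy sketch that alternates convolution, ReLU activation, and max-pooling until a 1 × 1 output remains. The 8 × 8 feature-map size, the 3 × 3 kernel, and the 2 × 2 pooling window are hypothetical choices, not fixed by the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def max_pool(mat):
    """Down-sample by 2x2 max pooling (one possible down-sampling algorithm)."""
    h, w = mat.shape
    return mat.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv_same(mat, kernel):
    """'Same'-size 2-D convolution via zero padding, so the output keeps the input's dimension."""
    kh, kw = kernel.shape
    padded = np.pad(mat, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(mat)
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 8x8 feature map standing in for the pooled 1-D extended vector.
feat = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))
while feat.shape != (1, 1):          # repeat conv + ReLU + pooling until 1x1
    feat = relu(conv_same(feat, kernel))
    feat = max_pool(feat)
# feat.shape is now (1, 1); its element would feed the target vector.
```

Each pooling halves both dimensions, so the 8 × 8 map shrinks to 4 × 4, 2 × 2, and finally the 1 × 1 stopping condition the claim checks for.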
12. The device according to claim 10, wherein the correspondence between word segments and distributed feature vectors is obtained by an unsupervised training module, which applies an unsupervised training method to the video descriptions, the advertisement descriptions, and an external corpus.
13. The device according to claim 12, wherein the unsupervised training module comprises:
A text description obtaining submodule, configured to obtain a passage of text from a video description, an advertisement description, or the external corpus;
A text description word segmentation submodule, configured to perform word segmentation on the text description, obtaining N description word segments;
A feature vector mapping submodule, configured to map the N description word segments to N one-dimensional continuous feature vectors of length m;
A predicted vector obtaining submodule, configured to compute a weighted average of the first N-1 one-dimensional continuous feature vectors of length m, obtaining a predicted vector;
A word segment prediction error rate judgment submodule, configured to judge, when the predicted vector is used to predict the N-th description word segment, whether the prediction error rate between the predicted word segment and the actual N-th description word segment is below a preset prediction threshold:
If so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description word segments; if not, the N one-dimensional continuous feature vectors are adjusted with a back-propagation algorithm, obtaining N new one-dimensional continuous feature vectors of length m, and the predicted vector obtaining submodule and the word segment prediction error rate judgment submodule are triggered in sequence.
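The loop of claim 13 can be sketched as a CBOW-style toy in NumPy. The five-word vocabulary, vector length m = 4, learning rate, and output projection are all hypothetical; a softmax over the vocabulary and a single gradient step stand in for the claim's word-segment prediction and back-propagation adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy vocabulary and sizes; the patent fixes neither (hypothetical values).
vocab = ["funny", "cat", "video", "clip", "ad"]
m = 4                                            # length of each continuous vector
E = rng.standard_normal((len(vocab), m)) * 0.1   # one length-m vector per segment
W = rng.standard_normal((m, len(vocab))) * 0.1   # output projection (hypothetical)

context = [0, 1, 2]   # the first N-1 description segments
target = 3           # the N-th segment to be predicted

def forward():
    pred = E[context].mean(axis=0)   # (equal-)weighted average -> predicted vector
    logits = pred @ W
    p = np.exp(logits - logits.max())
    return pred, p / p.sum()         # softmax distribution over the vocabulary

for _ in range(500):
    pred, probs = forward()
    if probs.argmax() == target and probs[target] > 0.9:
        break                        # "prediction error below the preset threshold"
    grad = probs.copy()
    grad[target] -= 1.0              # d(cross-entropy)/d(logits)
    grad_pred = W @ grad             # gradient reaching the predicted vector
    W -= 0.5 * np.outer(pred, grad)  # gradient steps standing in for
    E[context] -= 0.5 * grad_pred / len(context)  # full back-propagation

pred, probs = forward()              # rows of E are now the distributed vectors
```

After convergence, the rows of `E` play the role of the claim's N distributed feature vectors.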
14. The device according to claim 10, wherein the multilayer convolutional neural network is trained by a neural network training module according to the lift of the advertisement click-through rate;
The neural network training module comprises:
A sample input submodule, configured to obtain, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of a sample video-advertisement pair, together with the advertisement click-through rate lift L of that pair;
A sample video and advertisement description word segmentation submodule, configured to perform word segmentation on the sample video description and the sample advertisement description according to the preset rule, obtaining sample video description word segments and sample advertisement description word segments;
A sample distributed feature vector obtaining submodule, configured to perform distributed feature training on the sample video description word segments and the sample advertisement description word segments, obtaining the sample distributed feature vectors of each, which serve as the input of the multilayer convolutional neural network;
A sample video-advertisement matching degree prediction submodule, configured to train the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video description word segments and the sample advertisement description word segments, obtaining a sample matching value L' between the sample video description and the sample advertisement description;
A sample matching degree judgment submodule, configured to judge whether the error between L' and L is below a preset sample training error threshold:
If so, the training of the multilayer convolutional neural network model ends, and the neuron weights ω in each neural network layer are thereby determined; if not, the neuron weights ω in each layer are adjusted with a back-propagation algorithm, and the submodules from the sample input submodule through the sample matching degree judgment submodule are triggered in sequence.
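A minimal sketch of this stopping rule, assuming the network's output reduces to a single sigmoid neuron over a target vector x (as in claim 18) and applying the gradient step only to that final layer rather than back-propagating through every layer. The vector x, the observed lift L, and the learning rate are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-ins: x plays the role of the network's final target vector, and the
# observed CTR lift L of the sample pair is the training signal.
x = rng.standard_normal(6)
L = 0.8
w = np.zeros(6)          # neuron weights of the final layer
threshold = 0.01         # preset sample training error threshold

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    L_pred = sigmoid(w @ x)   # sample matching value L'
    err = L_pred - L
    if abs(err) < threshold:  # error below threshold: training ends
        break
    # Gradient of the squared error through the sigmoid, applied to this one
    # layer in place of full back-propagation through every layer.
    w -= 0.5 * err * L_pred * (1.0 - L_pred) * x

matching = sigmoid(w @ x)     # now within `threshold` of L
```

Each pass mirrors one round of the claim's loop: predict L', compare against L, and either stop or adjust the weights and go again.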
15. The device according to claim 14, wherein the sample video-advertisement matching degree prediction submodule comprises:
A sample distributed feature one-dimensional extension unit, configured to perform, at the one-dimensional convolutional neural network layer of the multilayer convolutional neural network, a one-dimensional convolution operation on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining the sample distributed-feature one-dimensional extended vectors of those word segments, and to output them to the sample first max-pooling layer;
A sample first max-pooling layer processing unit, configured to compress, via a down-sampling algorithm, the sample distributed-feature one-dimensional extended vector input to the sample first max-pooling layer, obtaining a first pooling layer sample two-dimensional vector, and to output it to the sample first two-dimensional convolutional neural network layer;
A sample first two-dimensional convolutional neural network layer processing unit, configured to apply a two-dimensional convolution operation to the first pooling layer sample two-dimensional vector input to the sample first two-dimensional convolutional neural network layer, obtaining multiple sample two-dimensional vectors of the same dimension as the input vector; to apply an activation function to each element of those vectors, obtaining an equal number of computed sample two-dimensional vectors; and to output them to the next connected sample intermediate max-pooling layer;
A sample next intermediate max-pooling layer processing unit, configured to compress, via a down-sampling algorithm, the computed sample two-dimensional vectors input to the sample next intermediate max-pooling layer, obtaining sample intermediate max-pooling layer two-dimensional vectors, and to output them to the next connected sample intermediate two-dimensional convolutional neural network layer;
A sample next intermediate two-dimensional convolutional neural network layer processing unit, configured to apply a two-dimensional convolution operation to the sample intermediate max-pooling layer two-dimensional vectors input to the sample next intermediate two-dimensional convolutional neural network layer, obtaining multiple sample intermediate two-dimensional vectors of the same dimension as the input; to apply an activation function to each element of those vectors, obtaining an equal number of computed sample two-dimensional vectors; and to judge whether the computed sample two-dimensional vectors are 1 × 1 vectors: if so, the sample target vector generation unit is triggered; otherwise, the vectors are output to the next connected sample intermediate max-pooling layer, and the sample next intermediate max-pooling layer processing unit and the sample next intermediate two-dimensional convolutional neural network layer processing unit are triggered in sequence;
A sample target vector generation unit, configured to generate a one-dimensional sample target vector from the elements of all the vectors obtained;
A sample matching value obtaining unit, configured to apply a preset algorithm to the obtained sample target vector, obtaining the sample matching value L' between the sample video description and the sample advertisement description.
16. The device according to claim 10, wherein the lift of the advertisement click-through rate is:
lift = ctr(ad, video) / max(ctr_ad, ctr_video)
where ctr(ad, video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement across all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
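Under this definition the lift compares an ad's performance on a specific video against the stronger of its two baselines; a one-line sketch with hypothetical rates:

```python
def ctr_lift(ctr_ad_video, ctr_ad, ctr_video):
    """lift = ctr(ad, video) / max(ctr_ad, ctr_video)."""
    return ctr_ad_video / max(ctr_ad, ctr_video)

# Hypothetical rates: the ad gets 6% CTR on this video versus baselines of
# 2% (this ad across all videos) and 3% (all ads on this video), so the
# lift is roughly 2: the pairing performs twice as well as its best baseline.
lift = ctr_lift(0.06, 0.02, 0.03)
```

A lift above 1 thus marks a video-advertisement pairing that outperforms both baselines.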
17. The device according to claim 11 or 15, wherein the activation function is the ReLU function:
y_j = max(0, Σ_{i=1..n} ω_ij · x_i)
where x_i is an input of the two-dimensional convolutional neural network layer, y_j is an output of the layer, and ω_ij, an element of the neuron weight vector of the layer in which the activation function resides, is the weight connecting input i and output j.
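A small sketch of this ReLU layer; the input values and the 3-input, 2-output weight matrix are hypothetical:

```python
import numpy as np

def relu_layer(x, W):
    """y_j = max(0, sum_i w_ij * x_i) for every output unit j."""
    return np.maximum(0.0, x @ W)

# Toy input and weights (hypothetical values).
x = np.array([1.0, -2.0, 0.5])
W = np.array([[ 1.0, -1.0 ],
              [ 0.5, -0.25],
              [-2.0,  2.0 ]])
y = relu_layer(x, W)   # raw weighted sums are [-1.0, 0.5]; ReLU clamps the negative one to 0
```

The negative weighted sum is zeroed while the positive one passes through unchanged, which is all the ReLU does.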
18. The device according to claim 11 or 15, wherein the preset algorithm is:
y = 1 / (1 + e^(-ω · x))
where x is the input vector, y is the video-advertisement matching value, and ω is the neuron weight vector in each neural network layer; ω and x have the same dimension.
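A sketch of this sigmoid matching formula; the weight vector and target vector below are hypothetical:

```python
import math

def matching_value(w, x):
    """y = 1 / (1 + exp(-(w . x))); w and x must share the same dimension."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights and target vector; here w . x = 0, so y = 0.5,
# the midpoint of the (0, 1) range the matching value is squashed into.
y = matching_value([0.5, -0.25, 1.0], [2.0, 4.0, 0.0])
```

Because the sigmoid maps any dot product into (0, 1), the matching value can be compared directly against the preset matching degree threshold.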
CN201510338003.0A 2015-06-17 2015-06-17 Method and device for video matching advertisement Active CN104992347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510338003.0A CN104992347B (en) 2015-06-17 2015-06-17 Method and device for video matching advertisement


Publications (2)

Publication Number Publication Date
CN104992347A true CN104992347A (en) 2015-10-21
CN104992347B CN104992347B (en) 2018-12-14

Family

ID=54304156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510338003.0A Active CN104992347B (en) 2015-06-17 Method and device for video matching advertisement

Country Status (1)

Country Link
CN (1) CN104992347B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102591854A (en) * 2012-01-10 2012-07-18 凤凰在线(北京)信息技术有限公司 Advertisement filtering system and advertisement filtering method specific to text characteristics
CN102708498A (en) * 2012-01-13 2012-10-03 合一网络技术(北京)有限公司 Theme orientation based advertising method
CN103164454A (en) * 2011-12-15 2013-06-19 百度在线网络技术(北京)有限公司 Keyword grouping method and keyword grouping system
CN103617230A (en) * 2013-11-26 2014-03-05 中国科学院深圳先进技术研究院 Method and system for advertisement recommendation based microblog
CN104636487A (en) * 2015-02-26 2015-05-20 湖北光谷天下传媒股份有限公司 Advertising information management method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈先昌: "基于卷积神经网络的深度学习算法与应用研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107609888A (en) * 2016-07-11 2018-01-19 百度(美国)有限责任公司 System and method for click rate prediction between query and bid terms
CN107609888B (en) * 2016-07-11 2021-08-27 百度(美国)有限责任公司 System and method for click rate prediction between query and bid terms
CN106227793A (en) * 2016-07-20 2016-12-14 合网络技术(北京)有限公司 Method and device for determining the degree of correlation between a video and video keywords
CN106227793B (en) * 2016-07-20 2019-10-22 优酷网络技术(北京)有限公司 Method and device for determining the degree of correlation between a video and video keywords
CN106355446A (en) * 2016-08-31 2017-01-25 镇江乐游网络科技有限公司 Online and mobile game advertising recommending system
CN106355446B (en) * 2016-08-31 2019-11-05 镇江乐游网络科技有限公司 A kind of advertisement recommender system of network and mobile phone games
US10839226B2 (en) 2016-11-10 2020-11-17 International Business Machines Corporation Neural network training
CN106649603B (en) * 2016-11-25 2020-11-10 北京资采信息技术有限公司 Designated information pushing method based on emotion classification of webpage text data
CN106649603A (en) * 2016-11-25 2017-05-10 北京资采信息技术有限公司 Webpage text data sentiment classification designated information push method
CN106682108A (en) * 2016-12-06 2017-05-17 浙江大学 Video retrieval method based on multi-modal convolutional neural network
CN106682108B (en) * 2016-12-06 2022-07-12 浙江大学 Video retrieval method based on multi-mode convolutional neural network
CN106792003A (en) * 2016-12-27 2017-05-31 西安石油大学 A kind of intelligent advertisement inserting method, device and server
CN106792003B (en) * 2016-12-27 2020-04-14 西安石油大学 Intelligent advertisement insertion method and device and server
CN107172448A (en) * 2017-06-19 2017-09-15 环球智达科技(北京)有限公司 The method for managing video and audio
CN110637460A (en) * 2017-07-11 2019-12-31 索尼公司 Visual quality preserving quantitative parameter prediction using deep neural networks
CN110637460B (en) * 2017-07-11 2021-09-28 索尼公司 Visual quality preserving quantitative parameter prediction using deep neural networks
CN109391829A (en) * 2017-08-09 2019-02-26 创意引晴(开曼)控股有限公司 Video gets position analysis system, analysis method and storage media ready
CN107507046A (en) * 2017-10-13 2017-12-22 北京奇艺世纪科技有限公司 The method and system that advertisement is recalled
CN107730002A (en) * 2017-10-13 2018-02-23 国网湖南省电力公司 A kind of communication network shutdown remote control parameter intelligent fuzzy comparison method
CN107730002B (en) * 2017-10-13 2020-06-02 国网湖南省电力公司 Intelligent fuzzy comparison method for remote control parameters of communication gateway machine
CN108805611A (en) * 2018-05-21 2018-11-13 北京小米移动软件有限公司 Advertisement screening technique and device
CN111093101A (en) * 2018-10-23 2020-05-01 腾讯科技(深圳)有限公司 Media file delivery method and device, storage medium and electronic device
CN110278466B (en) * 2019-06-06 2021-08-06 浙江口碑网络技术有限公司 Short video advertisement putting method, device and equipment
CN110278466A (en) * 2019-06-06 2019-09-24 浙江口碑网络技术有限公司 Put-on method, device and the equipment of short video ads

Also Published As

Publication number Publication date
CN104992347B (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN104992347A (en) Video matching advertisement method and device
CN103778548B (en) Merchandise news and key word matching method, merchandise news put-on method and device
CN104462593B (en) A kind of method and apparatus that the push of user individual message related to resources is provided
CN109522556A (en) A kind of intension recognizing method and device
US11782999B2 (en) Method for training fusion ordering model, search ordering method, electronic device and storage medium
CN105335519A (en) Model generation method and device as well as recommendation method and device
CN108288067A (en) Training method, bidirectional research method and the relevant apparatus of image text Matching Model
CN106649774A (en) Artificial intelligence-based object pushing method and apparatus
CN108133013A (en) Information processing method, device, computer equipment and storage medium
CN104866969A (en) Personal credit data processing method and device
CN105023165A (en) Method, device and system for controlling release tasks in social networking platform
CN106407477A (en) Multidimensional interconnection recommendation method and system
CN108665064A (en) Neural network model training, object recommendation method and device
CN106445954B (en) Business object display method and device
CN109766557A (en) A kind of sentiment analysis method, apparatus, storage medium and terminal device
CN109636430A (en) Object identifying method and its system
CN111199474A (en) Risk prediction method and device based on network diagram data of two parties and electronic equipment
CN109741098A (en) Broadband off-network prediction technique, equipment and storage medium
CN111798280B (en) Multimedia information recommendation method, device and equipment and storage medium
CN114036398B (en) Content recommendation and ranking model training method, device, equipment and storage medium
CN108256098A (en) A kind of method and device of determining user comment Sentiment orientation
CN105740415A (en) Label position weight and self-learning based tendering and bidding good friend recommendation system
CN114519435A (en) Model parameter updating method, model parameter updating device and electronic equipment
CN103186604A (en) Method, device and equipment for determining satisfaction degree of user on search result
CN108596765A (en) A kind of Electronic Finance resource recommendation method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant