CN104992347B - A kind of method and device of video matching advertisement - Google Patents
- Publication number
- CN104992347B (application CN201510338003.0A)
- Authority
- CN
- China
- Prior art keywords
- sample
- video
- word segmentation
- advertisement
- vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
An embodiment of the present invention provides a method and device for matching advertisements to videos. The method includes: obtaining the video description of a video to be matched with an advertisement, and obtaining the advertisement description of a candidate advertisement from an advertisement database; performing word segmentation on the video description and the advertisement description; inputting the video-description tokens and advertisement-description tokens into a pre-established video-advertisement matching-degree prediction model; obtaining the distributed feature vectors of the video-description tokens and advertisement-description tokens; inputting these distributed feature vectors into the multilayer convolutional neural network in the model, to obtain the matching value between the video and the candidate advertisement; and, if the matching value is greater than a preset matching-degree threshold, determining that the video matches the candidate advertisement. Embodiments of the present invention avoid the defect of being unable to match relevant advertisements to a video, and improve the recall rate of video-advertisement matching.
Description
Technical field
The present invention relates to the fields of Internet and pattern recognition technology, and in particular to a method and device for matching advertisements to videos.
Background art
Currently, online advertising has grown rapidly as one of the most profitable business models on the Internet. In the advertisement delivery process, matched advertisements need to be delivered to users for different videos; that is, advertisements are delivered to users in a personalized way for different videos, which can significantly improve the economic benefit to merchants.
In the prior art, the approach used to match advertisements to videos is semantic matching based on word overlap between the advertisement description and the video description.
However, when editing a video, a video website editor usually only adds a description of the program content of the video, whereas most advertisement descriptions focus on the product information represented in the advertisement. When there is no word overlap between the video description and a relevant advertisement description, that advertisement will not be delivered for the video. For example, if the video description of video A contains "iPhone" while the description of advertisement a contains "iphone" but not "iPhone", video A will not be matched with advertisement a. As a result, a very large number of videos cannot be matched with relevant advertisements, and the recall rate (or recall ratio) of video-advertisement matching is low.
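The prior-art word-overlap matching, and the case-sensitivity failure described above, can be sketched as follows (the token lists are illustrative examples, not data from the patent):

```python
def overlap_match(video_tokens, ad_tokens):
    # Prior-art semantic matching: the video and the advertisement are matched
    # when their descriptions share at least one identical word.
    return len(set(video_tokens) & set(ad_tokens)) > 0

# The failure case from the text: "iPhone" in the video description does not
# overlap with "iphone" in the advertisement description, so no ad is delivered.
assert overlap_match(["iPhone", "unboxing"], ["iPhone", "case"])
assert not overlap_match(["iPhone", "unboxing"], ["iphone", "case"])
```

Exact string overlap treats near-identical terms as unrelated, which is the defect the embodiments below address.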
Summary of the invention
The purpose of embodiments of the present invention is to provide a method and device for matching advertisements to videos, so as to improve the recall rate of video-advertisement matching.
To achieve the above purpose, an embodiment of the present invention discloses a method for matching advertisements to videos, the method comprising:
obtaining the video description of a video to be matched with an advertisement, and obtaining the advertisement description of a candidate advertisement from an advertisement database;
performing word segmentation on the video description and the advertisement description according to preset rules, to obtain video-description tokens and advertisement-description tokens;
inputting the video-description tokens and the advertisement-description tokens into a pre-established video-advertisement matching-degree prediction model;
in the video-advertisement matching-degree prediction model, obtaining the distributed feature vectors of the video-description tokens and the advertisement-description tokens according to a correspondence between tokens and distributed feature vectors, where the correspondence is obtained by training on video descriptions, advertisement descriptions, and an external corpus;
in the video-advertisement matching-degree prediction model, inputting the distributed feature vectors of the video-description tokens and the advertisement-description tokens into a multilayer convolutional neural network in the model, to obtain the matching value between the video to be matched and the candidate advertisement, where the multilayer convolutional neural network is obtained by training according to the promotion degree (lift) of the advertisement click-through rate;
if the matching value is greater than a preset matching-degree threshold, determining that the video to be matched with an advertisement matches the candidate advertisement.
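The claimed flow can be sketched end to end. In this minimal sketch, cosine similarity of the averaged distributed feature vectors stands in for the multilayer convolutional network, and all names and the threshold value are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def predict_match(video_vecs, ad_vecs):
    # Stand-in for the matching-degree prediction model: cosine similarity of
    # the averaged distributed feature vectors. The patent instead feeds the
    # vectors through a multilayer convolutional neural network.
    v = np.mean(video_vecs, axis=0)
    a = np.mean(ad_vecs, axis=0)
    return float(v @ a / (np.linalg.norm(v) * np.linalg.norm(a)))

def match_ads(video_vecs, candidate_ads, threshold=0.5):
    # candidate_ads: {ad_id: distributed feature vectors of the ad description}.
    # An ad matches when its matching value exceeds the preset threshold.
    return [ad_id for ad_id, ad_vecs in candidate_ads.items()
            if predict_match(video_vecs, ad_vecs) > threshold]
```

Because the comparison happens in the feature-vector space rather than on raw words, "iPhone" and "iphone" can still land near each other if their learned vectors are similar.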
Preferably, the step of inputting the distributed feature vectors of the video-description tokens and the advertisement-description tokens into the multilayer convolutional neural network in the model, to obtain the matching value between the video and the candidate advertisement, comprises:
I1: the one-dimensional convolutional layer in the multilayer convolutional neural network performs a one-dimensional convolution on the input distributed feature vectors of the video-description tokens and the advertisement-description tokens, to obtain a one-dimensionally expanded distributed feature vector of the tokens, which is output to the first max-pooling layer;
I2: the first max-pooling layer compresses the input one-dimensionally expanded distributed feature vector by a down-sampling algorithm, to obtain a first max-pooling-layer two-dimensional vector, which is output to the first two-dimensional convolutional layer;
I3: the first two-dimensional convolutional layer applies a two-dimensional convolution to the input first max-pooling-layer two-dimensional vector, to obtain multiple two-dimensional vectors with the same dimension as the input vector; an activation function is applied to each element of these two-dimensional vectors, yielding the same number of computed two-dimensional vectors, which are output to the next intermediate max-pooling layer connected to it;
I4: the next intermediate max-pooling layer compresses the input computed two-dimensional vectors by a down-sampling algorithm, to obtain intermediate max-pooling-layer two-dimensional vectors, which are output to the next intermediate two-dimensional convolutional layer connected to it;
I5: the next intermediate two-dimensional convolutional layer applies a two-dimensional convolution to the input intermediate max-pooling-layer two-dimensional vectors, to obtain multiple intermediate two-dimensional vectors with the same dimension as the input vector; an activation function is applied to each element of these vectors, yielding the same number of computed two-dimensional vectors; if the computed two-dimensional vectors are 1 × 1 vectors, step I6 is executed; otherwise they are output to the next intermediate max-pooling layer connected to it, returning to step I4;
I6: generating a one-dimensional target vector from the elements of all obtained vectors;
I7: applying a preset algorithm to the obtained target vector, to obtain the matching value between the video description and the advertisement description.
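Steps I1–I7 can be sketched in miniature. This toy NumPy forward pass uses a single shared 3×3 kernel, zero padding, ReLU, and 2×2 max pooling repeated until the map is 1×1, then scores the flattened target vector with an inner product; the kernel sharing, map size, and scoring weight are illustrative assumptions, not the patent's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def max_pool_2x2(x):
    # Down-sampling by taking the maximum over non-overlapping 2x2 blocks.
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv2d_same(x, k):
    # Naive two-dimensional convolution (cross-correlation) with zero padding,
    # so the output has the same size as the input.
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def matching_value(feature_map, kernel, w):
    # Steps I3-I5: convolution + activation, then max pooling, repeated until
    # the two-dimensional vector is 1x1. Steps I6-I7: flatten to a target
    # vector and score it (an inner product stands in for the preset algorithm).
    x = feature_map
    while x.shape[0] > 1 or x.shape[1] > 1:
        x = max_pool_2x2(relu(conv2d_same(x, kernel)))
    return float(w @ x.ravel())
```

With an 8×8 input map the loop runs three times (8 → 4 → 2 → 1), mirroring the "repeat until 1 × 1" condition of step I5.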
Preferably, the correspondence between tokens and distributed feature vectors is obtained by unsupervised training on video descriptions, advertisement descriptions, and an external corpus.
Preferably, the unsupervised training process comprises:
a: obtaining a passage of text from the video descriptions, the advertisement descriptions, or the external corpus;
b: performing word segmentation on the text, to obtain N tokens;
c: mapping the N tokens to N one-dimensional continuous feature vectors of length m;
d: performing a weighted-average operation on the first N-1 one-dimensional continuous feature vectors, to obtain a prediction vector;
e: when predicting the N-th token with the prediction vector, judging whether the prediction error rate between the predicted token and the actual N-th token is lower than a preset prediction threshold:
if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N tokens; if not, the N one-dimensional continuous feature vectors are adjusted by a back-propagation algorithm to obtain N new one-dimensional continuous feature vectors of length m, and steps d and e are executed again.
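Steps a–e resemble a continuous-bag-of-words word-embedding scheme: average the context vectors, predict the held-out token, back-propagate until the prediction succeeds. A minimal softmax sketch under that assumption (the patent does not name word2vec; the vector length, learning rate, and output matrix are illustrative):

```python
import numpy as np

def train_distributed_features(token_ids, vocab_size, m=8, lr=0.1, epochs=300):
    # Step c: map each token to a one-dimensional continuous feature vector of
    # length m. Steps d-e: average the first N-1 vectors into a prediction
    # vector, predict the N-th token, and back-propagate until it is predicted.
    rng = np.random.default_rng(0)
    emb = rng.normal(0.0, 0.1, (vocab_size, m))   # the distributed feature vectors
    out = rng.normal(0.0, 0.1, (vocab_size, m))   # softmax output weights (assumed)
    ctx, target = token_ids[:-1], token_ids[-1]
    for _ in range(epochs):
        h = emb[ctx].mean(axis=0)                 # step d: prediction vector
        scores = out @ h
        p = np.exp(scores - scores.max())
        p /= p.sum()
        grad = p.copy()
        grad[target] -= 1.0                       # softmax cross-entropy gradient
        dh = out.T @ grad
        out -= lr * np.outer(grad, h)
        emb[ctx] -= lr * dh / len(ctx)            # step e: adjust the N-1 vectors
    return emb, out
```

After training, averaging the context embeddings and taking the highest-scoring token recovers the held-out N-th token, which is the stopping condition of step e.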
Preferably, the step of training the multilayer convolutional neural network according to the lift of the advertisement click-through rate comprises:
f: obtaining, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, and the advertisement click-through-rate lift L of that pair;
g: performing word segmentation on the sample video description and the sample advertisement description according to the preset rules, to obtain sample video-description tokens and sample advertisement-description tokens;
h: performing distributed feature training on the sample video-description tokens and the sample advertisement-description tokens, to obtain the sample distributed feature vectors of each, which are the input of the multilayer convolutional neural network;
i: the multilayer convolutional neural network is trained on the input sample distributed feature vectors of the sample video-description tokens and sample advertisement-description tokens, to obtain the sample matching value L' of the sample video description and the sample advertisement description;
j: judging whether the error between L' and L is lower than a preset sample-training error threshold:
if so, the training of the multilayer convolutional neural network model ends, and the weight ω of the neurons in each layer of the network is determined; if not, the weights ω of the neurons in each layer are adjusted by a back-propagation algorithm, and steps f to j are executed again.
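Steps f–j form a standard supervised regression loop on the lift targets. A sketch with a single linear layer standing in for the multilayer network (the weight vector, learning rate, and threshold are illustrative assumptions, not the patent's values):

```python
import numpy as np

def train_matcher(samples, dim, lr=0.05, tol=1e-3, max_iter=5000):
    # Steps f-j, sketched with one linear layer in place of the multilayer
    # convolutional network: w plays the role of the neuron weights and is
    # adjusted by gradient descent (the patent back-propagates through all
    # layers) until |L' - L| is below the error threshold for every sample.
    w = np.zeros(dim)
    for _ in range(max_iter):
        worst = 0.0
        for x, lift in samples:          # step f: one sample pair with lift L
            pred = w @ x                 # step i: forward pass yields L'
            err = pred - lift            # step j: compare L' with L
            worst = max(worst, abs(err))
            w -= lr * err * x            # back-propagation step for one layer
        if worst < tol:                  # training ends at the error threshold
            return w
    return w
```

On linearly realizable samples the loop converges geometrically, so the stopping criterion of step j is reached long before the iteration cap.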
Preferably, step i comprises:
i1: the one-dimensional convolutional layer in the multilayer convolutional neural network performs a one-dimensional convolution on the input sample distributed feature vectors of the sample video-description tokens and sample advertisement-description tokens, to obtain a one-dimensionally expanded sample distributed feature vector, which is output to the first max-pooling layer;
i2: the first max-pooling layer compresses the input one-dimensionally expanded sample distributed feature vector by a down-sampling algorithm, to obtain a first max-pooling-layer sample two-dimensional vector, which is output to the first two-dimensional convolutional layer;
i3: the first two-dimensional convolutional layer applies a two-dimensional convolution to the input first max-pooling-layer sample two-dimensional vector, to obtain multiple sample two-dimensional vectors with the same dimension as the input vector; an activation function is applied to each element of these vectors, yielding the same number of computed sample two-dimensional vectors, which are output to the next intermediate max-pooling layer connected to it;
i4: the next intermediate max-pooling layer compresses the input computed sample two-dimensional vectors by a down-sampling algorithm, to obtain intermediate max-pooling-layer sample two-dimensional vectors, which are output to the next intermediate two-dimensional convolutional layer connected to it;
i5: the next intermediate two-dimensional convolutional layer applies a two-dimensional convolution to the input intermediate max-pooling-layer sample two-dimensional vectors, to obtain multiple intermediate sample two-dimensional vectors with the same dimension as the input vector; an activation function is applied to each element of these vectors, yielding the same number of computed sample two-dimensional vectors; if these are 1 × 1 vectors, step i6 is executed; otherwise they are output to the next intermediate max-pooling layer connected to it, returning to step i4;
i6: generating a one-dimensional sample target vector from the elements of all obtained vectors;
i7: applying a preset algorithm to the obtained sample target vector, to obtain the sample matching value L' of the sample video description and the sample advertisement description.
Preferably, the promotion degree (lift) of the advertisement click-through rate is as follows:
where ctr_(ad,video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
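The lift formula itself is given as an image in the original and is not reproduced in the extracted text. A common definition consistent with the three quantities described (an assumption, not necessarily the patent's exact expression) is the ratio of the joint click-through rate to the product of the two marginal rates:

```python
def ctr_lift(ctr_ad_video, ctr_ad, ctr_video):
    # ctr_ad_video: CTR of the target ad on the target video
    # ctr_ad:       average CTR of the target ad over all videos
    # ctr_video:    average CTR of all ads on the target video
    # Association-rule-style lift; a stand-in for the patent's missing formula.
    return ctr_ad_video / (ctr_ad * ctr_video)
```

Under this reading, a lift above 1 means the ad performs better on this video than its marginal rates would predict, which is a sensible supervision signal for the matching network.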
Preferably, the activation function is the ReLU function:
y_j = max(0, Σ_i ω_ij · x_i)
where x_i is an input of the two-dimensional convolutional layer, y_j is an output of the two-dimensional convolutional layer, and ω_ij is the element of the neuron weight vector, in the layer of the neural network where the activation function is located, that connects input i and output j.
Preferably, in the preset algorithm, x is the input vector, y is the video-advertisement matching value, and ω is the weight vector of the neurons in each layer of the neural network; ω and x have the same dimension.
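The preset algorithm's formula is also an image in the original. The symbol description — a scalar matching value y produced from ω and x of the same dimension — is consistent with an inner product, sketched here under that assumption:

```python
import numpy as np

def preset_algorithm(x, w):
    # x: the one-dimensional target vector from step I6
    # w: neuron weight vector, same dimension as x
    # Returns the scalar video-advertisement matching value y
    # (inner product assumed; the patent's exact formula is not reproduced).
    assert x.shape == w.shape
    return float(w @ x)
```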
To achieve the above purpose, an embodiment of the present invention discloses a device for matching advertisements to videos, the device comprising:
a video/advertisement description obtaining module, configured to obtain the video description of a video to be matched with an advertisement, and to obtain the advertisement description of a candidate advertisement from an advertisement database;
a video/advertisement description word-segmentation module, configured to perform word segmentation on the video description and the advertisement description according to preset rules, to obtain video-description tokens and advertisement-description tokens;
a video-advertisement matching-degree prediction model input module, configured to input the video-description tokens and the advertisement-description tokens into a pre-established video-advertisement matching-degree prediction model;
a video/advertisement distributed feature obtaining module, configured to obtain, in the video-advertisement matching-degree prediction model, the distributed feature vectors of the video-description tokens and the advertisement-description tokens according to a correspondence between tokens and distributed feature vectors, where the correspondence is obtained by training on video descriptions, advertisement descriptions, and an external corpus;
a video-advertisement matching-degree prediction model output module, configured to input, in the video-advertisement matching-degree prediction model, the distributed feature vectors of the video-description tokens and the advertisement-description tokens into the multilayer convolutional neural network in the model, to obtain the matching value between the video to be matched and the candidate advertisement, where the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate;
a video-advertisement matching judgment module, configured to determine that the video to be matched with an advertisement matches the candidate advertisement if the matching value is greater than a preset matching-degree threshold.
Preferably, the video-advertisement matching-degree prediction model output module comprises:
a distributed feature vector input submodule, configured to input the distributed feature vectors of the video-description tokens and the advertisement-description tokens into the one-dimensional convolutional layer of the multilayer convolutional neural network in the model;
a one-dimensional convolutional layer processing submodule, configured to perform a one-dimensional convolution on the distributed feature vectors input to the one-dimensional convolutional layer, to obtain a one-dimensionally expanded distributed feature vector of the video-description tokens and advertisement-description tokens, which is output to the first max-pooling layer;
a first max-pooling layer processing submodule, configured to compress the one-dimensionally expanded distributed feature vector input to the first max-pooling layer by a down-sampling algorithm, to obtain a first max-pooling-layer two-dimensional vector, which is output to the first two-dimensional convolutional layer;
a first two-dimensional convolutional layer processing submodule, configured to apply a two-dimensional convolution to the first max-pooling-layer two-dimensional vector input to the first two-dimensional convolutional layer, to obtain multiple two-dimensional vectors with the same dimension as the input vector, and to apply an activation function to each element of these vectors, yielding the same number of computed two-dimensional vectors, which are output to the next intermediate max-pooling layer connected to it;
a next-intermediate max-pooling layer processing submodule, configured to compress the computed two-dimensional vectors input to the next intermediate max-pooling layer by a down-sampling algorithm, to obtain intermediate max-pooling-layer two-dimensional vectors, which are output to the next intermediate two-dimensional convolutional layer connected to it;
a next-intermediate two-dimensional convolutional layer processing submodule, configured to apply a two-dimensional convolution to the intermediate max-pooling-layer two-dimensional vectors input to the next intermediate two-dimensional convolutional layer, to obtain multiple intermediate two-dimensional vectors with the same dimension as the input vector, and to apply an activation function to each element of these vectors, yielding the same number of computed two-dimensional vectors; if these are 1 × 1 vectors, the target vector generation submodule is triggered; otherwise they are output to the next intermediate max-pooling layer connected to it, and the next-intermediate max-pooling layer processing submodule and the next-intermediate two-dimensional convolutional layer processing submodule are triggered in sequence;
a target vector generation submodule, configured to generate a one-dimensional target vector from the elements of all obtained vectors;
a matching value obtaining submodule, configured to apply a preset algorithm to the obtained target vector, to obtain the matching value between the video description and the advertisement description.
Preferably, the correspondence between tokens and distributed feature vectors is obtained by an unsupervised training module through unsupervised training on video descriptions, advertisement descriptions, and an external corpus.
Preferably, the unsupervised training module comprises:
a text obtaining submodule, configured to obtain a passage of text from the video descriptions, the advertisement descriptions, or the external corpus;
a text word-segmentation submodule, configured to perform word segmentation on the text, to obtain N tokens;
a feature vector mapping submodule, configured to map the N tokens to N one-dimensional continuous feature vectors of length m;
a prediction vector obtaining submodule, configured to perform a weighted-average operation on the first N-1 one-dimensional continuous feature vectors, to obtain a prediction vector;
a token prediction error rate judging submodule, configured to judge, when predicting the N-th token with the prediction vector, whether the prediction error rate between the predicted token and the actual N-th token is lower than a preset prediction threshold: if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N tokens; if not, the N one-dimensional continuous feature vectors are adjusted by a back-propagation algorithm to obtain N new one-dimensional continuous feature vectors of length m, and the prediction vector obtaining submodule and the token prediction error rate judging submodule are triggered in sequence.
Preferably, the multilayer convolutional neural network is obtained by a neural network training module through training according to the lift of the advertisement click-through rate.
The neural network training module comprises:
a sample input submodule, configured to obtain, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, and the advertisement click-through-rate lift L of that pair;
a sample video/advertisement description word-segmentation submodule, configured to perform word segmentation on the sample video description and the sample advertisement description according to the preset rules, to obtain sample video-description tokens and sample advertisement-description tokens;
a sample distributed feature vector obtaining submodule, configured to perform distributed feature training on the sample video-description tokens and the sample advertisement-description tokens, to obtain the sample distributed feature vectors of each, which are the input of the multilayer convolutional neural network;
a sample video-advertisement matching-degree prediction submodule, configured to train the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video-description tokens and sample advertisement-description tokens, to obtain the sample matching value L' of the sample video description and the sample advertisement description;
a sample matching-degree judging submodule, configured to judge whether the error between L' and L is lower than a preset sample-training error threshold: if so, the training of the multilayer convolutional neural network model ends, and the weight ω of the neurons in each layer of the network is determined; if not, the weights ω of the neurons in each layer are adjusted by a back-propagation algorithm, and the submodules from the sample input submodule to the sample matching-degree judging submodule are triggered in sequence.
Preferably, the sample video-advertisement matching-degree prediction submodule comprises:
a sample distributed feature one-dimensional expansion unit, configured such that the one-dimensional convolutional layer in the multilayer convolutional neural network performs a one-dimensional convolution on the input sample distributed feature vectors of the sample video-description tokens and sample advertisement-description tokens, to obtain a one-dimensionally expanded sample distributed feature vector, which is output to the first sample max-pooling layer;
a first sample max-pooling layer processing unit, configured to compress the one-dimensionally expanded sample distributed feature vector input to the first sample max-pooling layer by a down-sampling algorithm, to obtain a first max-pooling-layer sample two-dimensional vector, which is output to the first sample two-dimensional convolutional layer;
a first sample two-dimensional convolutional layer processing unit, configured to apply a two-dimensional convolution to the first max-pooling-layer sample two-dimensional vector input to the first sample two-dimensional convolutional layer, to obtain multiple sample two-dimensional vectors with the same dimension as the input vector, and to apply an activation function to each element of these vectors, yielding the same number of computed sample two-dimensional vectors, which are output to the next intermediate sample max-pooling layer connected to it;
a next-intermediate sample max-pooling layer processing unit, configured to compress the computed sample two-dimensional vectors input to the next intermediate sample max-pooling layer by a down-sampling algorithm, to obtain intermediate max-pooling-layer sample two-dimensional vectors, which are output to the next intermediate sample two-dimensional convolutional layer connected to it;
a next-intermediate sample two-dimensional convolutional layer processing unit, configured to apply a two-dimensional convolution to the intermediate max-pooling-layer sample two-dimensional vectors input to the next intermediate sample two-dimensional convolutional layer, to obtain multiple intermediate sample two-dimensional vectors with the same dimension as the input vector, and to apply an activation function to each element of these vectors, yielding the same number of computed intermediate sample two-dimensional vectors; if these are 1 × 1 vectors, the sample target vector generation unit is triggered; otherwise they are output to the next intermediate sample max-pooling layer connected to it, and the next-intermediate sample max-pooling layer processing unit and the next-intermediate sample two-dimensional convolutional layer processing unit are triggered in sequence;
a sample target vector generation unit, configured to generate a one-dimensional sample target vector from the elements of all obtained vectors;
a sample matching value obtaining unit, configured to apply a preset algorithm to the obtained sample target vector, to obtain the sample matching value L' of the sample video description and the sample advertisement description.
Preferably, the promotion degree (lift) of the advertisement click-through rate is as follows:
where ctr_(ad,video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
Preferably, the activation function is the ReLU function:
y_j = max(0, Σ_i ω_ij · x_i)
where x_i is an input of the two-dimensional convolutional layer, y_j is an output of the two-dimensional convolutional layer, and ω_ij is the element of the neuron weight vector, in the layer of the neural network where the activation function is located, that connects input i and output j.
Preferably, in the preset algorithm, x is the input vector, y is the video-advertisement matching value, and ω is the weight vector of the neurons in each layer of the neural network; ω and x have the same dimension.
With the method and device for matching advertisements to videos provided by embodiments of the present invention, the distributed features of the video description of a video to be matched with an advertisement and of the advertisement descriptions in an advertisement database can be obtained; the obtained distributed feature vectors are input into a pre-established video-advertisement matching-degree prediction model, which outputs the matching value between the video and a candidate advertisement; if the matching value is greater than a preset matching-degree threshold, the video matches the candidate advertisement, and the candidate advertisement is then delivered to the video. In this way, the matching degree between the video and the candidate advertisement is judged at the level of the distributed features of the video description and the advertisement description, which avoids the defect of word-overlap matching — the inability to match relevant advertisements to a video — so that matching advertisements can be delivered to the corresponding videos more effectively, thereby improving the recall rate of video-advertisement matching.
Brief description of the drawings
In order to describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a method for matching advertisements to videos provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of step S105 in the embodiment shown in Fig. 1;
Fig. 3 is a schematic flowchart of the training of the correspondence between tokens and distributed feature vectors provided in the embodiment shown in Fig. 1;
Fig. 4 is a schematic flowchart of the training of the multilayer convolutional neural network provided in the embodiment shown in Fig. 1;
Fig. 5 is a detailed flowchart of step S404 in the embodiment shown in Fig. 4;
Fig. 6 is a schematic diagram of the training principle of the correspondence between tokens and distributed feature vectors in the embodiment shown in Fig. 3;
Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a device for matching advertisements to videos provided by an embodiment of the present invention.
Specific Embodiments
An embodiment of the present invention provides a method and device for matching advertisements to videos. The method comprises: obtaining the video description of the video awaiting advertisement matching, and obtaining the advertisement description of a candidate advertisement from an advertisement base; performing word segmentation on the video description and the advertisement description according to preset rules, obtaining video description participles and advertisement description participles; inputting the video description participles and advertisement description participles into a pre-established video-advertisement matching degree prediction model; the model obtaining the distributed feature vectors of the video description participles and advertisement description participles according to the correspondence between participles and distributed feature vectors, this correspondence being obtained by training on video descriptions, advertisement descriptions and an external corpus; the model inputting the distributed feature vectors of the video description participles and advertisement description participles into the multilayer convolutional neural network in the model, obtaining the matching value between the video awaiting advertisement matching and the candidate advertisement, the multilayer convolutional neural network being obtained by training according to the lift of the advertisement click-through rate; and, if the matching value is greater than a preset matching degree threshold, judging that the video awaiting advertisement matching and the candidate advertisement match.
It should be noted that the learning rule employed in the establishment of the video-advertisement matching degree prediction model is the lift of the advertisement click-through rate. The click-through-rate lift is used as the learning rule because there is currently no hand-labelled data matching videos to advertisements, so labelled training samples are lacking. In the present invention, the lift of the advertisement click-through rate is taken as an approximation of the matching degree between a video and an advertisement; that is, the larger the value of the click-through-rate lift, the higher the matching degree between the video and the advertisement, and conversely, the smaller the lift, the lower the matching degree.
The lift of the click-through rate is defined as follows:

L = ctr_(ad,video) / max(ctr_ad, ctr_video)

where ctr_(ad,video) is the click-through rate of the target advertisement on the target video, ctr_ad is the average click-through rate of the target advertisement over all videos, and ctr_video is the average click-through rate of all advertisements on the target video.
To illustrate the definition of click-through-rate lift: the maximum of ctr_ad and ctr_video is taken in the denominator in order to avoid false judgements caused by click-through-rate lift produced by malicious clicking, thereby guaranteeing accurate and stable results. For example, suppose a user clicks maliciously, so that ctr_(ad,video) is raised from 3×10³ to 3×10⁶ by the clicks, while ctr_ad is 3.4×10³ and ctr_video is also raised to 3.2×10⁶. The lift computed according to the definition above is then 0.9375. The click-through-rate lift is thus almost 1, showing that the click-through rate has not obviously improved; that is, the advertisement does not match the video awaiting advertisement matching, and in this way the influence of such manual manipulation is also avoided.
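The lift definition and the worked example above can be sketched directly in code; the function below is a minimal restatement of the formula, not the patent's implementation:

```python
def ctr_lift(ctr_ad_video, ctr_ad, ctr_video):
    """Click-through-rate lift: the CTR of the target ad on the target
    video, normalised by the larger of the ad's average CTR over all
    videos and the video's average CTR over all ads. Taking the max in
    the denominator damps lift inflated by malicious clicking."""
    return ctr_ad_video / max(ctr_ad, ctr_video)

# Worked example from the text: malicious clicks raise the pairwise CTR
# to 3e6, but they also raise the video's average CTR to 3.2e6, so the
# lift stays close to 1 and the pair is judged not to match.
lift = ctr_lift(3e6, 3.4e3, 3.2e6)
print(lift)  # 0.9375
```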
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
First, the method of matching advertisements to videos provided by an embodiment of the present invention is described in detail.
The method of the embodiment relies on a pre-established video-advertisement matching degree prediction model. The model comprises two parts: 1. the correspondence between participles and distributed feature vectors; 2. a multilayer convolutional neural network. The correspondence between participles and distributed feature vectors is obtained by training on video descriptions, advertisement descriptions and an external corpus; the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate.
The process of matching advertisements to videos and the process of establishing the video-advertisement matching degree prediction model are described in detail below in turn.
First, the process of matching advertisements to videos is described in detail.
Referring to Fig. 1, Fig. 1 is a schematic flow diagram of a method of matching advertisements to videos provided by an embodiment of the present invention.
Step S101: obtain the video description of the video awaiting advertisement matching, and obtain the advertisement description of a candidate advertisement from the advertisement base.
Understandably, the editors of a video website add a concise video description according to the content of a video programme; the video description usually involves information such as the programme title, programme type, host, director and featured performers. Likewise, the advertisement delivery business writes a simple advertisement description for each advertisement provided by an advertiser; in general, the advertisement description involves information such as the product name, product type and product spokesperson. Both the video description and the advertisement description are information that is readily available.
Step S102: according to preset rules, perform word segmentation on the video description and the advertisement description, obtaining video description participles and advertisement description participles.
It should be noted that the "word segmentation" mentioned here is realised by common open-source software; relatively common tools are the "jieba" Chinese word segmenter and the language technology platform LTP Cloud of the Harbin Institute of Technology. The main algorithms for word segmentation include CRF-based segmentation; of course, the present invention does not limit the specific algorithm, and any feasible algorithm can be applied.
The preset rules mentioned here are as follows: first split a passage of description into words, and filter out the stop words among the words obtained after splitting, where the stop words are a preset set of words mainly comprising words with no concrete meaning and some very common words, for example "of", "I", "this" and the like; then select the nouns, verbs and adjectives among the filtered words as the important words, and extract those important words. Specifically, the part of speech of each filtered word can be obtained automatically by the part-of-speech tagging function of the segmentation software. Taking "The Legend of Wu Meiniang", starring Fan Bingbing, as an example, the editor of the video website adds a video description such as ""The Legend of Wu Meiniang" stars Fan Bingbing ..."; segmenting this video description according to the preset rules yields: (Wu Meiniang, legend, Fan Bingbing, starring).
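The preset rules above (split, drop stop words, keep nouns/verbs/adjectives) can be sketched as follows. The tagger output is stubbed with a hypothetical (word, POS-tag) list standing in for what a part-of-speech tagger such as jieba's would return; the stop-word set and tag set are illustrative assumptions:

```python
# Stop words: a preset set of words with no concrete meaning (assumed here).
STOP_WORDS = {"的", "我", "这", "是", "由"}

# POS tags kept as "important words": nouns, verbs, adjectives (assumed tags).
IMPORTANT_POS = {"n", "v", "a", "nr", "nz"}

def important_words(tagged_words):
    """Apply the preset rules: drop stop words, then keep only
    nouns, verbs and adjectives."""
    return [w for w, pos in tagged_words
            if w not in STOP_WORDS and pos in IMPORTANT_POS]

# Hypothetical tagger output for the example description
# "《武媚娘传奇》是由范冰冰主演的...":
tagged = [("武媚娘", "nr"), ("传奇", "n"), ("是", "v"),
          ("由", "p"), ("范冰冰", "nr"), ("主演", "v"), ("的", "u")]
print(important_words(tagged))  # ['武媚娘', '传奇', '范冰冰', '主演']
```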
Step S103: input the video description participles and the advertisement description participles into the pre-established video-advertisement matching degree prediction model.
It should be noted that the video-advertisement matching degree prediction model has been established in advance. When the model is applied, its input is the video description participles and advertisement description participles, and its output is the matching value between the video awaiting advertisement matching and the candidate advertisement.
Step S104: the video-advertisement matching degree prediction model obtains the distributed feature vectors of the video description participles and advertisement description participles according to the correspondence between participles and distributed feature vectors; the correspondence between participles and distributed feature vectors is obtained by training on video descriptions, advertisement descriptions and an external corpus.
Step S105: the video-advertisement matching degree prediction model inputs the distributed feature vectors of the video description participles and advertisement description participles into the multilayer convolutional neural network in the model, obtaining the matching value between the video awaiting advertisement matching and the candidate advertisement; the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate.
Step S106: if the matching value is greater than a preset matching degree threshold, the video awaiting advertisement matching and the candidate advertisement match.
The step in S105 of inputting the distributed feature vectors of the video description participles and advertisement description participles into the multilayer convolutional neural network in the model and obtaining the matching value between the video awaiting advertisement matching and the candidate advertisement is refined with reference to Fig. 2 and Fig. 7; Fig. 2 is a refined flow diagram of step S105 in the embodiment shown in Fig. 1, and Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention. The refinement comprises:
Step S201: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the distributed feature vectors of the input video description participles and advertisement description participles, obtaining distributed feature one-dimensional extension vectors of the video description participles and advertisement description participles, which are output to the first max-pooling layer.
It should be noted that the purpose of this step is to extend the input distributed feature vectors of the video description and the advertisement description pairwise, obtaining distributed feature one-dimensional extension vectors; these are the input of the multilayer convolutional neural network model. As an illustration: suppose the video description participles are (Fan Bingbing, Wu Meiniang) and the advertisement description participles are (Fan Bingbing, game). Since the extension mode used by the convolutional neural network is not fully connected, the distributed feature pairs obtained from the video description participles and advertisement description participles are: (Fan Bingbing, Fan Bingbing), (Fan Bingbing, Wu Meiniang), (Fan Bingbing, game), (Wu Meiniang, game). Each of these four combinations corresponds to one distributed feature one-dimensional extension vector, and all the distributed feature one-dimensional extension vectors together constitute one convolutional network layer.
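The pairing in the example above can be reproduced as an unordered cross product of the two participle lists. The sketch below makes two assumptions the patent leaves open: the feature vectors are random 4-dimensional stand-ins, and each pair's extension vector is taken to be the concatenation of the two participles' feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical length-4 distributed feature vectors for each participle.
emb = {w: rng.normal(size=4) for w in ["范冰冰", "武媚娘", "游戏"]}

video_words = ["范冰冰", "武媚娘"]   # video description participles
ad_words = ["范冰冰", "游戏"]        # advertisement description participles

# Pair every video participle with every ad participle; treating pairs as
# unordered reproduces the four combinations of the example:
# (范冰冰,范冰冰), (范冰冰,武媚娘), (范冰冰,游戏), (武媚娘,游戏).
pairs = {tuple(sorted((v, a))) for v in video_words for a in ad_words}

# One extension vector per pair, assumed here to be the concatenation of
# the two participles' distributed feature vectors.
extension = {p: np.concatenate([emb[p[0]], emb[p[1]]]) for p in sorted(pairs)}
print(len(extension))                          # 4 extension vectors
print(next(iter(extension.values())).shape)    # (8,)
```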
Step S202: the first max-pooling layer compresses the input distributed feature one-dimensional extension vectors by a down-sampling algorithm, obtaining a first max-pooling-layer two-dimensional vector, which is output to the first two-dimensional convolutional neural network layer.
To explain briefly: the first max-pooling layer compresses the distributed feature one-dimensional extension vectors obtained in step S201, eliminating the influence of noise and obtaining the first max-pooling-layer two-dimensional vector; preferably, the first max-pooling layer realises the data compression using the down-sampling method.
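The down-sampling compression described here is ordinary max pooling. A minimal sketch over a one-dimensional vector; the window size is an assumption, since the patent does not fix it:

```python
import numpy as np

def max_pool_1d(x, window=2):
    """Down-sample a vector by taking the maximum over non-overlapping
    windows: the compression/noise-suppression step of the pooling layer."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % window          # drop a ragged tail, if any
    return x[:n].reshape(-1, window).max(axis=1)

print(max_pool_1d([0.1, 0.9, 0.3, 0.2, 0.7, 0.4]))  # [0.9 0.3 0.7]
```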
Step S203: the first two-dimensional convolutional neural network layer applies a two-dimensional convolution operation to the input first max-pooling-layer two-dimensional vector, obtaining multiple two-dimensional-convolution-layer two-dimensional vectors of the same dimension as the input vector; each element of these two-dimensional vectors is then calculated with the activation function, yielding the same number of calculated two-dimensional vectors, which are output to the next intermediate max-pooling layer connected to it.
In step S203, a two-dimensional convolution operation is applied to the first max-pooling-layer two-dimensional vector of step S202, obtaining the calculated two-dimensional vectors. By combining the vector elements of the first max-pooling-layer two-dimensional vector in pairs, step S203 increases the number of features available for predicting the matching degree of the video description and the advertisement description.
Step S204: the next intermediate max-pooling layer compresses the input calculated two-dimensional vectors by the down-sampling algorithm, obtaining intermediate max-pooling-layer two-dimensional vectors, which are output to the next intermediate two-dimensional neural network layer connected to it.
Step S205: the next intermediate two-dimensional neural network layer applies a two-dimensional convolution operation to the input intermediate max-pooling-layer two-dimensional vectors, obtaining multiple intermediate two-dimensional-convolution-layer two-dimensional vectors of the same dimension as the input vectors; each element of these vectors is calculated with the activation function, yielding the same number of calculated two-dimensional vectors. It is then judged whether the calculated two-dimensional vectors are 1×1 vectors: if so, step S206 is executed; otherwise they are output to the next intermediate max-pooling layer connected to it, and the process returns to step S204.
It should be noted that steps S204 and S205 form a repeated process: through continuous learning and training, the calculated two-dimensional vectors are finally reduced to multiple 1×1 vectors.
Step S206: generate one one-dimensional target vector from the elements of all the vectors obtained; that is, the elements of the final multiple 1×1 vectors are assembled into one one-dimensional target vector according to a preset target-vector generation rule.
Step S207: apply a preset algorithm to the obtained target vector, obtaining the matching value of the video description and the advertisement description.
The preset algorithm mentioned here is:

y = ωᵀ · x

where x is the input vector, y is the video-advertisement matching value, ω is the weight vector of the neurons in each layer of the neural network, and ω and x have the same dimension.
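Steps S206 and S207 can be sketched together: the elements of the final 1×1 vectors are flattened into one target vector, and the matching value is an inner product with the layer's weight vector. The 1×1 values and the weights below are random stand-ins for learned quantities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Final 1x1 two-dimensional vectors from the last convolutional layer.
one_by_one = [np.array([[0.2]]), np.array([[0.7]]), np.array([[0.1]])]

# Step S206: flatten their elements into one one-dimensional target vector.
target = np.concatenate([v.ravel() for v in one_by_one])

# Step S207: the preset algorithm - an inner product with the neuron
# weight vector w, which has the same dimension as the target vector.
w = rng.normal(size=target.shape)
matching_value = float(w @ target)
print(target)  # [0.2 0.7 0.1]
```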
Next, the process of establishing the video-advertisement matching degree prediction model is described in detail.
Establishing the video-advertisement matching degree prediction model involves two training processes: 1. the training process that establishes the correspondence between participles and distributed feature vectors; 2. the training process that establishes the multilayer convolutional neural network. They are explained separately below.
First, the training process that establishes the correspondence between participles and distributed feature vectors is described in detail.
Referring to Fig. 3 and Fig. 6: Fig. 3 is a schematic flow diagram of the training of the correspondence between participles and distributed feature vectors in the embodiment shown in Fig. 1, and Fig. 6 is a schematic diagram of the corresponding training principle in the embodiment shown in Fig. 3.
" distributed nature " mentioned here refers to text datas such as video presentation, advertisement description and external corpus,
Rear vector characteristics obtained are trained according to certain learning rules, which can go to express from deeper level
These text datas.For example, it is obtained after this word is by training if having obtained a description participle " Wu Meiniang "
The distributed nature vector of distributed nature is assumed to be X=(0.5,03,0.3,0.6,0.8,0.9), it can be understood as, vector is empty
Between in vector X expressed by text data content be " Wu Meiniang ".Certainly, the length of this distributed vector X be can be with
It adjusts, this needs to determine according to the actual situation.
It should be noted that the correspondence between participles and distributed feature vectors is obtained from the video descriptions, the advertisement descriptions and the external corpus by unsupervised training. The unsupervised training process comprises the following steps:
Step S301: obtain a passage of description from the video descriptions, the advertisement descriptions or the external corpus.
The external corpus is introduced primarily to avoid the problem of video descriptions and advertisement descriptions sharing too few words: since the corpus obtained by segmenting the video descriptions and advertisement descriptions alone is rather small, adding an external corpus makes the distributed features of the video descriptions and advertisement descriptions more general. The "external corpus" mentioned here refers to text data crawled from external related websites such as Douban and Baidu Baike.
Step S302: perform word segmentation on the textual description, obtaining N description participles; that is, according to the method of step S102, word segmentation is applied to the video descriptions, advertisement descriptions and external corpus, obtaining the description participles corresponding to these text data.
Step S303: map the N description participles to N one-dimensional continuous feature vectors of length m.
In order to obtain the distributed feature of each description participle, the distributed features must be trained continually according to the learning rules. For example, as shown in Fig. 6, suppose a segment of description corpus yields 4 description participles, labelled Word_n, Word_n+1, Word_n+2, Word_n+3. According to a preset correspondence, these text data are transformed from non-mathematical quantities into quantities that can be operated on mathematically, with vectors in vector space as the data-processing carrier, giving the one-dimensional continuous feature vectors X_n, X_n+1, X_n+2, X_n+3.
Step S304: perform a weighted-average operation on the first N−1 one-dimensional continuous feature vectors of length m, obtaining one predicted vector.
Continuing the example of step S303: a mathematical operation — such as the weighted averaging of step S304 — is performed on the one-dimensional continuous feature vectors X_n, X_n+1, X_n+2, X_n+3, so that one predicted vector is obtained, denoted X'_n+1.
Step S305: when using the predicted vector to predict the Nth description participle, judge whether the prediction error rate between the predicted participle and the actual Nth description participle is lower than a preset prediction threshold: if so, execute step S306; if not, execute step S307.
Step S306: the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description participles.
Step S307: adjust the N one-dimensional continuous feature vectors using the back-propagation algorithm, obtaining N new one-dimensional continuous feature vectors of length m, and continue with steps S304 and S305.
It should be noted that the "back-propagation algorithm" mentioned here is a common algorithm for solving neural networks. In its concrete application, the weights of each layer are corrected step by step from back to front according to the degree of deviation between the predicted output and the true output, until the deviation between the output predicted from the corrected input data and the true output is smaller than a preset threshold. For the embodiment of the present invention, the input data are the N one-dimensional continuous feature vectors and the output is the probability distribution of the Nth word; while the prediction of the Nth word remains below the preset threshold, N new one-dimensional continuous feature vectors are obtained.
To explain step S305 in detail: the predicted vector X'_n+1 obtained in step S304 is input into a classifier realised by the Softmax algorithm, which predicts the (n+1)th description participle in this segment of corpus; the error between the predicted and the actual description participle is accumulated, giving the description-participle prediction error rate, and it is judged whether this prediction error rate is lower than the preset prediction threshold. If so, the prediction of the (n+1)th description participle by the predicted vector is accurate and stable. If not, the prediction is inaccurate or unstable, and it is necessary to go back and adjust the 4 one-dimensional continuous feature vectors before the (n+1)th description participle, X_n, X_n+1, X_n+2, X_n+3. The adjustment can use a mathematical algorithm such as back-propagation; of course, other feasible algorithms can also be chosen according to the actual prediction effect. This yields 4 new one-dimensional continuous feature vectors, denoted X_n', X_n+1', X_n+2', X_n+3', after which steps S304 and S305 are executed again, until the prediction error rate between the predicted participle and the actual Nth description participle is lower than the preset prediction threshold. The distributed feature training then ends, and the 4 final one-dimensional continuous feature vectors are the distributed feature vectors of these 4 description participles — that is, the actual numeric input to the video-advertisement matching degree prediction model.
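The loop of steps S303–S307 is essentially a CBOW-style word-embedding scheme: average the context vectors, score the vocabulary with a Softmax classifier, and back-propagate the prediction error into the context vectors. A toy NumPy sketch under that reading, with the vector length, learning rate and vocabulary all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["武媚娘", "传奇", "范冰冰", "主演"]       # 4 description participles
m, lr = 6, 0.5                                    # vector length, step size
X = rng.normal(size=(len(vocab), m))              # one length-m vector per word
W = rng.normal(scale=0.1, size=(m, len(vocab)))   # Softmax classifier weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

target = 3                                  # predict the last participle, 主演
for step in range(1000):
    pred = X[:3].mean(axis=0)               # S304: average the first N-1 vectors
    p = softmax(pred @ W)                   # S305: Softmax word prediction
    if 1.0 - p[target] < 0.05:              # error rate below preset threshold
        break                               # S306: training ends
    g = p.copy(); g[target] -= 1.0          # S307: back-propagate the error ...
    W -= lr * np.outer(pred, g)             # ... into the classifier weights
    X[:3] -= lr * (W @ g) / 3.0             # ... and into the context vectors
print(1.0 - p[target] < 0.05)               # True: the toy loop converges
```

After convergence, the rows of X play the role of the distributed feature vectors of the participles.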
Second, the training process that establishes the multilayer convolutional neural network is described in detail.
Referring to Fig. 4 and Fig. 7: Fig. 4 is a schematic flow diagram of the training of the multilayer convolutional neural network in the embodiment shown in Fig. 1, and Fig. 7 is a schematic structural diagram of a multilayer convolutional neural network provided by an embodiment of the present invention.
The steps by which the multilayer convolutional neural network is obtained by training according to the lift of the advertisement click-through rate are introduced in full below, comprising:
Step S401: obtain, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, together with the advertisement click-through-rate lift L of that sample video-advertisement pair.
Step S402: according to the preset rules, perform word segmentation on the sample video description and the sample advertisement description, obtaining sample video description participles and sample advertisement description participles.
Step S403: perform distributed feature training on the sample video description participles and sample advertisement description participles, obtaining the sample distributed feature vectors of the sample video description participles and sample advertisement description participles; these sample distributed feature vectors are the input of the multilayer convolutional neural network.
Step S404: the multilayer convolutional neural network is trained on the sample distributed feature vectors of the input sample video description participles and sample advertisement description participles, obtaining the sample matching value L' of the sample video description and the sample advertisement description.
It should be noted that L' is really the advertisement click-through-rate lift calculated by the multilayer convolutional neural network model; in the embodiment of the present invention, the click-through-rate lift output by the model is defined as the matching value of the video and the advertisement.
Step S405: judge whether the error between L' and L is lower than a preset sample-training error threshold: if so, execute step S406; if not, execute step S407.
S406: the training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer of the network are determined.
S407: adjust the weights ω of the neurons in each layer of the network using the back-propagation algorithm, and then continue with steps S401 to S405.
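The S401–S407 loop is an ordinary regression-style training loop with the CTR lift L as the target. A compressed sketch, in which the whole convolutional network is deliberately stood in for by a single linear layer and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: feature vectors standing in for the sample distributed
# features of each video/ad description pair, plus their CTR lifts L.
features = rng.normal(size=(50, 8))
true_w = rng.normal(size=8)
lifts = features @ true_w                  # hypothetical ground-truth lift L

w = np.zeros(8)                            # neuron weights to be learned
threshold, lr = 1e-3, 0.1
for epoch in range(2000):
    err = features @ w - lifts             # S404/S405: L' - L per sample
    if np.abs(err).mean() < threshold:     # S405: error below the threshold
        break                              # S406: training ends
    w -= lr * features.T @ err / len(err)  # S407: back-propagation step
print(np.abs(err).mean() < threshold)      # True once the loop converges
```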
It should be noted that in the establishment of the video-advertisement matching degree prediction model, the back-propagation algorithm gradually corrects the connection weights in the multilayer convolutional neural network from back to front, so that the training effect of the video-advertisement matching degree prediction model is accurate and stable.
Further, step S404 is expanded as follows:
Step S501: the one-dimensional convolutional neural network layer in the multilayer convolutional neural network performs a one-dimensional convolution operation on the sample distributed feature vectors of the input sample video description participles and sample advertisement description participles, obtaining sample distributed feature one-dimensional extension vectors of the sample video description participles and sample advertisement description participles, which are output to the first max-pooling layer.
Step S502: the first max-pooling layer compresses the input sample distributed feature one-dimensional extension vectors by the down-sampling algorithm, obtaining a first max-pooling-layer sample two-dimensional vector, which is output to the first two-dimensional convolutional neural network layer.
Step S503: the first two-dimensional convolutional neural network layer applies a two-dimensional convolution operation to the input first max-pooling-layer sample two-dimensional vector, obtaining multiple two-dimensional-convolution-layer sample two-dimensional vectors of the same dimension as the input vector; each element of these sample two-dimensional vectors is calculated with the activation function, yielding the same number of calculated sample two-dimensional vectors, which are output to the next intermediate max-pooling layer connected to it.
It should be noted that the activation function is the ReLU function:

y_j = max(0, Σ_i ω_ij · x_i)

where x_i is the input of the two-dimensional convolutional neural network layer, y_j is its output, and ω_ij is an element of the neuron weight vector of the layer in which the activation function is located, namely the weight connecting input i and output j.
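The ReLU activation over a weighted layer can be sketched in a few lines; the inputs and weights below are illustrative values, not learned ones:

```python
import numpy as np

def relu_layer(x, w):
    """ReLU activation over a weighted layer: each output j is
    max(0, sum_i w[i, j] * x[i]), where w[i, j] is the weight
    connecting input i to output j."""
    return np.maximum(0.0, x @ w)

x = np.array([2.0, -2.0])
w = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(relu_layer(x, w))  # [1. 0.] -> the negative weighted sum is clamped to 0
```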
Step S504: the next intermediate max-pooling layer compresses the input calculated sample two-dimensional vectors by the down-sampling algorithm, obtaining intermediate max-pooling-layer sample two-dimensional vectors, which are output to the next intermediate two-dimensional neural network layer connected to it.
Step S505: the next intermediate two-dimensional neural network layer applies a two-dimensional convolution operation to the input intermediate max-pooling-layer sample two-dimensional vectors, obtaining multiple intermediate two-dimensional-convolution-layer sample two-dimensional vectors of the same dimension as the input vectors; each element of these vectors is calculated with the activation function, yielding the same number of calculated sample two-dimensional vectors. It is then judged whether the calculated sample two-dimensional vectors are 1×1 vectors: if so, step S506 is executed; otherwise they are output to the next intermediate max-pooling layer connected to it, and the process returns to step S504.
Step S506: generate one one-dimensional sample target vector from the elements of all the vectors obtained.
Step S507: apply the preset algorithm to the obtained sample target vector, obtaining the sample matching value L' of the sample video description and the sample advertisement description.
In the following, the device of video matching advertisement provided in an embodiment of the present invention is described in detail again.
Fig. 8 be a kind of apparatus structure schematic diagram of video matching advertisement provided in an embodiment of the present invention, with Fig. 1 one kind
The method of video matching advertisement is corresponding, comprising: video ads description obtains module 801, video ads describe word segmentation processing mould
Block 802, video ads matching degree prediction model input module 803, video ads distributed nature acquisition module 804, video are wide
Accuse matching degree prediction model output module 805, video ads matching judgment module 806.
Wherein, video ads description obtains module 801, the video presentation of the video for obtaining advertisement to be matched, from wide
Accuse the advertisement description that candidate locations are obtained in library.
The word segmentation module 802 is configured to segment the video description and the advertisement description according to preset rules, obtaining video description word segments and advertisement description word segments.
The matching degree prediction model input module 803 is configured to input the video description word segments and the advertisement description word segments into the pre-established video-advertisement matching degree prediction model.
The distributed feature acquisition module 804 is configured to obtain, within the matching degree prediction model, the distributed feature vectors of the video description word segments and the advertisement description word segments according to a correspondence between word segments and distributed feature vectors; this correspondence is obtained by training on the video descriptions, the advertisement descriptions, and an external corpus.
The matching degree prediction model output module 805 is configured to input, within the matching degree prediction model, the distributed feature vectors of the video description word segments and the advertisement description word segments into the multilayer convolutional neural network of the model, obtaining the matching value between the video to be matched and the candidate advertisement; the multilayer convolutional neural network is trained according to the lift of the advertisement click-through rate.
The matching judgment module 806 is configured to determine that the video to be matched and the candidate advertisement match if the matching value is greater than a preset matching degree threshold.
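The module flow above can be sketched end to end. Everything in this sketch (the toy whitespace segmenter, the stand-in scoring model, the threshold value, and all names) is a hypothetical illustration of the data flow, not the patented implementation:

```python
# Illustrative sketch of the module 801-806 flow. All names and values
# (segment, embed, toy_model, MATCH_THRESHOLD) are hypothetical stand-ins.

MATCH_THRESHOLD = 0.5  # preset matching degree threshold (assumed value)

def segment(text):
    """Toy word segmentation: whitespace split stands in for the preset rules."""
    return text.lower().split()

def embed(tokens, table, dim=4):
    """Look up each token's distributed feature vector (zeros if unseen)."""
    return [table.get(t, [0.0] * dim) for t in tokens]

def match(video_desc, ad_desc, table, model):
    video_vecs = embed(segment(video_desc), table)   # modules 802 + 804
    ad_vecs = embed(segment(ad_desc), table)
    score = model(video_vecs, ad_vecs)               # module 805 (CNN stands in)
    return score > MATCH_THRESHOLD                   # module 806

def toy_model(v, a):
    """Trivial stand-in for the CNN: mean of all vector elements."""
    flat = [x for vec in v + a for x in vec]
    return sum(flat) / len(flat) if flat else 0.0

table = {"football": [0.9] * 4, "boots": [0.8] * 4}
print(match("Football highlights", "Football boots sale", table, toy_model))
```

In the real apparatus the scoring function is the trained multilayer convolutional neural network rather than a mean; only the surrounding control flow is shown here.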
It should be noted that the correspondence between word segments and distributed feature vectors is obtained by an unsupervised training module, which applies an unsupervised training method to the video descriptions, the advertisement descriptions, and the external corpus.
The unsupervised training module comprises:
A text description acquisition submodule, configured to obtain a passage of text from the video descriptions, the advertisement descriptions, or the external corpus.
A text description word segmentation submodule, configured to segment the text, obtaining N description word segments.
A feature vector mapping submodule, configured to map the N description word segments to N one-dimensional continuous feature vectors of length m.
A prediction vector submodule, configured to compute a weighted average of the first N-1 one-dimensional continuous feature vectors, obtaining a prediction vector.
A prediction error judgment submodule, configured to predict the N-th description word segment from the prediction vector and to judge whether the prediction error rate against the actual N-th description word segment is below a preset prediction threshold: if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description word segments; if not, the N one-dimensional continuous feature vectors are adjusted by the back-propagation algorithm to obtain N new one-dimensional continuous feature vectors of length m, and the prediction vector submodule and the prediction error judgment submodule are triggered again in sequence.
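The submodules above amount to a CBOW-style embedding scheme: average the context vectors, predict the final word, and back-propagate into the vectors until the prediction error falls below a threshold. A minimal NumPy sketch under that reading; the softmax output layer, the plain mean in place of a weighted average, and the 0.05 error threshold are assumptions of mine, not part of the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_embeddings(token_ids, vocab_size, m=8, lr=0.1, steps=200):
    """CBOW-style sketch: mean of the first N-1 vectors predicts the N-th token."""
    E = rng.normal(scale=0.1, size=(vocab_size, m))   # N vectors of length m
    W = rng.normal(scale=0.1, size=(m, vocab_size))   # output layer (assumed)
    context, target = token_ids[:-1], token_ids[-1]
    for _ in range(steps):
        h = E[context].mean(axis=0)                   # prediction vector
        logits = h @ W
        p = np.exp(logits - logits.max()); p /= p.sum()
        if 1.0 - p[target] < 0.05:                    # prediction error below threshold
            break
        grad = p.copy(); grad[target] -= 1.0          # softmax cross-entropy gradient
        W -= lr * np.outer(h, grad)                   # back-propagation step
        E[context] -= lr * (W @ grad) / len(context)  # adjust the feature vectors
    return E

E = train_embeddings([0, 1, 2, 3], vocab_size=5)
print(E.shape)  # one length-m distributed feature vector per token
```

After training, the rows of E play the role of the distributed feature vectors that the word segments are mapped to.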
The matching degree prediction model output module comprises:
A distributed feature vector input submodule, configured to input the distributed feature vectors of the video description word segments and the advertisement description word segments into the one-dimensional convolutional layer of the multilayer convolutional neural network in the model.
A one-dimensional convolutional layer processing submodule, configured to perform a one-dimensional convolution on the input distributed feature vectors of the video description word segments and advertisement description word segments, obtaining their distributed feature one-dimensional expanded vectors, which are output to the first max-pooling layer.
A first max-pooling layer processing submodule, configured to compress the input distributed feature one-dimensional expanded vectors by a down-sampling algorithm, obtaining first max-pooling layer two-dimensional vectors, which are output to the first two-dimensional convolutional layer.
A first two-dimensional convolutional layer processing submodule, configured to apply a two-dimensional convolution to the input first max-pooling layer two-dimensional vectors, obtaining multiple two-dimensional vectors of the same dimensions as the input vectors; each element of these vectors is then computed with the activation function, yielding the same number of computed two-dimensional vectors, which are output to the connected next intermediate max-pooling layer.
A next intermediate max-pooling layer processing submodule, configured to compress the multiple input computed two-dimensional vectors by the down-sampling algorithm, obtaining intermediate max-pooling layer two-dimensional vectors, which are output to the connected next intermediate two-dimensional convolutional layer.
A next intermediate two-dimensional convolutional layer processing submodule, configured to apply a two-dimensional convolution to the input intermediate max-pooling layer two-dimensional vectors, obtaining multiple intermediate two-dimensional vectors of the same dimensions as the input vectors; to compute each element of these vectors with the activation function, yielding the same number of computed two-dimensional vectors; and to judge whether the obtained vectors are 1 × 1 two-dimensional vectors: if so, the target vector generation submodule is triggered; otherwise, the vectors are output to the connected next intermediate max-pooling layer, and the next intermediate max-pooling layer processing submodule and the next intermediate two-dimensional convolutional layer processing submodule are triggered again in sequence.
A target vector generation submodule, configured to generate a one-dimensional target vector from the elements of all the obtained vectors.
A matching value submodule, configured to apply the preset algorithm to the obtained target vector, obtaining the matching value between the video description and the advertisement description.
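The layer sequence above, alternating two-dimensional convolution with activation and max-pooling until the maps shrink to 1 × 1 and then flattening into a target vector, can be sketched as follows. The kernel sizes, "same" padding, 2 × 2 pooling window, and 8 × 8 input are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_pool(x, k=2):
    """Down-sampling by non-overlapping k x k max pooling (data compression)."""
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    x = x[:h, :w]
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def conv2d_same(x, kernel):
    """'Same'-size 2-D convolution so the output keeps the input dimensions."""
    kh, kw = kernel.shape
    pad = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (pad[i:i + kh, j:j + kw] * kernel).sum()
    return out

def forward(feature_map, kernels):
    """Alternate conv + activation and max pooling until the maps reach 1 x 1,
    then concatenate all elements into the one-dimensional target vector."""
    maps = [feature_map]
    for k in kernels:
        maps = [np.maximum(conv2d_same(m, k), 0.0) for m in maps]  # conv + ReLU
        maps = [max_pool(m) for m in maps]                          # pooling layer
        if maps[0].shape == (1, 1):
            break
    return np.concatenate([m.ravel() for m in maps])               # target vector

x = rng.normal(size=(8, 8))            # stand-in for the stacked word vectors
kernels = [rng.normal(size=(3, 3)) for _ in range(3)]
target = forward(x, kernels)
print(target.shape)                    # 8 -> 4 -> 2 -> 1 after three poolings
```

In the apparatus this target vector is then handed to the matching value submodule, which applies the preset algorithm to produce the scalar matching value.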
The preset algorithm referred to here computes the matching value y of the video and the advertisement from the input vector x, using the weight vector ω of the neurons in each layer of the neural network, where ω and x have the same dimension.
It should be noted that the multilayer convolutional neural network is trained by a neural network training module according to the lift of the advertisement click-through rate.
The neural network training module comprises:
A sample input submodule, configured to obtain, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, together with the click-through-rate lift L of that pair.
A sample word segmentation submodule, configured to segment the sample video description and the sample advertisement description according to the preset rules, obtaining sample video description word segments and sample advertisement description word segments.
A sample distributed feature vector submodule, configured to perform distributed feature training on the sample video description word segments and the sample advertisement description word segments, obtaining their respective sample distributed feature vectors, which are the input of the multilayer convolutional neural network.
A sample matching degree prediction submodule, configured to train the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining the sample matching value L' between the sample video description and the sample advertisement description.
A sample matching degree judgment submodule, configured to judge whether the error between L' and L is below a preset sample training error threshold: if so, the training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer of the network are determined; if not, the weights ω of the neurons in each layer are adjusted by the back-propagation algorithm, and the sample input submodule through the sample matching degree judgment submodule are triggered again in sequence.
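The sample training loop above (obtain a pair and its lift L, predict L', compare against an error threshold, back-propagate, repeat) can be sketched with a single linear neuron standing in for the full convolutional network. The learning rate, the error threshold, and the toy samples are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def train(samples, dim=4, lr=0.05, eps=1e-3, max_epochs=2000):
    """Sketch of the training module: fit weights w so the predicted matching
    value L' approaches the click-through-rate lift L of each sample pair.
    A single linear neuron stands in for the multilayer convolutional network."""
    w = rng.normal(scale=0.1, size=dim)
    for _ in range(max_epochs):
        done = True
        for x, L in samples:                 # sample input submodule
            L_pred = float(w @ x)            # sample matching value L'
            err = L_pred - L
            if abs(err) >= eps:              # sample training error threshold
                done = False
                w -= lr * err * x            # back-propagation (gradient step)
        if done:
            return w                         # neuron weights determined
    return w

samples = [(np.array([1.0, 0.0, 1.0, 0.0]), 1.2),
           (np.array([0.0, 1.0, 0.0, 1.0]), 0.4)]
w = train(samples)
print(round(float(w @ samples[0][0]), 2))  # close to the lift L = 1.2
```

The stopping rule mirrors the judgment submodule: training ends only once every sample's prediction error is below the threshold, which fixes the weights ω.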
Further, the sample matching degree prediction submodule comprises:
A sample distributed feature one-dimensional expansion unit, configured to perform, in the one-dimensional convolutional layer of the multilayer convolutional neural network, a one-dimensional convolution on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining their sample distributed feature one-dimensional expanded vectors, which are output to the sample first max-pooling layer.
A sample first max-pooling layer processing unit, configured to compress the input sample distributed feature one-dimensional expanded vectors by the down-sampling algorithm, obtaining first-pooling-layer sample two-dimensional vectors, which are output to the sample first two-dimensional convolutional layer.
A sample first two-dimensional convolutional layer processing unit, configured to apply a two-dimensional convolution to the input first-pooling-layer sample two-dimensional vectors, obtaining multiple sample two-dimensional vectors of the same dimensions as the input vectors; each element of these vectors is then computed with the activation function, yielding the same number of computed sample two-dimensional vectors, which are output to the connected sample next intermediate max-pooling layer.
It should be noted that the activation function is the ReLU function:
yj = max(0, Σi ωij xi)
where xi is the input of the two-dimensional convolutional layer, yj is its output, and ωij, an element of the neuron weight vector of the layer in which the activation function is located, is the weight connecting input i and output j.
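Under the reading above, each output yj is the weighted sum of its inputs clipped at zero. A minimal sketch of a single such neuron:

```python
def relu_neuron(x, w):
    """ReLU activation as described: y_j = max(0, sum_i w_ij * x_i),
    where w holds the weights connecting each input i to output j."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return max(0.0, s)

print(relu_neuron([1.0, -2.0], [0.5, 0.25]))  # 0.5 - 0.5 = 0, clipped to 0.0
print(relu_neuron([1.0, 2.0], [0.5, 0.25]))   # 0.5 + 0.5 = 1.0
```

The clipping at zero is what introduces the non-linearity between the successive convolution and pooling layers.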
A sample next intermediate max-pooling layer processing unit, configured to compress the multiple input computed sample two-dimensional vectors by the down-sampling algorithm, obtaining sample intermediate max-pooling layer two-dimensional vectors, which are output to the connected sample next intermediate two-dimensional convolutional layer.
A sample next intermediate two-dimensional convolutional layer processing unit, configured to apply a two-dimensional convolution to the input sample intermediate max-pooling layer two-dimensional vectors, obtaining multiple sample intermediate two-dimensional vectors of the same dimensions as the input vectors; to compute each element of these vectors with the activation function, yielding the same number of computed sample intermediate two-dimensional vectors; and to judge whether the obtained vectors are 1 × 1 two-dimensional vectors: if so, the sample target vector generation unit is triggered; otherwise, the vectors are output to the connected sample next intermediate max-pooling layer, and the sample next intermediate max-pooling layer processing unit and the sample next intermediate two-dimensional convolutional layer processing unit are triggered again in sequence.
A sample target vector generation unit, configured to generate a one-dimensional sample target vector from the elements of all the obtained vectors.
A sample matching value unit, configured to apply the preset algorithm to the obtained sample target vector, obtaining the sample matching value L' between the sample video description and the sample advertisement description.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (18)
1. A method of matching an advertisement to a video, characterized in that the method comprises:
obtaining the video description of a video to be matched with an advertisement, and obtaining the advertisement description of a candidate advertisement from an advertisement library;
segmenting the video description and the advertisement description according to preset rules, obtaining video description word segments and advertisement description word segments;
inputting the video description word segments and the advertisement description word segments into a pre-established video-advertisement matching degree prediction model;
the model obtaining the distributed feature vectors of the video description word segments and the advertisement description word segments according to a correspondence between word segments and distributed feature vectors, the correspondence being obtained by training on video descriptions, advertisement descriptions, and an external corpus;
the model inputting the distributed feature vectors of the video description word segments and the advertisement description word segments into its multilayer convolutional neural network, obtaining the matching value between the video to be matched and the candidate advertisement; the multilayer convolutional neural network being trained according to the lift of the advertisement click-through rate, the lift of the advertisement click-through rate being the matching value of a video and an advertisement; the multilayer convolutional neural network being trained on the sample distributed feature vectors of input sample video description word segments and sample advertisement description word segments, obtaining a sample matching value L' between the sample video description and the sample advertisement description; judging whether the error between L' and L is below a preset sample training error threshold: if so, the training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer of the network are determined, where L is the click-through-rate lift corresponding to the sample video-advertisement pair; and
determining that the video to be matched and the candidate advertisement match if the matching value is greater than a preset matching degree threshold.
2. The method according to claim 1, characterized in that the step of inputting the distributed feature vectors of the video description word segments and the advertisement description word segments into the multilayer convolutional neural network in the model and obtaining the matching value between the video to be matched and the candidate advertisement comprises:
I1: the one-dimensional convolutional layer of the multilayer convolutional neural network performing a one-dimensional convolution on the input distributed feature vectors of the video description word segments and advertisement description word segments, obtaining their distributed feature one-dimensional expanded vectors, and outputting them to the first max-pooling layer;
I2: the first max-pooling layer compressing the input distributed feature one-dimensional expanded vectors by a down-sampling algorithm, obtaining first max-pooling layer two-dimensional vectors, and outputting them to the first two-dimensional convolutional layer;
I3: the first two-dimensional convolutional layer applying a two-dimensional convolution to the input first max-pooling layer two-dimensional vectors, obtaining multiple two-dimensional vectors of the same dimensions as the input vectors; computing each element of these vectors with the activation function, obtaining the same number of computed two-dimensional vectors; and outputting them to the connected next intermediate max-pooling layer;
I4: the next intermediate max-pooling layer compressing the multiple input computed two-dimensional vectors by the down-sampling algorithm, obtaining intermediate max-pooling layer two-dimensional vectors, and outputting them to the connected next intermediate two-dimensional convolutional layer;
I5: the next intermediate two-dimensional convolutional layer applying a two-dimensional convolution to the input intermediate max-pooling layer two-dimensional vectors, obtaining multiple intermediate two-dimensional vectors of the same dimensions as the input vectors; computing each element of these vectors with the activation function, obtaining the same number of computed two-dimensional vectors; judging whether the obtained vectors are 1 × 1 two-dimensional vectors: if so, executing step I6; otherwise, outputting them to the connected next intermediate max-pooling layer and returning to step I4;
I6: generating a one-dimensional target vector from the elements of all the obtained vectors;
I7: applying the preset algorithm to the obtained target vector, obtaining the matching value between the video description and the advertisement description.
3. The method according to claim 1, characterized in that the correspondence between word segments and distributed feature vectors is obtained by applying an unsupervised training method to video descriptions, advertisement descriptions, and an external corpus.
4. The method according to claim 3, characterized in that the unsupervised training process comprises:
a: obtaining a passage of text from the video descriptions, the advertisement descriptions, or the external corpus;
b: segmenting the text, obtaining N description word segments;
c: mapping the N description word segments to N one-dimensional continuous feature vectors of length m;
d: computing a weighted average of the first N-1 one-dimensional continuous feature vectors, obtaining a prediction vector;
e: predicting the N-th description word segment from the prediction vector and judging whether the prediction error rate against the actual N-th description word segment is below a preset prediction threshold: if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description word segments; if not, adjusting the N one-dimensional continuous feature vectors by the back-propagation algorithm to obtain N new one-dimensional continuous feature vectors of length m, and continuing with steps d and e.
5. The method according to claim 1, characterized in that the step of training the multilayer convolutional neural network according to the lift of the advertisement click-through rate comprises:
f: obtaining, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, together with the click-through-rate lift L of that pair;
g: segmenting the sample video description and the sample advertisement description according to the preset rules, obtaining sample video description word segments and sample advertisement description word segments;
h: performing distributed feature training on the sample video description word segments and the sample advertisement description word segments, obtaining their respective sample distributed feature vectors, which are the input of the multilayer convolutional neural network;
i: the multilayer convolutional neural network training on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining the sample matching value L' between the sample video description and the sample advertisement description;
j: judging whether the error between L' and L is below a preset sample training error threshold: if so, the training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer are determined; if not, adjusting the weights ω of the neurons in each layer by the back-propagation algorithm and continuing with steps f through j.
6. The method according to claim 5, characterized in that step i comprises:
i1: the one-dimensional convolutional layer of the multilayer convolutional neural network performing a one-dimensional convolution on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining their sample distributed feature one-dimensional expanded vectors, and outputting them to the first max-pooling layer;
i2: the first max-pooling layer compressing the input sample distributed feature one-dimensional expanded vectors by a down-sampling algorithm, obtaining first max-pooling layer sample two-dimensional vectors, and outputting them to the first two-dimensional convolutional layer;
i3: the first two-dimensional convolutional layer applying a two-dimensional convolution to the input first max-pooling layer sample two-dimensional vectors, obtaining multiple sample two-dimensional vectors of the same dimensions as the input vectors; computing each element of these vectors with the activation function, obtaining the same number of computed sample two-dimensional vectors; and outputting them to the connected next intermediate max-pooling layer;
i4: the next intermediate max-pooling layer compressing the multiple input computed sample two-dimensional vectors by the down-sampling algorithm, obtaining intermediate max-pooling layer sample two-dimensional vectors, and outputting them to the connected next intermediate two-dimensional convolutional layer;
i5: the next intermediate two-dimensional convolutional layer applying a two-dimensional convolution to the input intermediate max-pooling layer sample two-dimensional vectors, obtaining multiple intermediate sample two-dimensional vectors of the same dimensions as the input vectors; computing each element of these vectors with the activation function, obtaining the same number of computed sample two-dimensional vectors; judging whether the obtained vectors are 1 × 1 two-dimensional vectors: if so, executing step i6; otherwise, outputting them to the connected next intermediate max-pooling layer and returning to step i4;
i6: generating a one-dimensional sample target vector from the elements of all the obtained vectors;
i7: applying the preset algorithm to the obtained sample target vector, obtaining the sample matching value L' between the sample video description and the sample advertisement description.
7. The method according to claim 1, characterized in that the lift of the advertisement click-through rate is computed from: ctr(ad,video), the click-through rate of the target advertisement on the target video; ctrad, the average click-through rate of the target advertisement across all videos; and ctrvideo, the average click-through rate of all advertisements on the target video.
8. The method according to claim 2 or 6, characterized in that the activation function is the ReLU function: yj = max(0, Σi ωij xi), where xi is the input of the two-dimensional convolutional layer, yj is its output, and ωij, an element of the neuron weight vector of the layer in which the activation function is located, is the weight connecting input i and output j.
9. The method according to claim 2 or 6, characterized in that the preset algorithm computes the matching value y of the video and the advertisement from the input vector x and the weight vector ω of the neurons in each layer of the neural network, where ω and x have the same dimension.
10. An apparatus for matching an advertisement to a video, characterized in that the apparatus comprises:
a description acquisition module, configured to obtain the video description of a video to be matched with an advertisement, and to obtain the advertisement description of a candidate advertisement from an advertisement library;
a word segmentation module, configured to segment the video description and the advertisement description according to preset rules, obtaining video description word segments and advertisement description word segments;
a matching degree prediction model input module, configured to input the video description word segments and the advertisement description word segments into a pre-established video-advertisement matching degree prediction model;
a distributed feature acquisition module, configured to obtain, within the matching degree prediction model, the distributed feature vectors of the video description word segments and the advertisement description word segments according to a correspondence between word segments and distributed feature vectors, the correspondence being obtained by training on video descriptions, advertisement descriptions, and an external corpus;
a matching degree prediction model output module, configured to input, within the matching degree prediction model, the distributed feature vectors of the video description word segments and the advertisement description word segments into the multilayer convolutional neural network of the model, obtaining the matching value between the video to be matched and the candidate advertisement; the multilayer convolutional neural network being trained according to the lift of the advertisement click-through rate, the lift of the advertisement click-through rate being the matching value of a video and an advertisement; the multilayer convolutional neural network being trained on the sample distributed feature vectors of input sample video description word segments and sample advertisement description word segments, obtaining a sample matching value L' between the sample video description and the sample advertisement description; judging whether the error between L' and L is below a preset sample training error threshold: if so, the training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer of the network are determined, where L is the click-through-rate lift corresponding to the sample video-advertisement pair; and
a matching judgment module, configured to determine that the video to be matched and the candidate advertisement match if the matching value is greater than a preset matching degree threshold.
11. device according to claim 10, it is characterised in that: the video ads matching degree prediction model exports mould
Block, comprising:
Distributed nature vector input submodule, the distribution for segmenting video presentation participle and advertisement description
feature vectors are input to the one-dimensional convolutional neural network layer of the multilayer convolutional neural network in the model;
a one-dimensional convolutional neural network layer processing submodule, for performing a one-dimensional convolution operation on the distributed feature vectors of the video description word segments and advertisement description word segments that are input to the one-dimensional convolutional neural network layer, obtaining one-dimensionally extended distributed feature vectors of the video description word segments and advertisement description word segments, which are output to the first maximum pooling layer;
a first maximum pooling layer processing submodule, for compressing the one-dimensionally extended distributed feature vectors input to the first maximum pooling layer with a down-sampling algorithm, obtaining first-maximum-pooling-layer two-dimensional vectors, which are output to the first two-dimensional convolutional neural network layer;
a first two-dimensional convolutional neural network layer processing submodule, for applying a two-dimensional convolution operation to the first-maximum-pooling-layer two-dimensional vectors input to the first two-dimensional convolutional neural network layer, obtaining multiple two-dimensional vectors with the same dimension as the input vectors; computing each element of these vectors with an activation function, obtaining the same number of first computed two-dimensional vectors, which are output to the next intermediate maximum pooling layer connected to this layer;
a next intermediate maximum pooling layer processing submodule, for compressing the multiple computed two-dimensional vectors input to the next intermediate maximum pooling layer with a down-sampling algorithm, obtaining intermediate-maximum-pooling-layer two-dimensional vectors, which are output to the next intermediate two-dimensional neural network layer connected to this layer;
a next intermediate two-dimensional neural network layer processing submodule, for applying a two-dimensional convolution operation to the intermediate-maximum-pooling-layer two-dimensional vectors input to the next intermediate two-dimensional neural network layer, obtaining multiple intermediate two-dimensional vectors with the same dimension as the input vectors; computing each element of these vectors with an activation function, obtaining the same number of computed two-dimensional vectors; and judging whether the computed two-dimensional vectors are 1 × 1 two-dimensional vectors: if so, the target vector generation submodule is triggered; otherwise the vectors are output to the next intermediate maximum pooling layer connected to this layer, and the next intermediate maximum pooling layer processing submodule and the next intermediate two-dimensional neural network layer processing submodule are triggered in sequence;
a target vector generation submodule, for generating one one-dimensional target vector from the elements of all the vectors obtained;
a matching value obtaining submodule, for performing an operation on the obtained target vector with a preset algorithm, obtaining the matching value of the video description and the advertisement description.
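As a rough illustration only, the pipeline of submodules above (one-dimensional convolution, a first max pooling, then alternating two-dimensional convolution with activation and max pooling until a 1 × 1 output, followed by a target vector and a matching value) might be sketched as follows. The kernel sizes, random weights, and the final dot-product scoring are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def max_pool2d(x, size=2):
    # Down-sample by taking the max over non-overlapping size x size windows.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def conv2d_same(x, k):
    # "Same"-size 2-D convolution so the output keeps the input dimension.
    kh, kw = k.shape
    pad = ((kh // 2, kh - 1 - kh // 2), (kw // 2, kw - 1 - kw // 2))
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def match_score(word_vectors, rng=np.random.default_rng(0)):
    # Stack the per-word-segment feature vectors into a 2-D feature map.
    fmap = np.stack(word_vectors)            # (num_words, embedding_dim)
    fmap = max_pool2d(fmap)                  # first max pooling layer
    while fmap.shape != (1, 1):              # alternate 2-D conv and pooling
        kernel = rng.normal(size=(3, 3))     # assumed 3x3 kernels
        fmap = relu(conv2d_same(fmap, kernel))
        fmap = max_pool2d(fmap)
    target = fmap.reshape(-1)                # one-dimensional target vector
    omega = rng.normal(size=target.shape)    # neuron weights, same dimension
    return float(omega @ target)             # assumed preset algorithm: dot product
```

For power-of-two input sizes (e.g. eight word segments with eight-dimensional features) the map halves at each pooling step until it reaches 1 × 1, matching the termination test in the claim.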
12. The device according to claim 10, characterized in that: the correspondence between word segments and distributed feature vectors is obtained by an unsupervised training module, which applies an unsupervised training method to the video descriptions, the advertisement descriptions, and an external corpus.
13. The device according to claim 12, characterized in that the unsupervised training module comprises:
a text description obtaining submodule, for obtaining a passage of text from the video descriptions, the advertisement descriptions, or the external corpus;
a text description word segmentation submodule, for performing word segmentation on the text description, obtaining N description word segments;
a feature vector mapping submodule, for mapping the N description word segments to N one-dimensional continuous feature vectors of length m;
a predicted vector obtaining submodule, for performing a weighted-average operation on the first N-1 one-dimensional continuous feature vectors of length m, obtaining one predicted vector;
a word-segment prediction error rate judging submodule, for judging, when the predicted vector is used to predict the N-th description word segment, whether the prediction error rate between the predicted word segment and the actual N-th description word segment is below a preset prediction threshold:
if so, the distributed feature training ends, and the N one-dimensional continuous feature vectors are the N distributed feature vectors corresponding to the N description word segments; if not, the N one-dimensional continuous feature vectors are adjusted with a back-propagation algorithm, obtaining N new one-dimensional continuous feature vectors of length m, and the predicted vector obtaining submodule and the word-segment prediction error rate judging submodule are triggered in sequence.
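The training scheme in claim 13 resembles a CBOW-style word-embedding procedure: map word segments to length-m continuous vectors, predict each next token from the average of its predecessors, and back-propagate until the prediction error rate drops below a threshold. A minimal sketch under assumed hyper-parameters (vocabulary indexing, context window of 4, learning rate, and the softmax output layer are all illustrative choices, not specified by the patent):

```python
import numpy as np

def train_embeddings(token_ids, vocab_size, m=16, lr=0.1,
                     threshold=0.05, max_iters=2000):
    """CBOW-style training: predict each token from the mean of the
    previous tokens' vectors; adjust vectors by gradient descent."""
    rng = np.random.default_rng(0)
    E = rng.normal(scale=0.1, size=(vocab_size, m))   # one length-m vector per token
    W = rng.normal(scale=0.1, size=(m, vocab_size))   # softmax output weights
    for _ in range(max_iters):
        errors = 0
        for i in range(1, len(token_ids)):
            context, target = token_ids[:i][-4:], token_ids[i]
            h = E[context].mean(axis=0)               # averaged predicted vector
            logits = h @ W
            p = np.exp(logits - logits.max())
            p /= p.sum()
            if p.argmax() != target:
                errors += 1
            # Back-propagation: softmax cross-entropy gradients.
            grad = p.copy()
            grad[target] -= 1.0
            W -= lr * np.outer(h, grad)
            E[context] -= lr * (W @ grad) / len(context)
        if errors / (len(token_ids) - 1) < threshold:  # prediction error rate check
            break
    return E
```

After training, each row of `E` plays the role of the distributed feature vector of one word segment.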
14. The device according to claim 10, characterized in that the multilayer convolutional neural network is trained by a neural network training module according to the lift in advertisement click-through rate;
the neural network training module comprises:
a sample input submodule, for obtaining, from the training samples of the multilayer convolutional neural network, the sample video description and sample advertisement description of one sample video-advertisement pair, together with the click-through-rate lift L of that sample video-advertisement pair;
a sample video and advertisement description word segmentation submodule, for performing word segmentation on the sample video description and the sample advertisement description according to the preset rules, obtaining sample video description word segments and sample advertisement description word segments;
a sample distributed feature vector obtaining submodule, for performing distributed feature training on the sample video description word segments and the sample advertisement description word segments, obtaining the respective sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, wherein these sample distributed feature vectors are the input of the multilayer convolutional neural network;
a sample video-advertisement matching degree prediction submodule, for training the multilayer convolutional neural network on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining the sample matching value L' of the sample video description and the sample advertisement description;
a sample matching degree judging submodule, for judging whether the error between L' and L is below a preset sample training error threshold:
if so, training of the multilayer convolutional neural network model ends, and the weights ω of the neurons in each layer of the neural network are determined; if not, the weights ω of the neurons in each layer are adjusted with a back-propagation algorithm, and the submodules from the sample input submodule through the sample matching degree judging submodule are then triggered in sequence.
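The training procedure of claim 14 is a standard supervised loop: the network's predicted matching value L' for each sample pair is compared with the CTR-lift label L, and the weights are adjusted by back-propagation until the error falls below the threshold. A schematic sketch; the `model` object with `forward`/`backward` methods and the sample format are placeholders, not the patent's interface:

```python
def train_matching_model(samples, model, error_threshold=0.01, max_epochs=100):
    """samples: list of (video_desc_vectors, ad_desc_vectors, L) tuples,
    where L is the CTR-lift label of the sample video-advertisement pair.
    `model` is assumed to expose forward() and backward() (placeholders)."""
    for _ in range(max_epochs):
        worst = 0.0
        for video_vecs, ad_vecs, L in samples:
            L_pred = model.forward(video_vecs, ad_vecs)  # sample matching value L'
            err = abs(L_pred - L)
            worst = max(worst, err)
            if err >= error_threshold:
                model.backward(L_pred - L)               # adjust weights by back-propagation
        if worst < error_threshold:                      # error below the threshold:
            return model                                 # weights determined, training ends
    return model
```

The loop mirrors the claim: predict, compare L' against L, back-propagate if the error is too large, and repeat from the sample input step.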
15. The device according to claim 14, characterized in that the sample video-advertisement matching degree prediction submodule comprises:
a sample distributed feature one-dimensional extension unit, for the one-dimensional convolutional neural network layer in the multilayer convolutional neural network to perform a one-dimensional convolution operation on the input sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, obtaining one-dimensionally extended sample distributed feature vectors of the sample video description word segments and sample advertisement description word segments, which are output to the sample first maximum pooling layer;
a sample first maximum pooling layer processing unit, for compressing the one-dimensionally extended sample distributed feature vectors input to the sample first maximum pooling layer with a down-sampling algorithm, obtaining first-pooling-layer sample two-dimensional vectors, which are output to the sample first two-dimensional convolutional neural network layer;
a sample first two-dimensional convolutional neural network layer processing unit, for applying a two-dimensional convolution operation to the first-pooling-layer sample two-dimensional vectors input to the sample first two-dimensional convolutional neural network layer, obtaining multiple sample two-dimensional vectors with the same dimension as the input vectors; computing each element of these vectors with an activation function, obtaining the same number of computed sample two-dimensional vectors, which are output to the next sample intermediate maximum pooling layer connected to this layer;
a sample next intermediate maximum pooling layer processing unit, for compressing the multiple computed sample two-dimensional vectors input to the next sample intermediate maximum pooling layer with a down-sampling algorithm, obtaining sample intermediate-maximum-pooling-layer two-dimensional vectors, which are output to the next sample intermediate two-dimensional neural network layer connected to this layer;
a sample next intermediate two-dimensional neural network layer processing unit, for applying a two-dimensional convolution operation to the sample intermediate-maximum-pooling-layer two-dimensional vectors input to the next sample intermediate two-dimensional neural network layer, obtaining multiple sample intermediate two-dimensional vectors with the same dimension as the input vectors; computing each element of these vectors with an activation function, obtaining the same number of computed intermediate sample two-dimensional vectors; and judging whether the computed sample two-dimensional vectors are 1 × 1 two-dimensional vectors: if so, the sample target vector generation unit is triggered; otherwise the vectors are output to the next sample intermediate maximum pooling layer connected to this layer, and the sample next intermediate maximum pooling layer processing unit and the sample next intermediate two-dimensional neural network layer processing unit are triggered in sequence;
a sample target vector generation unit, for generating one one-dimensional sample target vector from the elements of all the vectors obtained;
a sample matching value obtaining unit, for performing an operation on the obtained sample target vector with a preset algorithm, obtaining the sample matching value L' of the sample video description and the sample advertisement description.
16. The device according to claim 10, characterized in that the lift in advertisement click-through rate is given by the following formula (presented as an image in the original and not reproduced here):
where ctr(ad,video) is the click-through rate of the target advertisement on the target video, ctrad is the average click-through rate of the target advertisement across all videos, and ctrvideo is the average click-through rate of all advertisements on the target video.
17. The device according to claim 11 or 15, characterized in that the activation function is the ReLU function:
yj = max(0, Σi ωij · xi)
where xi is the input of the two-dimensional convolutional neural network layer, yj is the output of the two-dimensional convolutional neural network layer, and ωij, an element of the neuron weight vector of the layer of the neural network in which the activation function resides, is the weight connecting input i and output j.
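In code, this weighted ReLU activation (assuming the form yj = max(0, Σi ωij·xi) reconstructed above from the variable definitions) is a matrix product followed by clamping at zero; the specific numbers below are illustrative:

```python
import numpy as np

def relu_layer(x, omega):
    """x: layer input (length-I vector); omega: I x J weight matrix whose
    entry omega[i, j] is the weight connecting input i and output j."""
    return np.maximum(0.0, x @ omega)

x = np.array([1.0, 2.0, -0.5])
omega = np.array([[1.0, 0.0],
                  [0.5, 1.0],
                  [0.0, 2.0]])
print(relu_layer(x, omega))  # prints [2. 1.]; negative pre-activations clamp to 0
```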
18. The device according to claim 11 or 15, characterized in that the preset algorithm is given by the following formula (presented as an image in the original and not reproduced here):
where x is the input vector, y is the matching value of the video and the advertisement, and ω is the weight vector of the neurons in each layer of the neural network; ω and x have the same dimension.
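Since ω and x share the same dimension, a plausible reading of the preset algorithm is an inner product of the target vector with a learned weight vector. This exact form is an assumption (the patent's formula is an image that did not survive extraction), shown only to make the dimension constraint concrete:

```python
import numpy as np

def matching_value(x, omega):
    # x: target vector from the convolutional layers; omega: learned weights.
    assert x.shape == omega.shape  # omega and x have the same dimension
    return float(omega @ x)        # assumed scoring: inner product

score = matching_value(np.array([0.2, 0.4]), np.array([1.5, -0.5]))
print(score)  # a small positive matching value
```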
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510338003.0A CN104992347B (en) | 2015-06-17 | 2015-06-17 | A kind of method and device of video matching advertisement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104992347A CN104992347A (en) | 2015-10-21 |
CN104992347B true CN104992347B (en) | 2018-12-14 |
Family
ID=54304156
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510338003.0A Active CN104992347B (en) | 2015-06-17 | 2015-06-17 | A kind of method and device of video matching advertisement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104992347B (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180012251A1 (en) * | 2016-07-11 | 2018-01-11 | Baidu Usa Llc | Systems and methods for an attention-based framework for click through rate (ctr) estimation between query and bidwords |
CN106227793B (en) * | 2016-07-20 | 2019-10-22 | 优酷网络技术(北京)有限公司 | A kind of determination method and device of video and the Video Key word degree of correlation |
CN106355446B (en) * | 2016-08-31 | 2019-11-05 | 镇江乐游网络科技有限公司 | A kind of advertisement recommender system of network and mobile phone games |
US10839226B2 (en) | 2016-11-10 | 2020-11-17 | International Business Machines Corporation | Neural network training |
CN106649603B (en) * | 2016-11-25 | 2020-11-10 | 北京资采信息技术有限公司 | Designated information pushing method based on emotion classification of webpage text data |
CN106682108B (en) * | 2016-12-06 | 2022-07-12 | 浙江大学 | Video retrieval method based on multi-mode convolutional neural network |
CN106792003B (en) * | 2016-12-27 | 2020-04-14 | 西安石油大学 | Intelligent advertisement insertion method and device and server |
CN107172448A (en) * | 2017-06-19 | 2017-09-15 | 环球智达科技(北京)有限公司 | The method for managing video and audio |
US10728553B2 (en) * | 2017-07-11 | 2020-07-28 | Sony Corporation | Visual quality preserving quantization parameter prediction with deep neural network |
CN109391829A (en) * | 2017-08-09 | 2019-02-26 | 创意引晴(开曼)控股有限公司 | Video gets position analysis system, analysis method and storage media ready |
CN107730002B (en) * | 2017-10-13 | 2020-06-02 | 国网湖南省电力公司 | Intelligent fuzzy comparison method for remote control parameters of communication gateway machine |
CN107507046A (en) * | 2017-10-13 | 2017-12-22 | 北京奇艺世纪科技有限公司 | The method and system that advertisement is recalled |
CN108805611A (en) * | 2018-05-21 | 2018-11-13 | 北京小米移动软件有限公司 | Advertisement screening technique and device |
CN111093101B (en) * | 2018-10-23 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Media file delivery method and device, storage medium and electronic device |
CN110278466B (en) * | 2019-06-06 | 2021-08-06 | 浙江口碑网络技术有限公司 | Short video advertisement putting method, device and equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102591854A (en) * | 2012-01-10 | 2012-07-18 | 凤凰在线(北京)信息技术有限公司 | Advertisement filtering system and advertisement filtering method specific to text characteristics |
CN102708498A (en) * | 2012-01-13 | 2012-10-03 | 合一网络技术(北京)有限公司 | Theme orientation based advertising method |
CN103164454A (en) * | 2011-12-15 | 2013-06-19 | 百度在线网络技术(北京)有限公司 | Keyword grouping method and keyword grouping system |
CN103617230A (en) * | 2013-11-26 | 2014-03-05 | 中国科学院深圳先进技术研究院 | Method and system for advertisement recommendation based microblog |
CN104636487A (en) * | 2015-02-26 | 2015-05-20 | 湖北光谷天下传媒股份有限公司 | Advertising information management method |
Non-Patent Citations (1)
Title |
---|
Research on Deep Learning Algorithms and Applications Based on Convolutional Neural Networks; Chen Xianchang; China Master's Theses Full-text Database, Information Science and Technology Series; 2014-09-15 (No. 9); pp. I140-127 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104992347B (en) | A kind of method and device of video matching advertisement | |
CN104462593B (en) | A kind of method and apparatus that the push of user individual message related to resources is provided | |
EP4016432A1 (en) | Method and apparatus for training fusion ordering model, search ordering method and apparatus, electronic device, storage medium, and program product | |
CN103020845B (en) | A kind of method for pushing and system of mobile application | |
CN103593373B (en) | A kind of method and apparatus for search results ranking | |
CN107315841A (en) | A kind of information search method, apparatus and system | |
CN107256267A (en) | Querying method and device | |
CN110019286B (en) | Expression recommendation method and device based on user social relationship | |
CN108122122A (en) | Advertisement placement method and system | |
CN103034718B (en) | A kind of target data sort method and device | |
CN109522556A (en) | A kind of intension recognizing method and device | |
KR102340463B1 (en) | Sample weight setting method and device, electronic device | |
CN103365904B (en) | A kind of advertising message searching method and system | |
CN110532468B (en) | Website resource recommendation method and device and computing equipment | |
CN106339502A (en) | Modeling recommendation method based on user behavior data fragmentation cluster | |
CN105701216A (en) | Information pushing method and device | |
CN109815980A (en) | Prediction technique, device, electronic equipment and the readable storage medium storing program for executing of user type | |
CN109961142A (en) | A kind of Neural network optimization and device based on meta learning | |
CN106105096A (en) | System and method for continuous social communication | |
CN107229666B (en) | A kind of interest heuristic approach and device based on recommender system | |
CN108446351B (en) | Hotel screening method and system based on user preference of OTA platform | |
CN107357917A (en) | A kind of resume search method and computing device | |
CN108345601A (en) | Search result ordering method and device | |
CN109063104A (en) | Method for refreshing, device, storage medium and the terminal device of recommendation information | |
CN111210258A (en) | Advertisement putting method and device, electronic equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||