CN109670077B - Video recommendation method and device and computer-readable storage medium - Google Patents

Video recommendation method and device and computer-readable storage medium

Info

Publication number
CN109670077B
CN109670077B
Authority
CN
China
Prior art keywords
video
user
neural network
rate
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811293507.5A
Other languages
Chinese (zh)
Other versions
CN109670077A (en)
Inventor
蔡锦龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201811293507.5A priority Critical patent/CN109670077B/en
Publication of CN109670077A publication Critical patent/CN109670077A/en
Application granted granted Critical
Publication of CN109670077B publication Critical patent/CN109670077B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a video recommendation method, a video recommendation device, and a computer-readable storage medium. The video recommendation method comprises the following steps: acquiring user characteristics of sample users and video characteristics of sample videos; jointly learning the click rate, the like rate and the attention rate in a plurality of user-side neural networks and a plurality of video-side neural networks; and obtaining a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning. Because the designed neural network algorithm can estimate the click rate, the like rate and the attention rate of a user for a video simultaneously, the efficiency of video recommendation is greatly improved.

Description

Video recommendation method and device and computer-readable storage medium
Technical Field
The application belongs to the field of computer software application, and particularly relates to a video recommendation method and device.
Background
With continuing technological progress and the popularization of the internet, more and more people transmit information and share their lives through videos, so personalized recommendation from massive video collections is increasingly important. A widely used approach at present is to estimate targets such as a user's click rate for a video through machine learning methods.
In the related art, video recommendation based on large-scale discrete deep learning separates a user-side network and a video-side network, transforms user features and video features in the respective networks, and then learns the parameters of the neural networks by minimizing a loss function, so as to estimate targets such as a user's click rate for a video.
However, because such techniques transform user features and video features separately in the user-side and video-side networks, the resulting prediction models for targets such as the click rate are inaccurate, which reduces prediction accuracy. Moreover, the Euclidean distance and the cosine distance used to estimate these targets are not well suited to the video recommendation scenario, which further reduces accuracy. Finally, the deep learning algorithms used can estimate only one target model at a time, which reduces the efficiency of estimating targets such as the click rate.
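As an illustration of the separated user-side/video-side ("two-tower") structure described above, the sketch below transforms each side's features layer by layer and scores a user-video pair. The layer sizes, ReLU activations and NumPy implementation are assumptions for illustration, not details from this application.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tower_forward(x, layers):
    """Transform a feature vector layer by layer, bottom to top."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x  # top-level vector of this side

def make_tower(sizes, seed):
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

# Hypothetical dimensions: 32-d user features, 48-d video features,
# both mapped to 16-d top-level vectors.
user_tower = make_tower([32, 64, 16], seed=0)
video_tower = make_tower([48, 64, 16], seed=1)

rng = np.random.default_rng(2)
u = tower_forward(rng.normal(size=32), user_tower)   # user-side top-level vector
v = tower_forward(rng.normal(size=48), video_tower)  # video-side top-level vector
score = float(u @ v)  # one scalar used to compare user-video pairs
```

Because the two towers only meet at the final score, video-side vectors can be computed independently of any particular user, which is what makes this structure attractive for large-scale recommendation.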
Disclosure of Invention
In order to overcome the problems in the related art, the application discloses a video recommendation method and device in which the click rate, the like rate and the attention rate are jointly learned in a plurality of user-side neural networks and a plurality of video-side neural networks, and a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning, so as to realize accurate and efficient video recommendation.
According to a first aspect of embodiments of the present application, there is provided a video recommendation method, including:
acquiring user characteristics of sample users and video characteristics of sample videos;
jointly learning the click rate, the like rate and the attention rate in a plurality of user-side neural networks and a plurality of video-side neural networks; and
obtaining a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning.
Optionally, the user characteristics of the sample user comprise at least one of: an ID characteristic of the user, a static characteristic of the user, and a dynamic characteristic of the user.
Optionally, the dynamic characteristics of the user comprise at least one of the following characteristics: a user click history feature, a user likes history feature, and a user concerns list feature.
Optionally, the video features of the sample video include at least one of: ID features of the video, ID features of the video author, video tag features, and statistical features of the video.
Optionally, the user features of the sample user and the video features of the sample video are first user features and first video features, respectively;
the ID feature of the user is a second user feature;
the ID feature of the video and the ID feature of the video author are second video features;
splicing the first user characteristic and the second user characteristic to obtain a third user characteristic; and
splicing the first video characteristic and the second video characteristic to obtain a third video characteristic.
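The "splicing" above is concatenation of feature vectors. A minimal sketch, with hypothetical dimensions and values:

```python
import numpy as np

# First user feature: the full feature set (ID, static and dynamic features).
first_user = np.array([0.2, 0.5, 0.1, 0.9])   # hypothetical 4-d vector
# Second user feature: the user's ID feature alone.
second_user = np.array([0.3, 0.7])            # hypothetical 2-d vector

# Third user feature: splice (concatenate) the first and second features.
third_user = np.concatenate([first_user, second_user])

# The video side is spliced the same way from the first video feature and
# the second video feature (video ID and video-author ID features).
```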
Optionally, the step of performing joint learning on the click rate, the like rate and the attention rate at the plurality of user-side neural networks and the plurality of video-side neural networks respectively includes:
respectively establishing a click rate model, a like rate model and an attention rate model in a user side neural network and a video side neural network based on a neural network algorithm;
performing forward learning of the click rate model, the like rate model and the attention rate model in the user-side neural network and the video-side neural network; and
performing reverse learning of the click rate model, the like rate model and the attention rate model in the user-side neural network and the video-side neural network.
Optionally, the step of establishing a click rate model, a like rate model and an attention rate model in the user-side neural network and the video-side neural network respectively based on the neural network algorithm includes:
establishing click rate models in a first user-side neural network and a first video-side neural network based on a neural network algorithm; and
establishing a like rate model and an attention rate model in a second user-side neural network and a second video-side neural network based on a neural network algorithm.
Optionally, the step of performing forward learning on the click rate model, the like rate model and the attention rate model at the user-side neural network and the video-side neural network respectively includes:
inputting the first user characteristic and the first video characteristic into the first user-side neural network and the first video-side neural network, respectively;
transforming the first user characteristic and the first video characteristic layer by layer from bottom to top in the first user-side neural network and the first video-side neural network, respectively, to obtain a top-level vector of the click rate on the user side and a top-level vector of the click rate on the video side.
Optionally, the step of performing forward learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively;
transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user-side neural network and the second video-side neural network, respectively, to obtain a top-level vector of the like rate on the user side and a top-level vector of the like rate on the video side.
Optionally, the step of performing forward learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively;
transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user-side neural network and the second video-side neural network, respectively, to obtain a top-level vector of the attention rate on the user side and a top-level vector of the attention rate on the video side.
Optionally, the step of performing forward learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
calculating the inner product distance between the user-side and video-side top-level vectors of the click rate, the inner product distance between the user-side and video-side top-level vectors of the like rate, and the inner product distance between the user-side and video-side top-level vectors of the attention rate.
Optionally, the step of performing forward learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
converting the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate into a probability of the click rate, a probability of the like rate and a probability of the attention rate, respectively.
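The application does not spell out the conversion formula. A common choice, assumed here, is to pass the inner product of the two top-level vectors through a sigmoid to obtain a probability in (0, 1):

```python
import math

def inner_product(u, v):
    return sum(a * b for a, b in zip(u, v))

def to_probability(score):
    # Assumed conversion: a sigmoid squashes the inner product into (0, 1).
    return 1.0 / (1.0 + math.exp(-score))

u_click = [0.5, -0.2, 0.8]  # user-side top-level vector (illustrative values)
v_click = [0.4, 0.1, 0.3]   # video-side top-level vector (illustrative values)

p_click = to_probability(inner_product(u_click, v_click))
# The like rate and attention rate probabilities are obtained the same way
# from their own pairs of top-level vectors.
```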
Optionally, the step of performing reverse learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively includes:
calculating a loss function of the click rate model according to the probability of the click rate model and the sample label;
calculating a loss function of the like rate model according to the probability of the like rate model and the sample label; and
calculating a loss function of the attention rate model according to the probability of the attention rate model and the sample label.
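The application does not give the loss functions explicitly. The cross-entropy between the predicted probability and the 0/1 sample label, sketched below, is the standard choice for such binary targets and is assumed here:

```python
import math

def cross_entropy(p, label, eps=1e-7):
    """Loss between a predicted probability p and a 0/1 sample label."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

# One loss per model: click rate, like rate and attention rate.
loss_click = cross_entropy(0.8, 1)   # predicted 0.8 for a positive sample
loss_like = cross_entropy(0.3, 0)    # predicted 0.3 for a negative sample
total_loss = loss_click + loss_like  # per-model losses can be combined
```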
Optionally, the step of performing reverse learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
minimizing the loss function of the click rate model by stochastic gradient descent;
solving the gradient of the loss function of the click rate model;
updating the network parameters of the first user-side neural network and the first video-side neural network layer by layer from top to bottom; and
updating the network parameters corresponding to the first user characteristic and the first video characteristic.
Optionally, the step of performing reverse learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network respectively further includes:
minimizing the loss function of the like rate model by stochastic gradient descent;
solving the gradient of the loss function of the like rate model;
updating the network parameters of the second user-side neural network and the second video-side neural network of the like rate model layer by layer from top to bottom; and
updating the network parameters corresponding to the second user characteristic and the second video characteristic of the like rate model.
Optionally, the step of performing reverse learning on the click rate model, the like rate model and the attention rate model in the user-side neural network and the video-side neural network respectively further includes:
minimizing the loss function of the attention rate model by stochastic gradient descent;
solving the gradient of the loss function of the attention rate model;
updating the network parameters of the second user-side neural network and the second video-side neural network of the attention rate model layer by layer from top to bottom; and
updating the network parameters corresponding to the second user characteristic and the second video characteristic of the attention rate model.
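One top-to-bottom stochastic-gradient-descent update for a single linear layer can be sketched as follows; the learning rate and layer shapes are illustrative assumptions:

```python
import numpy as np

def sgd_layer_step(W, b, x, grad_out, lr=0.01):
    """Update one linear layer y = W @ x + b in place.

    grad_out is the gradient of the loss with respect to y. The function
    returns the gradient with respect to x, so the caller can continue
    the layer-by-layer, top-to-bottom update described above."""
    grad_x = W.T @ grad_out          # propagate down before overwriting W
    W -= lr * np.outer(grad_out, x)  # gradient of the loss w.r.t. W
    b -= lr * grad_out               # gradient of the loss w.r.t. b
    return grad_x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
grad_below = sgd_layer_step(W, b, x, grad_out=np.ones(3))
```

Applying this step layer by layer from the top of each tower down to the input embeddings matches the top-to-bottom update order the clauses above describe.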
Optionally, before the step of obtaining the user characteristics of the sample user and the video characteristics of the sample video, the method further includes: a sample user and a sample video are obtained, and a sample label is labeled for the sample video.
Optionally, in the click rate model, if the sample user clicks the sample video displayed on the operation page, the sample video is marked as a positive sample, and if the sample user does not click the sample video displayed on the operation page, the sample video is marked as a negative sample;
in the like rate model, if the sample user clicks and likes the sample video, the sample video is marked as a positive sample, and if the sample user clicks but does not like the sample video, the sample video is marked as a negative sample; and
in the attention rate model, if the sample user clicks the sample video and follows the video author of the sample video, the sample video is marked as a positive sample, and if the sample user clicks the sample video but does not follow the video author of the sample video, the sample video is marked as a negative sample.
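The three labeling rules above can be written as one function. The dictionary keys are hypothetical field names; the like and attention labels are left undefined (None) for unclicked samples, since those two rules only distinguish among clicked videos:

```python
def label_sample(interaction):
    """Return (click_label, like_label, attention_label) for one
    displayed sample video, following the rules above."""
    click = 1 if interaction["clicked"] else 0
    like = None if not click else (1 if interaction["liked"] else 0)
    attention = None if not click else (1 if interaction["followed_author"] else 0)
    return click, like, attention

labels = label_sample({"clicked": True, "liked": False, "followed_author": True})
```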
Optionally, the step of obtaining a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning includes:
receiving a video acquisition request of a target user;
acquiring video characteristics and user characteristics of the target user;
calculating a top-level vector of the click rate on the user side and a top-level vector of the click rate on the video side according to the first user-side neural network and the first video-side neural network;
calculating a top-level vector of the like rate on the user side and a top-level vector of the like rate on the video side according to the second user-side neural network and the second video-side neural network;
calculating a top-level vector of the attention rate on the user side and a top-level vector of the attention rate on the video side according to the second user-side neural network and the second video-side neural network;
calculating the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate from the corresponding user-side and video-side top-level vectors; and
sorting the target videos according to the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate to obtain a recommendation list of target videos.
Optionally, the video-side neural networks periodically calculate the top-level vector of the click rate, the top-level vector of the like rate and the top-level vector of the attention rate on the video side.
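Because the video-side top-level vectors do not depend on the requesting user, they can be computed periodically and cached rather than recomputed on every request. A minimal sketch of that idea; the refresh interval, in-memory storage and callback interface are assumptions:

```python
import time

class VideoVectorCache:
    """Periodically refreshed cache of video-side top-level vectors."""

    def __init__(self, compute_vectors, refresh_seconds=3600.0):
        self._compute = compute_vectors  # () -> {video_id: (click, like, attention) vectors}
        self._refresh = refresh_seconds  # hypothetical refresh interval
        self._vectors = {}
        self._stamp = None

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        if self._stamp is None or now - self._stamp >= self._refresh:
            self._vectors = self._compute()  # recompute all three vectors per video
            self._stamp = now
        return self._vectors

calls = []
def fake_compute():
    calls.append(1)
    return {"v1": ([1.0], [0.5], [0.2])}

cache = VideoVectorCache(fake_compute)
cache.get(now=0.0)
cache.get(now=10.0)  # within the interval: served from the cache
```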
According to a second aspect of the embodiments of the present application, there is provided a video recommendation apparatus including:
a feature extraction unit: configured to obtain user characteristics of a sample user and video characteristics of a sample video;
a joint learning unit: configured to jointly learn click rates, like rates and attention rates at a plurality of user-side neural networks and a plurality of video-side neural networks, respectively;
an online video recommendation unit: configured to obtain a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning.
Optionally, the video recommendation apparatus further includes:
a sample collection unit: is configured to obtain a sample user and a sample video, and label the sample video with a sample label.
Optionally, the feature extraction unit is further configured to periodically acquire video features of the sample video;
the joint learning unit is further configured to cause the video-side neural networks to periodically calculate the top-level vector of the click rate, the top-level vector of the like rate and the top-level vector of the attention rate on the video side.
according to a third aspect of the embodiments of the present invention, there is provided a video recommendation apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any one of the video recommendation methods described above.
According to a fourth aspect of the embodiments of the present invention, there is provided a computer-readable storage medium storing computer instructions which, when executed, implement the above-mentioned video recommendation method.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the video recommendation method, the click rate, the like rate and the attention rate are jointly learned in a plurality of user-side neural networks and a plurality of video-side neural networks, and a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning. The designed neural network algorithm can estimate the click rate, the like rate and the attention rate of a user for a video simultaneously, which greatly improves the efficiency of video recommendation.
In the video recommendation method, click rate models are established in a first user-side neural network and a first video-side neural network based on a neural network algorithm, and a like rate model and an attention rate model are established in a second user-side neural network and a second video-side neural network based on a neural network algorithm. The user side and the video side each use two neural networks to estimate the click rate, the like rate and the attention rate of the user for the video, which further improves the efficiency of video recommendation.
In the video recommendation method, the user characteristics of the sample users and the video characteristics of the sample videos of the click rate model are multiplexed by the like rate model and the attention rate model, so that the like rate model and the attention rate model can learn the behavior characteristics of the click rate model, which improves the accuracy of video recommendation.
In the video recommendation method, an inner product distance suited to the video recommendation scenario is designed to represent the click rate, the like rate and the attention rate of the user for the video, and suitable loss functions are designed to update the network parameters of the neural network algorithms of the click rate model, the like rate model and the attention rate model layer by layer, which further improves the accuracy of video recommendation.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
Fig. 1 is a flow diagram illustrating a video recommendation method according to an example embodiment.
Fig. 2 is a flow diagram illustrating a video recommendation method according to an example embodiment.
Fig. 3 is a flow diagram illustrating a video recommendation method according to an example embodiment.
Fig. 4 is a flow diagram illustrating a video recommendation method according to an example embodiment.
Fig. 5 is a schematic diagram illustrating a video recommendation apparatus according to an example embodiment.
Fig. 6 is a block diagram illustrating an apparatus for performing a video recommendation method according to an example embodiment.
Fig. 7 is a block diagram illustrating an apparatus for performing a video recommendation method according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a flowchart of a video recommendation method according to an exemplary embodiment, which specifically includes the following steps:
in step S101, user characteristics of sample users and video characteristics of sample videos are acquired.
In step S102, joint learning is performed on the click rate, the like rate, and the attention rate at the plurality of user-side neural networks and the plurality of video-side neural networks, respectively.
In step S103, a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning.
In one embodiment of the present application, user characteristics of a sample user and video characteristics of a sample video are first acquired. Then, the click rate, the like rate and the attention rate are jointly learned in a plurality of user-side neural networks and a plurality of video-side neural networks. Finally, a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning.
According to the embodiment of the application, joint learning is carried out on the click rate, the like rate and the attention rate respectively at a plurality of user side neural networks and a plurality of video side neural networks, and a recommendation list of videos is obtained according to network parameters of a neural network algorithm obtained by the joint learning. The designed neural network algorithm can simultaneously estimate the click rate, the like rate and the attention rate of the user to the video, and the video recommendation efficiency is greatly improved.
Fig. 2 is a flowchart of a video recommendation method according to an exemplary embodiment, which specifically includes the following steps:
in step S201, a sample user and a sample video are acquired, and a sample label is labeled to the sample video.
In step S202, user characteristics of the sample user and video characteristics of the sample video are acquired.
In step S203, click rate models are established in a first user-side neural network and a first video-side neural network based on a neural network algorithm, and a like rate model and an attention rate model are established in a second user-side neural network and a second video-side neural network based on a neural network algorithm.
In step S204, the click rate model, the like rate model and the attention rate model are forward learned in the user-side neural network and the video-side neural network, respectively.
In step S205, the click rate model, the like rate model, and the attention rate model are reversely learned in the user-side neural network and the video-side neural network, respectively.
In step S206, a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning.
In one embodiment of the present application, a sample user and a sample video are first acquired, and the sample video is labeled with a sample label. Then, user characteristics of the sample user and video characteristics of the sample video are acquired. Next, click rate models are established in the first user-side neural network and the first video-side neural network, and a like rate model and an attention rate model are established in the second user-side neural network and the second video-side neural network, based on a neural network algorithm. The click rate model, the like rate model and the attention rate model are then forward learned in the user-side and video-side neural networks, and afterwards reverse learned in the user-side and video-side neural networks. Finally, a recommendation list of videos is obtained according to the network parameters of the neural network algorithm obtained by the joint learning.
According to the embodiment of the application, click rate models are established in a first user-side neural network and a first video-side neural network based on a neural network algorithm, and a like rate model and an attention rate model are established in a second user-side neural network and a second video-side neural network based on a neural network algorithm. The user side and the video side each use two neural networks to estimate the click rate, the like rate and the attention rate of the user for the video, which further improves the efficiency of video recommendation.
In an optional embodiment, the user characteristics of the sample user comprise at least one of the following characteristics: an ID characteristic of the user, a static characteristic of the user, and a dynamic characteristic of the user. The dynamic characteristics of the user include at least one of the following characteristics: a user click history feature, a user likes history feature, and a user concerns list feature. The video features of the sample video include at least one of: ID features of the video, ID features of the video author, video tag features, and statistical features of the video. The user characteristics of the sample user and the video characteristics of the sample video are first user characteristics and first video characteristics respectively; the ID feature of the user is a second user feature; the ID feature of the video and the ID feature of the video author are second video features; splicing the first user characteristic and the second user characteristic to obtain a third user characteristic; and splicing the first video characteristic and the second video characteristic to obtain a third video characteristic.
According to the embodiment, the user characteristics of the sample users and the video characteristics of the sample videos of the click rate model are multiplexed by the like rate model and the attention rate model, so that the like rate model and the attention rate model can learn the behavior characteristics of the click rate model, which improves the accuracy of video recommendation.
In an optional embodiment, in the click rate model, if the sample user clicks the sample video displayed on the operation page, the sample video is marked as a positive sample; and if the sample user does not click the sample video displayed on the operation page, marking the sample video as a negative sample. In one embodiment, the sample label of the positive sample is labeled 1 and the sample label of the negative sample is labeled 0.
In the like rate model, if the sample user clicks and likes the sample video, the sample video is marked as a positive sample; if the sample user clicks but does not like the sample video, the sample video is marked as a negative sample. In one embodiment, the sample label of the positive sample is labeled 1 and the sample label of the negative sample is labeled 0.
In the attention rate model, if the sample user clicks the sample video and follows the video author of the sample video, the sample video is marked as a positive sample; if the sample user clicks the sample video but does not follow the video author of the sample video, the sample video is marked as a negative sample. In one embodiment, the sample label of the positive sample is labeled 1 and the sample label of the negative sample is labeled 0.
In an optional embodiment, the step of obtaining the recommendation list of videos according to the network parameters of the neural network algorithm obtained by joint learning includes: receiving a video acquisition request of a target user; acquiring video characteristics and user characteristics of the target user; respectively calculating a top vector of the click rate of the user side and a top vector of the click rate of the video side according to the first user side neural network and the first video side neural network; respectively calculating a top vector of the like rate of the user side and a top vector of the like rate of the video side according to the second user side neural network and the second video side neural network; respectively calculating a top vector of the attention rate of the user side and a top vector of the attention rate of the video side according to the second user side neural network and the second video side neural network; respectively calculating the inner product distance of the top vector of the click rate of the user side and the top vector of the click rate of the video side, the inner product distance of the top vector of the like rate of the user side and the top vector of the like rate of the video side, and the inner product distance of the top vector of the attention rate of the user side and the top vector of the attention rate of the video side; and sorting the target videos according to the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate to obtain a recommendation list of the target videos. The larger the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate, the larger the probability that the target user clicks, likes and follows the target video. When online video recommendation is carried out, a sorting formula is adopted to rank the videos.
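The ranking step above can be sketched as follows. The weighted sum combining the three inner products is an illustrative assumption — the patent states only that a sorting formula is used, not its exact form:

```python
import numpy as np

def rank_videos(user_vecs, video_vecs, weights=(1.0, 1.0, 1.0)):
    """Rank candidate videos for one target user.

    user_vecs: three user-side top vectors (click, like, follow), each shape (d,).
    video_vecs: three (n_videos, d) matrices of video-side top vectors.
    Scores each video by a weighted sum of its three inner product distances
    and returns video indices ordered best-first.
    """
    scores = sum(w * (M @ u) for w, u, M in zip(weights, user_vecs, video_vecs))
    return np.argsort(-scores)  # descending: larger inner product ranks first

# Two candidate videos in a 2-dimensional top-vector space (toy data).
user_vecs = (np.array([1.0, 0.0]),) * 3
video_vecs = (np.array([[0.0, 1.0], [1.0, 0.0]]),) * 3
order = rank_videos(user_vecs, video_vecs)
```

Here video 1 aligns with the user vector in all three models, so it ranks first.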
In an alternative embodiment, the video side neural network periodically calculates the top vector of the click rate, the top vector of the like rate and the top vector of the attention rate of the video side. After receiving a video acquisition request of a target user, the user characteristics of the target user are acquired; a top vector of the click rate of the user side is calculated according to the first user side neural network; a top vector of the like rate of the user side is calculated according to the second user side neural network; a top vector of the attention rate of the user side is calculated according to the second user side neural network; the inner product distance between the top vector of the click rate of the user side and the periodically calculated top vector of the click rate of the video side, the inner product distance between the top vector of the like rate of the user side and the periodically calculated top vector of the like rate of the video side, and the inner product distance between the top vector of the attention rate of the user side and the periodically calculated top vector of the attention rate of the video side are calculated respectively; and the target videos are sorted according to the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate to obtain a recommendation list of the videos.
According to this embodiment, the video side neural network periodically calculates the top vector of the click rate, the top vector of the like rate and the top vector of the attention rate of the video side. During online recommendation, these video-side top vectors do not depend on the target user, so they can be calculated in advance, which saves algorithm computation time and improves video recommendation efficiency.
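A sketch of this precompute-and-cache pattern. The single linear layer standing in for each trained multi-layer video-side network, the dimensions, and the cache layout are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single linear layers standing in for the three trained video-side networks
# (click / like / follow); the real networks are multi-layer.
W_click, W_like, W_follow = (rng.standard_normal((4, 8)) for _ in range(3))
video_features = rng.standard_normal((1000, 8))  # features of all candidate videos

# Offline job, run periodically: the video-side top vectors do not depend on
# the target user, so they are computed once for the whole catalogue and cached.
video_cache = {
    "click":  video_features @ W_click.T,   # (1000, 4) top vectors
    "like":   video_features @ W_like.T,
    "follow": video_features @ W_follow.T,
}

# Online request path: only the cheap user-side computation runs per request,
# reusing the cached video-side vectors.
user_top_click = rng.standard_normal(4)
click_scores = video_cache["click"] @ user_top_click  # one inner product per video
```

The per-request cost is then one small user-side forward pass plus a matrix-vector product, rather than a forward pass over every candidate video.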
Fig. 3 is a flowchart of a video recommendation method according to an exemplary embodiment. The method includes the following steps:
in step S301, inputting the first user characteristic and the first video characteristic into the first user-side neural network and the first video-side neural network, respectively; and respectively transforming the first user characteristic and the first video characteristic layer by layer from bottom to top in the first user side neural network and the first video side neural network to respectively obtain a top vector of the click rate of the user side and a top vector of the click rate of the video side.
In step S302, inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively; and respectively transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network to respectively obtain a top vector of the praise rate of the user side and a top vector of the praise rate of the video side.
In step S303, inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively; and respectively transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network to respectively obtain a top vector of the attention rate of the user side and a top vector of the attention rate of the video side.
In step S304, the inner product distance of the top-level vector of the user-side click rate and the top-level vector of the video-side click rate, the inner product distance of the top-level vector of the user-side approval rate and the top-level vector of the video-side approval rate, and the inner product distance of the top-level vector of the user-side attention rate and the top-level vector of the video-side attention rate are calculated, respectively.
In step S305, the inner product distance of the top-level vector of the user-side click rate and the top-level vector of the video-side click rate, the inner product distance of the top-level vector of the user-side approval rate and the top-level vector of the video-side approval rate, and the inner product distance of the top-level vector of the user-side attention rate and the top-level vector of the video-side attention rate are converted into a probability of click rate, a probability of approval rate, and a probability of attention rate, respectively.
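Steps S301–S305 can be sketched as a two-tower forward pass. The layer sizes, the ReLU activation, and the random weights are illustrative assumptions — the patent specifies only a bottom-up, layer-by-layer transform followed by an inner product and a probability conversion:

```python
import numpy as np

def mlp_forward(x, layers):
    """Bottom-up, layer-by-layer transform of an input feature vector.
    The ReLU activation is an illustrative choice; the patent does not
    name an activation function."""
    for W, b in layers:
        x = np.maximum(W @ x + b, 0.0)
    return x  # top-level vector

def sigmoid(a):
    # Converts an inner product distance into a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)

# Illustrative two-layer towers: 8-dimensional features -> 4-dimensional top vectors.
user_layers = [(rng.standard_normal((8, 8)), np.zeros(8)),
               (rng.standard_normal((4, 8)), np.zeros(4))]
video_layers = [(rng.standard_normal((8, 8)), np.zeros(8)),
                (rng.standard_normal((4, 8)), np.zeros(4))]

user_top = mlp_forward(rng.standard_normal(8), user_layers)    # user-side top vector
video_top = mlp_forward(rng.standard_normal(8), video_layers)  # video-side top vector
p_click = sigmoid(user_top @ video_top)  # inner product distance -> probability
```

The same pattern is repeated with the second user-side and second video-side networks to obtain the like-rate and attention-rate probabilities.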
In one embodiment of the present application, the click rate model, the like rate model, and the attention rate model are forward learned at the user-side neural network and the video-side neural network, respectively. Firstly, inputting the first user characteristic and the first video characteristic into the first user side neural network and the first video side neural network respectively; and respectively transforming the first user characteristic and the first video characteristic layer by layer from bottom to top in the first user side neural network and the first video side neural network to respectively obtain a top vector of the click rate of the user side and a top vector of the click rate of the video side. Then, inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network respectively; and respectively transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network to respectively obtain a top vector of the praise rate of the user side and a top vector of the praise rate of the video side. Secondly, inputting the third user characteristic and the third video characteristic into the second user side neural network and the second video side neural network respectively; and respectively transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network to respectively obtain a top vector of the attention rate of the user side and a top vector of the attention rate of the video side. 
The inner product distance between the top vector of the click rate of the user side and the top vector of the click rate of the video side, the inner product distance between the top vector of the like rate of the user side and the top vector of the like rate of the video side, and the inner product distance between the top vector of the attention rate of the user side and the top vector of the attention rate of the video side are then calculated respectively. Finally, these three inner product distances are converted into the probability of the click rate, the probability of the like rate and the probability of the attention rate, respectively.
According to this embodiment of the application, the inner product distance between the top-level vector of the click rate of the user side and the top-level vector of the click rate of the video side, the inner product distance between the top-level vector of the like rate of the user side and the top-level vector of the like rate of the video side, and the inner product distance between the top-level vector of the attention rate of the user side and the top-level vector of the attention rate of the video side are calculated respectively. An inner product distance suited to the video recommendation scenario is designed to represent the click rate, like rate and attention rate of a user for a video, further improving the accuracy of video recommendation.
The calculation formula of the inner product distance is as follows:
distance(A, B) = A · B = Σ (i = 1 to d) A_i B_i   (1)

wherein A, B ∈ R^d, A is the top-level vector of the user side, and B is the top-level vector of the video side.
In one embodiment, the top vector of the click rate, the top vector of the like rate and the top vector of the attention rate of the user side are A_1, A_2 and A_3, respectively; the top vector of the click rate, the top vector of the like rate and the top vector of the attention rate of the video side are B_1, B_2 and B_3, respectively; the corresponding inner product distances are distance(A_1, B_1), distance(A_2, B_2) and distance(A_3, B_3).
In an optional embodiment, the formula that converts the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate into the probability of the click rate, the probability of the like rate and the probability of the attention rate is the sigmoid function:
σ(a) = 1 / (1 + e^(−a))   (2)

wherein a is the inner product distance and σ(a) is the probability corresponding to a, with value range (0, 1).
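A minimal sketch of formula (2):

```python
import math

def sigmoid(a):
    """Formula (2): σ(a) = 1 / (1 + e^(−a)). Maps any inner product
    distance a to a probability in (0, 1), increasing monotonically in a."""
    return 1.0 / (1.0 + math.exp(-a))
```

For example, sigmoid(0.0) is exactly 0.5, and larger inner product distances map to probabilities closer to 1.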
Fig. 4 is a flowchart of a video recommendation method according to an exemplary embodiment; specifically, it shows a method for performing reverse learning on the click rate model, the like rate model and the attention rate model at the user-side neural network and the video-side neural network, respectively, which includes the following steps:
in step S401, calculating a loss function of the click rate model according to the probability of the click rate model and the sample label; calculating a loss function of the like rate model according to the probability of the like rate model and the sample label; and calculating a loss function of the attention rate model according to the probability of the attention rate model and the sample label.
In step S402, minimizing a loss function of the click rate model by using a stochastic gradient descent method; solving the gradient of the loss function of the click rate model; updating network parameters of the first user side neural network and the first video side neural network layer by layer from top to bottom respectively; and updating the network parameters corresponding to the first user characteristic and the first video characteristic.
In step S403, minimizing a loss function of the like rate model by using a stochastic gradient descent method; solving the gradient of the loss function of the praise rate model; updating network parameters of the second user-side neural network and the second video-side neural network of the like rate model layer by layer from top to bottom respectively; and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the praise rate model.
In step S404, minimizing a loss function of the interest rate model by using a stochastic gradient descent method; solving a gradient of a loss function of the interest rate model; updating network parameters of the second user side neural network and the second video side neural network of the attention rate model layer by layer from top to bottom respectively; and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the attention rate model.
In one embodiment of the present application, the click rate model, the like rate model and the attention rate model are reversely learned at the user side neural network and the video side neural network, respectively. Firstly, calculating a loss function of the click rate model according to the probability of the click rate model and a sample label; calculating a loss function of the like rate model according to the probability of the like rate model and the sample label; and calculating a loss function of the attention rate model according to the probability of the attention rate model and the sample label. Then, minimizing a loss function of the click rate model by adopting a random gradient descent method; solving the gradient of the loss function of the click rate model; updating network parameters of the first user side neural network and the first video side neural network layer by layer from top to bottom respectively; and updating the network parameters corresponding to the first user characteristic and the first video characteristic. Secondly, minimizing a loss function of the praise rate model by adopting a random gradient descent method; solving the gradient of the loss function of the praise rate model; updating network parameters of the second user-side neural network and the second video-side neural network of the like rate model layer by layer from top to bottom respectively; and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the praise rate model. 
Finally, minimizing a loss function of the attention rate model by adopting a random gradient descent method; solving a gradient of a loss function of the interest rate model; updating network parameters of the second user side neural network and the second video side neural network of the attention rate model layer by layer from top to bottom respectively; and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the attention rate model.
According to this embodiment of the application, the loss functions of the click rate model, the like rate model and the attention rate model are minimized by a stochastic gradient descent method: the gradient of each loss function is solved, and the network parameters of the neural network algorithms of the click rate model, the like rate model and the attention rate model are updated layer by layer, respectively. A loss function suited to the video recommendation scenario is designed to update these network parameters layer by layer, further improving the accuracy of video recommendation.
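One stochastic-gradient step at the top of the two towers can be sketched as follows. For the log loss with p = σ(A·B), the gradient with respect to A is (p − y)·B and with respect to B is (p − y)·A. The learning rate and toy vectors are illustrative assumptions, and in the full model this gradient is back-propagated top-down through every layer of both networks:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sgd_step(A, B, y, lr=0.1):
    """One stochastic-gradient step on the two top-level vectors for the
    log loss l = -y*log(p) - (1-y)*log(1-p) with p = sigmoid(A @ B).
    dl/dA = (p - y) * B and dl/dB = (p - y) * A."""
    p = sigmoid(A @ B)
    g = p - y
    return A - lr * g * B, B - lr * g * A

A = np.array([0.5, -0.2])  # user-side top vector (toy values)
B = np.array([0.3, 0.4])   # video-side top vector (toy values)
A_pos, B_pos = sgd_step(A, B, y=1)  # update on a positive sample
A_neg, B_neg = sgd_step(A, B, y=0)  # update on a negative sample
```

A positive sample pulls the two top vectors together (raising the predicted probability), while a negative sample pushes them apart.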
In an optional embodiment, the loss function of the click rate model is calculated according to the probability of the click rate model and the sample label; the loss function of the like rate model is calculated according to the probability of the like rate model and the sample label; and the loss function of the attention rate model is calculated according to the probability of the attention rate model and the sample label. The calculation formula of the loss function (log loss) is as follows:
l_t(A_t, B_t) = −y_t log p_t − (1 − y_t) log(1 − p_t)   (3)

wherein A_t, B_t ∈ R^d, A_t is the top-level vector of the user side, B_t is the top-level vector of the video side, p_t = σ(A_t · B_t) is the estimated probability of the click rate, the like rate or the attention rate, σ is the sigmoid function, and y_t ∈ {0, 1} is the sample label.
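A minimal sketch of formula (3); the clamping epsilon is a numerical safeguard not stated in the patent:

```python
import math

def log_loss(p, y):
    """Per-sample log loss from formula (3):
    l = -y*log(p) - (1 - y)*log(1 - p), with p clamped away from 0 and 1
    to avoid log(0) (a numerical detail, not part of the patent)."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    return -y * math.log(p) - (1 - y) * math.log(1 - p)
```

A confident prediction that matches the label yields a small loss, while a confident wrong prediction yields a large one.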
Fig. 5 is a schematic diagram illustrating a video recommendation apparatus according to an example embodiment. As shown in fig. 5, the apparatus 50 includes: the online video recommendation system comprises a feature extraction unit 501, a joint learning unit 502, an online video recommendation unit 503 and a sample acquisition unit 504.
Feature extraction unit 501: configured to obtain user characteristics of the sample user and video characteristics of the sample video.
The joint learning unit 502: configured to jointly learn click rates, like rates and attention rates at a plurality of user-side neural networks and a plurality of video-side neural networks, respectively.
The online video recommendation unit 503: configured to obtain a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning.
The sample acquisition unit 504: configured to obtain a sample user and a sample video, and label the sample video with a sample label.
In an optional embodiment, the feature extraction unit 501 is further configured to periodically obtain video features of the sample video; the joint learning unit 502 is further configured to periodically calculate a top vector of the click rate of the video side, a top vector of the like rate of the video side, and a top vector of the attention rate of the video side by the video side neural network.
Fig. 6 is a block diagram illustrating an apparatus 1200 for performing a video recommendation method according to an example embodiment. For example, the apparatus 1200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, the apparatus 1200 may include one or more of the following components: processing component 1202, memory 1204, power component 1206, multimedia component 1208, audio component 1210, input/output (I/O) interface 1212, sensor component 1214, and communications component 1216.
The processing component 1202 generally controls overall operation of the apparatus 1200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1202 may include one or more processors 1220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1202 can include one or more modules that facilitate interaction between the processing component 1202 and other components. For example, the processing component 1202 can include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation at the device 1200. Examples of such data include instructions for any application or method operating on the device 1200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1204 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power supply component 1206 provides power to the various components of the device 1200. Power components 1206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for apparatus 1200.
The multimedia components 1208 include a screen that provides an output interface between the device 1200 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1210 is configured to output and/or input audio signals. For example, audio component 1210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, audio assembly 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1214 includes one or more sensors for providing various aspects of state assessment for the apparatus 1200. For example, the sensor assembly 1214 may detect an open/closed state of the apparatus 1200 and the relative positioning of components, such as the display and keypad of the apparatus 1200; the sensor assembly 1214 may also detect a change in the position of the apparatus 1200 or a component of the apparatus 1200, the presence or absence of user contact with the apparatus 1200, the orientation or acceleration/deceleration of the apparatus 1200, and a change in the temperature of the apparatus 1200. The sensor assembly 1214 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1216 is configured to facilitate communications between the apparatus 1200 and other devices in a wired or wireless manner. The apparatus 1200 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 1216 receives the broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 1204 comprising instructions, executable by processor 1220 of apparatus 1200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 7 is a block diagram illustrating an apparatus 1300 that performs a video recommendation method according to an example embodiment. For example, the apparatus 1300 may be provided as a server. Referring to fig. 7, apparatus 1300 includes a processing component 1322, which further includes one or more processors, and memory resources, represented by memory 1332, for storing instructions, such as application programs, that may be executed by processing component 1322. The application programs stored in memory 1332 may include one or more modules that each correspond to a set of instructions. Further, processing component 1322 is configured to execute instructions to perform the video recommendation method described above.
The apparatus 1300 may also include a power component 1326 configured to perform power management for the apparatus 1300, a wired or wireless network interface 1350 configured to connect the apparatus 1300 to a network, and an input-output (I/O) interface 1358. The apparatus 1300 may operate based on an operating system stored in the memory 1332, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (24)

1. A method for video recommendation, comprising:
acquiring user characteristics of sample users and video characteristics of sample videos;
performing joint learning on the click rate, the like rate and the attention rate respectively at a plurality of user side neural networks and a plurality of video side neural networks;
the method for obtaining the recommendation list of the video according to the network parameters of the neural network algorithm obtained by the joint learning comprises the following steps: receiving a video acquisition request of a target user; acquiring video characteristics and user characteristics of the target user; respectively calculating a top vector of the click rate of the user side and a top vector of the click rate of the video side according to the first user side neural network and the first video side neural network; respectively calculating a top vector of the like rate of the user side and a top vector of the like rate of the video side according to the second user side neural network and the second video side neural network; respectively calculating a top vector of the attention rate of the user side and a top vector of the attention rate of the video side according to the second user side neural network and the second video side neural network; respectively calculating the inner product distance of the top vector of the click rate of the user side and the top vector of the click rate of the video side, the inner product distance of the top vector of the like rate of the user side and the top vector of the like rate of the video side, and the inner product distance of the top vector of the attention rate of the user side and the top vector of the attention rate of the video side; and sorting the target videos according to the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate to obtain a recommendation list of the target videos.
2. The video recommendation method of claim 1, wherein the user characteristics of the sample user comprise at least one of: an ID characteristic of the user, a static characteristic of the user, and a dynamic characteristic of the user.
3. The video recommendation method according to claim 2, wherein said dynamic characteristics of said user comprise at least one of the following features: a user click history feature, a user like history feature, and a user follow list feature.
4. The video recommendation method according to claim 3, wherein the video features of the sample video comprise at least one of: ID features of the video, ID features of the video author, video tag features, and statistical features of the video.
5. The video recommendation method according to claim 4, wherein the user features of the sample user and the video features of the sample video are first user features and first video features, respectively;
the ID feature of the user is a second user feature;
the ID feature of the video and the ID feature of the video author are second video features;
splicing the first user characteristic and the second user characteristic to obtain a third user characteristic; and
and splicing the first video characteristic and the second video characteristic to obtain a third video characteristic.
6. The video recommendation method according to claim 5, wherein the step of performing joint learning on click rate, like rate and attention rate at a plurality of user-side neural networks and a plurality of video-side neural networks respectively comprises:
respectively establishing a click rate model, a like rate model and an attention rate model in a user side neural network and a video side neural network based on a neural network algorithm;
respectively carrying out forward learning on the click rate model, the like rate model and the attention rate model on a user side neural network and a video side neural network;
and respectively carrying out reverse learning on the click rate model, the like rate model and the attention rate model at the user side neural network and the video side neural network.
7. The video recommendation method according to claim 6, wherein the step of establishing a click-through rate model, a like rate model and an attention rate model in the user-side neural network and the video-side neural network respectively based on the neural network algorithm comprises:
respectively establishing click rate models on a first user side neural network and a first video side neural network based on a neural network algorithm; and
establishing a like rate model and an attention rate model on the second user-side neural network and the second video-side neural network, respectively, based on a neural network algorithm.
8. The video recommendation method according to claim 7, wherein said step of forward learning said click-through rate model, like rate model and attention rate model at said user side neural network and said video side neural network, respectively, comprises:
inputting the first user characteristic and the first video characteristic into the first user-side neural network and the first video-side neural network, respectively;
and respectively transforming the first user characteristic and the first video characteristic layer by layer from bottom to top in the first user side neural network and the first video side neural network to respectively obtain a top vector of the click rate of the user side and a top vector of the click rate of the video side.
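The bottom-to-top, layer-by-layer transformation of claims 8–10 can be illustrated as a small tower (multilayer perceptron) whose final layer output is the top vector. The layer sizes and the ReLU activation are assumptions; the claims do not fix the architecture.

```python
def relu(x):
    """Elementwise ReLU activation (an assumed choice of nonlinearity)."""
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    """One fully connected layer: y = Wx + b, with W given as a list of rows."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def tower_forward(features, layers):
    """Transform an input feature vector layer by layer, bottom to top,
    through a user-side or video-side tower; the last layer's output
    is the top vector."""
    h = features
    for weights, biases in layers:
        h = relu(dense(h, weights, biases))
    return h
```
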
9. The video recommendation method according to claim 8, wherein said step of forward learning said click-through rate model, like-rate model and attention rate model at said user side neural network and said video side neural network, respectively, further comprises:
inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively;
and transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network, respectively, to obtain a top vector of the like rate of the user side and a top vector of the like rate of the video side.
10. The video recommendation method according to claim 9, wherein said step of forward learning said click-through rate model, like-rate model and attention rate model at said user side neural network and said video side neural network, respectively, further comprises:
inputting the third user characteristic and the third video characteristic into the second user-side neural network and the second video-side neural network, respectively;
and respectively transforming the third user characteristic and the third video characteristic layer by layer from bottom to top in the second user side neural network and the second video side neural network to respectively obtain a top vector of the attention rate of the user side and a top vector of the attention rate of the video side.
11. The video recommendation method according to claim 10, wherein said step of forward learning said click-through rate model, like-rate model and attention rate model at said user side neural network and said video side neural network, respectively, further comprises:
and calculating, respectively, the inner product distance between the top vector of the click rate of the user side and the top vector of the click rate of the video side, the inner product distance between the top vector of the like rate of the user side and the top vector of the like rate of the video side, and the inner product distance between the top vector of the attention rate of the user side and the top vector of the attention rate of the video side.
12. The video recommendation method according to claim 11, wherein said step of forward learning said click-through rate model, like-rate model and attention rate model at said user side neural network and said video side neural network, respectively, further comprises:
converting the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate into the probability of the click rate, the probability of the like rate and the probability of the attention rate, respectively.
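A common way to convert an inner product distance into a probability is the logistic (sigmoid) function; this is an assumed choice, since the claims only state that the distances are converted into probabilities.

```python
import math

def inner_product_to_probability(distance):
    """Map an inner product distance to a probability in (0, 1) via the
    logistic function (an assumption about the conversion used)."""
    return 1.0 / (1.0 + math.exp(-distance))
```
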
13. The video recommendation method according to claim 12, wherein the step of learning the click-through rate model, the like rate model and the attention rate model in a reverse direction in the user-side neural network and the video-side neural network, respectively, comprises:
calculating a loss function of the click rate model according to the probability of the click rate model and the sample label of the sample video;
calculating a loss function of the like rate model according to the probability of the like rate model and the sample label of the sample video; and
calculating a loss function of the attention rate model according to the probability of the attention rate model and the sample label of the sample video.
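For a model that outputs a probability and is trained against a 0/1 sample label, binary cross-entropy is the standard loss; the claims do not name the loss function, so this is an assumed instantiation of claim 13.

```python
import math

def binary_cross_entropy(p, label):
    """Per-sample loss given the model's predicted probability p and the
    0/1 sample label (cross-entropy is an assumed choice of loss)."""
    eps = 1e-12  # clip to avoid log(0)
    p = min(max(p, eps), 1.0 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))
```
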
14. The video recommendation method according to claim 13, wherein said step of learning back said click-through rate model, like-rate model and attention rate model at said user side neural network and video side neural network respectively, further comprises:
minimizing a loss function of the click rate model by adopting a stochastic gradient descent method;
solving the gradient of the loss function of the click rate model;
updating network parameters of the first user side neural network and the first video side neural network layer by layer from top to bottom respectively;
and updating the network parameters corresponding to the first user characteristic and the first video characteristic.
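A sketch of one stochastic-gradient-descent step at the topmost layer only, under the assumption that the probability is sigmoid(inner product) with cross-entropy loss: for that choice dL/d(distance) = p − label, so the gradient with respect to the user-side top vector is (p − label)·v and vice versa. Propagating the gradient further down the towers (the top-to-bottom layer updates of claim 14) is omitted.

```python
import math

def sgd_step_top_layer(u, v, label, lr=0.1):
    """One SGD step on the two top vectors for sigmoid(inner product)
    with cross-entropy loss; learning rate and loss are assumptions."""
    distance = sum(a * b for a, b in zip(u, v))
    p = 1.0 / (1.0 + math.exp(-distance))
    g = p - label                      # dL/d(distance)
    new_u = [a - lr * g * b for a, b in zip(u, v)]  # grad_u = g * v
    new_v = [b - lr * g * a for a, b in zip(u, v)]  # grad_v = g * u
    return new_u, new_v
```
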
15. The video recommendation method according to claim 14, wherein said step of learning back said click-through rate model, like-rate model and attention rate model at said user side neural network and video side neural network, respectively, further comprises:
minimizing a loss function of the like rate model by adopting a stochastic gradient descent method;
solving the gradient of the loss function of the like rate model;
updating network parameters of the second user-side neural network and the second video-side neural network of the like rate model layer by layer from top to bottom, respectively;
and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the like rate model.
16. The video recommendation method according to claim 15, wherein said step of learning back said click-through rate model, like rate model and attention rate model at said user-side neural network and said video-side neural network, respectively, further comprises:
minimizing a loss function of the attention rate model by adopting a stochastic gradient descent method;
solving the gradient of the loss function of the attention rate model;
updating network parameters of the second user side neural network and the second video side neural network of the attention rate model layer by layer from top to bottom respectively;
and updating the network parameters corresponding to the second user characteristic and the second video characteristic of the attention rate model.
17. The video recommendation method of claim 16, wherein said step of obtaining user characteristics of the sample user and video characteristics of the sample video is preceded by the step of: a sample user and a sample video are obtained, and a sample label is labeled for the sample video.
18. The video recommendation method according to claim 17, wherein in the click rate model, if the sample user clicks the sample video displayed on the operation page, the sample video is labeled as a positive sample, and if the sample user does not click the sample video displayed on the operation page, the sample video is labeled as a negative sample;
in the like rate model, if the sample user clicks and likes the sample video, the sample video is labeled as a positive sample, and if the sample user clicks but does not like the sample video, the sample video is labeled as a negative sample; and
in the attention rate model, if the sample user clicks the sample video and pays attention to the video author of the sample video, the sample video is labeled as a positive sample, and if the sample user clicks the sample video but does not pay attention to the video author of the sample video, the sample video is labeled as a negative sample.
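The labeling rules of claim 18 can be sketched as a single function. Since the like and attention labels are only defined for clicked samples, `None` is used here to mark a sample excluded from those two models; how unclicked samples are actually handled for those models is an assumption.

```python
def label_sample(clicked, liked, followed_author):
    """Produce 0/1 sample labels for the click, like and attention models
    following the rules of claim 18; None marks a sample not used by a model."""
    click_label = 1 if clicked else 0
    like_label = (1 if liked else 0) if clicked else None
    follow_label = (1 if followed_author else 0) if clicked else None
    return {"click": click_label, "like": like_label, "follow": follow_label}
```
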
19. The video recommendation method according to claim 18, wherein the video side neural network periodically calculates a top vector of click rate, a top vector of like rate and a top vector of attention rate of the video side.
20. A video recommendation apparatus, comprising:
a feature extraction unit: configured to obtain user characteristics of a sample user and video characteristics of a sample video;
a joint learning unit: configured to jointly learn click rates, like rates and attention rates at a plurality of user-side neural networks and a plurality of video-side neural networks, respectively;
an online video recommendation unit: configured to obtain a recommendation list of videos according to the network parameters of the neural network algorithm obtained by the joint learning, by: receiving a video acquisition request of a target user; acquiring video features and user features of the target user; calculating a user-side top vector and a video-side top vector of the click rate according to the first user-side neural network and the first video-side neural network, respectively; calculating a user-side top vector and a video-side top vector of the like rate according to the second user-side neural network and the second video-side neural network, respectively; calculating a user-side top vector and a video-side top vector of the attention rate according to the second user-side neural network and the second video-side neural network, respectively; calculating the inner product distance between the user-side and video-side top vectors of the click rate, the inner product distance between the user-side and video-side top vectors of the like rate, and the inner product distance between the user-side and video-side top vectors of the attention rate, respectively; and ranking the target videos according to the inner product distance of the click rate, the inner product distance of the like rate and the inner product distance of the attention rate to obtain a recommendation list of the target videos.
21. The video recommendation device of claim 20, further comprising:
a sample collection unit: is configured to obtain a sample user and a sample video, and label the sample video with a sample label.
22. The video recommendation device according to claim 21, wherein the feature extraction unit is further configured to periodically obtain video features of the sample video;
the joint learning unit is further configured to periodically calculate a top vector of the click rate of the video side, a top vector of the like rate of the video side and a top vector of the attention rate of the video side by the video side neural network.
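Because the video-side top vectors do not depend on the requesting user, claim 22's periodic recomputation lets the online path run only the user-side forward pass plus inner products. A minimal cache sketch; the refresh interval and the `compute_fn` callback are illustrative assumptions:

```python
import time

class VideoVectorCache:
    """Periodically recomputed cache of video-side top vectors."""

    def __init__(self, compute_fn, refresh_seconds=3600.0):
        self.compute_fn = compute_fn          # returns {video_id: {"click": [...], ...}}
        self.refresh_seconds = refresh_seconds
        self.vectors = {}
        self.last_refresh = float("-inf")

    def get(self, now=None):
        """Return the cached vectors, recomputing them once per interval."""
        now = time.time() if now is None else now
        if now - self.last_refresh >= self.refresh_seconds:
            self.vectors = self.compute_fn()
            self.last_refresh = now
        return self.vectors
```
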
23. A video recommendation apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the video recommendation method of any one of claims 1 to 19.
24. A computer-readable storage medium storing computer instructions which, when executed, implement the video recommendation method of any one of claims 1 to 19.
CN201811293507.5A 2018-11-01 2018-11-01 Video recommendation method and device and computer-readable storage medium Active CN109670077B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811293507.5A CN109670077B (en) 2018-11-01 2018-11-01 Video recommendation method and device and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN109670077A CN109670077A (en) 2019-04-23
CN109670077B true CN109670077B (en) 2021-07-13

Family

ID=66141753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811293507.5A Active CN109670077B (en) 2018-11-01 2018-11-01 Video recommendation method and device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109670077B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446727B (en) * 2019-09-04 2023-09-12 百度在线网络技术(北京)有限公司 Advertisement triggering method, device, equipment and computer readable storage medium
CN110727827B (en) * 2019-10-22 2022-05-13 深圳墨世科技有限公司 Video list data aggregation method and device, computer equipment and storage medium
CN110996142B (en) * 2019-11-08 2021-12-07 北京奇艺世纪科技有限公司 Video recall method and device, electronic equipment and storage medium
CN111242239B (en) * 2020-01-21 2023-05-30 腾讯科技(深圳)有限公司 Training sample selection method, training sample selection device and computer storage medium
CN111126578B (en) * 2020-04-01 2020-08-25 阿尔法云计算(深圳)有限公司 Joint data processing method, device and system for model training
CN113194360B (en) * 2021-04-25 2022-06-24 北京达佳互联信息技术有限公司 Video recommendation method and device, server and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129227B1 (en) * 2012-12-31 2015-09-08 Google Inc. Methods, systems, and media for recommending content items based on topics
CN105701191A (en) * 2016-01-08 2016-06-22 腾讯科技(深圳)有限公司 Push information click rate estimation method and device
CN108416625A (en) * 2018-02-28 2018-08-17 阿里巴巴集团控股有限公司 The recommendation method and apparatus of marketing product
CN108537624A (en) * 2018-03-09 2018-09-14 西北大学 A kind of tourist service recommendation method based on deep learning
CN108573032A (en) * 2018-03-27 2018-09-25 麒麟合盛网络技术股份有限公司 Video recommendation method and device
CN108664658A (en) * 2018-05-21 2018-10-16 南京大学 A kind of collaborative filtering video recommendation method considering user preference dynamic change

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446970A (en) * 2014-06-10 2016-03-30 华为技术有限公司 Item recommendation method and device
US10643120B2 (en) * 2016-11-15 2020-05-05 International Business Machines Corporation Joint learning of local and global features for entity linking via neural networks
US20180204113A1 (en) * 2017-01-13 2018-07-19 Ebay Inc. Interaction analysis and prediction based neural networking
CN107220328B (en) * 2017-05-23 2020-05-19 南京大学 Social network-based weak relation and strong relation video recommendation method
CN108182621A (en) * 2017-12-07 2018-06-19 合肥美的智能科技有限公司 The Method of Commodity Recommendation and device for recommending the commodity, equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"DeepFM: A Factorization-Machine based Neural Network for CTR Prediction";Huifeng Guo 等;《https://arxiv.org/pdf/1703.04247.pdf》;20170313;1-8 *
"基于深度学习的推荐系统研究综述";黄立威 等;《计算机学报》;20180731;第41卷(第7期);1619-1647 *


Similar Documents

Publication Publication Date Title
CN109543066B (en) Video recommendation method and device and computer-readable storage medium
CN109800325B (en) Video recommendation method and device and computer-readable storage medium
CN109670077B (en) Video recommendation method and device and computer-readable storage medium
CN109871896B (en) Data classification method and device, electronic equipment and storage medium
CN109543069B (en) Video recommendation method and device and computer-readable storage medium
CN109961094B (en) Sample acquisition method and device, electronic equipment and readable storage medium
CN109670632B (en) Advertisement click rate estimation method, advertisement click rate estimation device, electronic device and storage medium
CN110781323A (en) Method and device for determining label of multimedia resource, electronic equipment and storage medium
CN112148980B (en) Article recommending method, device, equipment and storage medium based on user click
CN112784151B (en) Method and related device for determining recommended information
CN109842688B (en) Content recommendation method and device, electronic equipment and storage medium
CN110941727A (en) Resource recommendation method and device, electronic equipment and storage medium
CN112308588A (en) Advertisement putting method and device and storage medium
CN115512116A (en) Image segmentation model optimization method and device, electronic equipment and readable storage medium
CN110659726B (en) Image processing method and device, electronic equipment and storage medium
CN111949808B (en) Multimedia content similarity determination method and device, electronic equipment and storage medium
CN112434714A (en) Multimedia identification method, device, storage medium and electronic equipment
CN108154092B (en) Face feature prediction method and device
CN110929055A (en) Multimedia quality detection method and device, electronic equipment and storage medium
CN113190725B (en) Object recommendation and model training method and device, equipment, medium and product
CN110929771A (en) Image sample classification method and device, electronic equipment and readable storage medium
CN110674416A (en) Game recommendation method and device
CN114722238B (en) Video recommendation method and device, electronic equipment, storage medium and program product
CN112990240B (en) Method and related device for determining vehicle type
CN113254707B (en) Model determination method and device and associated media resource determination method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant