CN110837577A - Video recommendation method, device, equipment and storage medium - Google Patents

Video recommendation method, device, equipment and storage medium

Info

Publication number
CN110837577A
CN110837577A
Authority
CN
China
Prior art keywords
vector
video
user
model
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911067087.3A
Other languages
Chinese (zh)
Inventor
胡志超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Himalaya Technology Co Ltd
Original Assignee
Shanghai Himalaya Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Himalaya Technology Co Ltd filed Critical Shanghai Himalaya Technology Co Ltd
Priority to CN201911067087.3A priority Critical patent/CN110837577A/en
Publication of CN110837577A publication Critical patent/CN110837577A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 - Information retrieval of video data
    • G06F 16/73 - Querying
    • G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/953 - Querying, e.g. by the use of web search engines
    • G06F 16/9536 - Search customisation based on social or collaborative filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The invention discloses a video recommendation method, apparatus, device, and storage medium. The method includes: inputting user information and video information into a neural collaborative filtering model to which an attention mechanism has been added, the model being obtained by training on user data and video data; and outputting video recommendation information corresponding to the user information. By adding an attention mechanism to the neural collaborative filtering model, the feature vectors in the model are weighted and interacted, so that as the model parameters are continuously updated, the accuracy of the video recommendation information output by the model improves. This reflects how the attention mechanism captures the importance of different features and the correlations among them, and addresses the insufficient consideration of feature importance and feature correlation in the prior art, recommending videos of interest to the user more accurately and achieving a better recommendation effect.

Description

Video recommendation method, device, equipment and storage medium
Technical Field
Embodiments of the invention relate to intelligent recommendation technology, and in particular to a video recommendation method, apparatus, device, and storage medium.
Background
With the arrival of the big data era, users acquire information actively and increasingly turn to intelligent recommendation services. In the recommendation field, deep learning algorithms are widely applied, such as YouTube's deep recommendation model (a deep neural network) and the Neural Collaborative Filtering model in Google's open-source deep learning framework TensorFlow.
In a recommendation model based on a deep learning algorithm, feature vectors represent the user and the recommendation information. In a specific recommendation scenario, these feature vectors need to be fused, for example by averaging. The common approach averages with equal weights, so the recommendation accuracy is not high.
Disclosure of Invention
The invention provides a video recommendation method, apparatus, device, and storage medium, which are used to recommend videos in which a user is interested.
In a first aspect, an embodiment of the present invention provides a video recommendation method, including:
inputting user information and video information into a neural collaborative filtering model added with an attention mechanism, wherein the neural collaborative filtering model added with the attention mechanism is obtained according to training user data and training video data;
and outputting video recommendation information corresponding to the user information.
Optionally, before inputting the user information and the video information into the neural collaborative filtering model to which the attention mechanism is added, the method further includes:
and adding an attention mechanism into the neural collaborative filtering model, wherein the neural collaborative filtering model is a linear model, a nonlinear model or a combined model, and the combined model is formed by combining the linear model and the nonlinear model.
Optionally, the adding an attention mechanism to the neural collaborative filtering model includes:
acquiring the training user data and the training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector;
combining the user vector and the video vector by adopting a first activation function to generate a user weight vector and a video weight vector;
generating a new user vector according to the user vector and the video weight vector, and generating a new video vector according to the video vector and the user weight vector;
the new user vector and the new video vector are interacted to determine an interaction result, wherein the interaction result is a linear interaction result, a nonlinear interaction result or a combined interaction result;
and combining the interaction result by adopting a second activation function to obtain interest prediction information.
Optionally, the generating a new user vector according to the user vector and the video weight vector, and generating a new video vector according to the video vector and the user weight vector, includes:
if the neural collaborative filtering model is the linear model, splicing the user vector and the video weight vector to generate the new user vector, and splicing the video vector and the user weight vector to generate the new video vector;
and if the neural collaborative filtering model is the nonlinear model, multiplying the user vector by the video weight vector to generate the new user vector, and multiplying the video vector by the user weight vector to generate the new video vector.
Optionally, interacting the new user vector with the new video vector includes:
if the neural collaborative filtering model is the linear model, multiplying the new user vector and the new video vector to obtain the linear interaction result;
and if the neural collaborative filtering model is the nonlinear model, splicing the new user vector and the new video vector and inputting the result into a multilayer perceptron adopting a third activation function to obtain the nonlinear interaction result.
Optionally, if the neural collaborative filtering model is the combined model, after the new user vector and the new video vector are interacted, the method further includes:
splicing the linear interaction result and the nonlinear interaction result to obtain the combined interaction result.
In a second aspect, an embodiment of the present invention further provides a video recommendation apparatus, where the apparatus includes:
the input module is used for inputting the user information and the video information into the neural collaborative filtering model added with the attention mechanism, and the neural collaborative filtering model added with the attention mechanism is obtained according to training user data and training video data;
and the output module outputs video recommendation information corresponding to the user information.
Optionally, the apparatus further includes:
a model construction module, configured to add an attention mechanism to the neural collaborative filtering model, where the neural collaborative filtering model is a linear model, a nonlinear model, or a combined model formed by combining the linear model and the nonlinear model.
The model construction module includes:
the first vector generation unit is used for acquiring the training user data and the training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector;
a second vector generation unit, configured to generate a user weight vector and a video weight vector by using a first activation function in combination with the user vector and the video vector;
a third vector generating unit, configured to generate a new user vector according to the user vector and the video weight vector, and generate a new video vector according to the video vector and the user weight vector;
the interaction unit is used for interacting the new user vector with the new video vector to obtain an interaction result, wherein the interaction result is a linear interaction result, a nonlinear interaction result or a combined interaction result;
and the prediction unit is used for combining the interaction result by adopting a second activation function to obtain interest prediction information.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the video recommendation method according to any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the video recommendation method according to any embodiment of the present invention.
According to the invention, an attention mechanism is added to the neural collaborative filtering model: a weighting operation on the feature vectors corresponding to the user data and the video data generates weight vectors, the user feature vector is interacted with the video weight vector, and the video feature vector is interacted with the user weight vector. As the model parameters are continuously updated, the accuracy of the video recommendation information output by the model improves, reflecting how the attention mechanism captures the importance of different features and the correlations among them. This addresses the insufficient consideration of feature importance and feature correlation in the prior art, recommends videos of interest to the user more accurately, and achieves a better recommendation effect.
Drawings
Fig. 1 is a flowchart of a video recommendation method according to an embodiment of the present invention;
fig. 2 is a flowchart of a video recommendation method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a video recommendation method according to a second embodiment of the present invention;
fig. 4 is a flowchart of a video recommendation method according to a third embodiment of the present invention;
FIG. 5 is a schematic diagram of a linear model without an attention mechanism according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of a linear model with an attention mechanism added, provided in the third embodiment of the present invention;
fig. 7 is a flowchart of a video recommendation method according to a fourth embodiment of the present invention;
FIG. 8 is a schematic diagram of a non-linear model without an attention mechanism according to a fourth embodiment of the present invention;
FIG. 9 is a schematic diagram of a nonlinear model with an attention mechanism added, according to the fourth embodiment of the present invention;
fig. 10 is a flowchart of a video recommendation method according to a fifth embodiment of the present invention;
FIG. 11 is a schematic diagram of a combined model without an attention mechanism according to a fifth embodiment of the present invention;
FIG. 12 is a schematic diagram of a combined model with an attention mechanism added, according to the fifth embodiment of the present invention;
fig. 13 is a block diagram of a video recommendation apparatus according to a sixth embodiment of the present invention;
fig. 14 is a schematic structural diagram of a computer device according to a seventh embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a video recommendation method according to an embodiment of the present invention, where the present embodiment is applicable to a case of recommending video information of interest to a user, and the method may be executed by a video recommendation apparatus, and the apparatus may be implemented by software and/or hardware.
As shown in fig. 1, the method specifically includes the following steps:
and step 110, inputting the user information and the video information into the attention mechanism added neural collaborative filtering model, wherein the attention mechanism added neural collaborative filtering model is obtained according to training user data and training video data.
The user is the service main body of the method, and the user information is the related information for identifying the main body. Video information is the relevant information that identifies the video.
Neural Collaborative Filtering (NCF) is a general framework that can express and generalize matrix factorization. The key to a personalized recommender system is modeling user preference from the content of past user interactions, such as ratings and clicks, i.e., collaborative filtering.
The Attention Mechanism derives from research on human vision: in cognitive science, because of an information-processing bottleneck, humans selectively attend to a part of all available information while ignoring other visible information, acquiring what is needed to construct a description of the environment. In terms of mathematical formulas and code, attention can be understood as a weighted sum.
The training user data and the training video data are user data and video data in a training data set.
Specifically, a training data set is input into the built neural collaborative filtering model to generate a feature vector, a weighting factor is added to the feature vector, model parameters are continuously updated, and finally the neural collaborative filtering model with the attention mechanism is obtained.
And inputting the user information of the user to be recommended and the corresponding video information into the neural collaborative filtering model added with the attention mechanism.
And step 120, outputting video recommendation information corresponding to the user information.
The video recommendation information can be understood as the video information, corresponding to the user information, in which the user is interested; it can be a predicted interest value for each video or a list of recommended videos for the user.
According to the technical solution of this embodiment, an attention mechanism is added to the neural collaborative filtering model: a weighting operation on the feature vectors corresponding to the user data and the video data generates weight vectors, the user feature vector is interacted with the video weight vector, and the video feature vector is interacted with the user weight vector. As the model parameters are continuously updated, the accuracy of the video recommendation information output by the model improves, reflecting how the attention mechanism captures the importance of different features and the correlations among them. This addresses the insufficient consideration of feature importance and feature correlation in the prior art, recommends videos of interest to the user more accurately, and achieves a better recommendation effect.
Example two
Fig. 2 is a flowchart of a video recommendation method according to a second embodiment of the present invention. On the basis of the above embodiments, the present embodiment further optimizes the video recommendation method.
As shown in fig. 2, the method specifically includes:
and step 210, constructing a training data set according to the behavior data of the user to the video.
The behavior data is generated by the user's behavior toward videos; a user's behavior toward a video can be understood as whether the user has watched it or not.
Specifically, the method for constructing the training data set may be as follows:
the user and the video are described in symbols, the user is marked as user _ x, x is 1, 2, …, k represents the number of Users, and the user set Users is { user _1, user _2, …, user _ k }; the videos are denoted as a _ y, y is 1, 2, … and m, m represents the number of videos, and the video set a is { a _1, a _2, … and a _ m }.
And marking the behavior of the user on the video to generate behavior data (user _ x, a _ y, B), wherein B is a behavior mark which represents the behavior of the user on the video and takes the value of 0 or 1. When B is 0, the user _ x has no over-viewing behavior on the video a _ y, and the behavior data is a behavior data negative sample; when B is 0, the user _ x has an over-viewing behavior on the video a _ y, and the behavior data is a behavior data positive sample.
And selecting a preset number of positive samples of the behavior data, and randomly selecting a corresponding number of negative samples of the behavior data to form a training data set.
In a specific example, suppose users have viewing behavior data, i.e., viewing histories, on the video set A. Taking user_1 and user_2 as examples: user_1 has viewed videos a_1 and a_2; user_2 has viewed videos a_2, a_3, and a_10, as shown in Table 1:
Table 1: behavior data positive samples

User     Video   Behavior marker
user_1   a_1     1
user_1   a_2     1
user_2   a_2     1
user_2   a_3     1
user_2   a_10    1
These behavior data are all positive samples, but training requires both positive and negative samples at the same time, so negative sampling is needed. The negative sampling method pairs each positive sample with several negative samples: a video the user has watched serves as a positive sample, then several videos are randomly selected from the set of videos the user has not watched, and the corresponding behavior data serve as negative samples. In this way, the positive and negative samples required by the training model are obtained.
For example, negative behavior samples are randomly drawn to accompany the positive samples in Table 1; taking 4 negative samples per positive sample as an example, the positive and negative behavior samples shown in Table 2 are obtained:
Table 2: examples of positive and negative behavior data samples

User     Video   Behavior marker
user_1   a_1     1
user_1   a_5     0
user_1   a_8     0
user_1   a_11    0
user_1   a_20    0
user_1   a_2     1
user_1   a_50    0
user_1   a_9     0
user_1   a_30    0
user_1   a_39    0
user_2   a_2     1
user_2   a_1     0
user_2   a_24    0
user_2   a_29    0
user_2   a_43    0
……       ……      ……
The model parameters are updated using stochastic gradient descent over multiple iterations, and fresh positive and negative samples are generated in each iteration.
The selected positive and negative samples form the training data set used during model training.
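The sampling procedure above can be sketched in Python. This is a minimal illustration; the function and parameter names, e.g. `build_training_set` and `neg_per_pos`, are hypothetical and not part of the patent:

```python
import random

def build_training_set(positive_pairs, all_videos, neg_per_pos=4, seed=0):
    """For each observed (user, video) view, emit one positive sample
    (label 1) and `neg_per_pos` negative samples (label 0) drawn at
    random from the videos that user has never watched."""
    rng = random.Random(seed)
    watched = {}
    for user, video in positive_pairs:
        watched.setdefault(user, set()).add(video)
    samples = []
    for user, video in positive_pairs:
        samples.append((user, video, 1))
        unwatched = [v for v in all_videos if v not in watched[user]]
        for neg in rng.sample(unwatched, min(neg_per_pos, len(unwatched))):
            samples.append((user, neg, 0))
    return samples
```

Re-running this once per training pass matches the text's scheme of drawing fresh negative samples in each iteration of stochastic gradient descent.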
Step 220, adding an attention mechanism to the neural collaborative filtering model.
The neural collaborative filtering model is a linear model, a nonlinear model or a combined model, and the combined model is formed by combining the linear model and the nonlinear model.
As shown in fig. 3, an attention mechanism is added to the neural collaborative filtering model, and the method specifically includes:
step 2201, training user data and training video data are obtained, and vector mapping is carried out on the training user data and the training video data to generate user vectors and video vectors.
Specifically, the training user data and training video data in the training data set are obtained and used as the input layer; vector mapping is then performed on them, i.e., they are fed into an Embedding layer, so that each training user corresponds to one user vector and each training video corresponds to one video vector. The vector dimensionality can be about 50. The user vectors and video vectors are the feature vectors of the training users and training videos.
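As a rough illustration of the vector-mapping step: an Embedding layer amounts to a lookup table with one trainable row per id. The sketch below (names such as `make_embedding` are hypothetical) initialises such tables randomly, as they would be before training:

```python
import random

def make_embedding(num_ids, dim=50, seed=0):
    """Lookup table with one `dim`-dimensional trainable vector per id,
    randomly initialised as an Embedding layer would be before training."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(num_ids)]

user_table = make_embedding(num_ids=1000, dim=50)  # one row per training user
item_table = make_embedding(num_ids=5000, dim=50)  # one row per training video

user_vec = user_table[42]  # the user vector for user id 42
item_vec = item_table[7]   # the video vector for video id 7
```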
Step 2202, combining the user vector and the video vector with a first activation function to generate a user weight vector and a video weight vector.
Specifically, a fully connected layer is constructed and an activation function is used to generate the weight vectors: the user vector is connected to an output layer of the same dimension as the user vector, and that output serves as the user weight vector; similarly, the video vector is connected to an output layer of the same dimension as the video vector, and that output serves as the video weight vector.
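A minimal sketch of this step, assuming (as in Example III) that the activation function is softmax; the helper names are hypothetical:

```python
import math

def softmax(xs):
    """Numerically stable softmax: outputs are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(vec, fc_weights, fc_bias):
    """Fully connected layer whose output has the same dimension as `vec`,
    followed by a softmax activation; the result is a weight vector."""
    logits = [sum(w * x for w, x in zip(row, vec)) + b
              for row, b in zip(fc_weights, fc_bias)]
    return softmax(logits)
```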
And 2203, generating a new user vector according to the user vector and the video weight vector, and generating a new video vector according to the video vector and the user weight vector.
Specifically, the embodiment of the invention fully considers the interaction behavior between user and video. When generating a new user vector, the user vector and the video weight vector are interacted, i.e., the new user vector is computed from the user vector and the video weight vector; when generating a new video vector, the video vector and the user weight vector are interacted, i.e., the new video vector is computed from the video vector and the user weight vector.
Optionally, if the neural collaborative filtering model is a linear model, splicing the user vector and the video weight vector to generate a new user vector, and splicing the video vector and the user weight vector to generate a new video vector; if the neural collaborative filtering model is a nonlinear model, the product of the user vector and the video weight vector generates a new user vector, and the product of the video vector and the user weight vector generates a new video vector.
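These two variants can be sketched as follows (an illustration with hypothetical function names; vectors are plain Python lists):

```python
def new_vectors_linear(user_vec, item_vec, user_w, item_w):
    """Linear model: splice (concatenate) each feature vector with the
    other side's weight vector."""
    return user_vec + item_w, item_vec + user_w

def new_vectors_nonlinear(user_vec, item_vec, user_w, item_w):
    """Nonlinear model: element-wise product of each feature vector with
    the other side's weight vector."""
    new_user = [u * w for u, w in zip(user_vec, item_w)]
    new_item = [v * w for v, w in zip(item_vec, user_w)]
    return new_user, new_item
```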
And 2204, interacting the new user vector and the new video vector to determine an interaction result.
Specifically, if the neural collaborative filtering model is a linear model, the new user vector and the new video vector are interacted to obtain a linear interaction result, the linear interaction result is determined to be an interaction result, and step 2205 is performed.
If the neural collaborative filtering model is a nonlinear model, the new user vector and the new video vector are interacted to obtain a nonlinear interaction result, the nonlinear interaction result is determined to be an interaction result, and the step 2205 is carried out.
If the neural collaborative filtering model is a combined model, the new user vector and the new video vector are interacted, the linear model part obtains a linear interaction result, the nonlinear model part obtains a nonlinear interaction result, the linear interaction result and the nonlinear interaction result are spliced to obtain a combined interaction result, the combined interaction result is determined to be an interaction result, and the step 2205 is carried out.
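The three interaction variants just described can be sketched as follows (hypothetical helper names; a single ReLU perceptron layer stands in here for the multilayer perceptron of the nonlinear case):

```python
def interact_linear(new_user, new_item):
    """Linear model: element-wise product of the two new vectors."""
    return [u * v for u, v in zip(new_user, new_item)]

def interact_nonlinear(new_user, new_item, mlp_weights, mlp_bias):
    """Nonlinear model: splice the two new vectors, then apply a
    perceptron layer with a ReLU activation."""
    x = new_user + new_item
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(mlp_weights, mlp_bias)]

def interact_combined(linear_result, nonlinear_result):
    """Combined model: splice the linear and nonlinear results."""
    return linear_result + nonlinear_result
```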
And 2205, combining the interaction result by adopting a second activation function to obtain interest prediction information.
Specifically, the interaction result is optimized, and prediction information is output.
Specifically, model parameters are continuously updated through a training data set, and finally the neural collaborative filtering model added with the attention mechanism is obtained.
Step 230, inputting the user information and the video information into the neural collaborative filtering model added with the attention mechanism.
And 240, outputting video recommendation information corresponding to the user information.
According to the technical solution of this embodiment, the training data set is fed into the constructed neural collaborative filtering model to generate feature vectors, weight factors are added to the feature vectors, the user and video sides interact when the new feature vectors are generated, and the model parameters are continuously updated to obtain the neural collaborative filtering model with an attention mechanism. User information and the corresponding video information are then input into the model to obtain video recommendation information of interest to the user. This addresses the insufficient consideration of feature importance and feature correlation in the prior art, recommends videos of interest to the user more accurately, and achieves a better recommendation effect.
EXAMPLE III
Fig. 4 is a flowchart of a video recommendation method according to a third embodiment of the present invention. On the basis of the above embodiments, the present embodiment further optimizes the video recommendation method.
This embodiment specifically describes the method of adding an attention mechanism when the neural collaborative filtering model is a linear model. Fig. 5 is a schematic diagram of a linear model without an attention mechanism, and fig. 6 is a schematic diagram of a linear model with an attention mechanism added.
As shown in fig. 4, the method for adding the attention mechanism to the linear model specifically includes:
and 310, constructing a training data set according to the behavior data of the user to the video.
And step 320, acquiring training user data and training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector.
Specifically, as shown in fig. 6, the training user data and training video data are denoted symbolically: training user data is recorded as userId and training video data as itemId. Vector mapping is performed so that each userId corresponds to one user vector (userEmbedding) and each itemId corresponds to one video vector (itemEmbedding); the vector dimensionality can be about 50.
Step 330, combining the user vector and the video vector by using a first activation function to generate a user weight vector and a video weight vector.
Specifically, as shown in fig. 6, the user weight vector and video weight vector are generated from the user vector and video vector: the user vector is connected to an output layer of the same dimension as the user vector, with softmax as the activation function, and the output serves as the user weight vector (userEmbeddingWeight); similarly, the video vector is connected to an output layer of the same dimension as the video vector, also with a softmax activation, and the output serves as the video weight vector (itemEmbeddingWeight).
And 340, splicing the user vector and the video weight vector to generate a new user vector, and splicing the video vector and the user weight vector to generate a new video vector.
Specifically, as shown in fig. 6, the user vector and the video weight vector are spliced, i.e., a Concatenate operation is performed, to obtain the new user vector (newUserEmbedding); the video vector and the user weight vector are spliced, again via a Concatenate operation, to obtain the new video vector (newItemEmbedding).
And step 350, multiplying the new user vector and the new video vector to obtain an interaction result.
Specifically, as shown in fig. 6, the new user vector and the new video vector are multiplied, i.e., a Multiply operation is performed, to obtain the linear interaction result, which is taken as the interaction result.
And step 360, combining the interaction result with a second activation function to obtain interest prediction information.
Specifically, as shown in fig. 6, the interaction result is optimized; the activation function can be a sigmoid function, yielding the user's interest prediction for each video.
And step 370, updating the model parameters to obtain the neural collaborative filtering model with the attention mechanism.
Specifically, the model is optimized by training on the training data, finally yielding the neural collaborative filtering model with the attention mechanism.
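Putting the steps of this embodiment together, the forward pass of the attention-augmented linear model of fig. 6 might look like the following sketch (hypothetical names; in practice the vectors and weights would be trainable parameters learned from the training data):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_linear(user_vec, item_vec, user_w, item_w, out_weights, out_bias):
    """Forward pass of the linear model with attention (fig. 6):
    Concatenate each feature vector with the other side's weight vector,
    Multiply the two results element-wise, then apply a sigmoid output
    layer to obtain the interest prediction."""
    new_user = user_vec + item_w  # newUserEmbedding
    new_item = item_vec + user_w  # newItemEmbedding
    interaction = [u * v for u, v in zip(new_user, new_item)]
    logit = sum(w * x for w, x in zip(out_weights, interaction)) + out_bias
    return sigmoid(logit)
```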
Step 380, inputting the user information and the video information into the neural collaborative filtering model added with the attention mechanism.
And step 390, outputting video recommendation information corresponding to the user information.
Optionally, the public data set is used for comparing the effects of the models.
In recommendation models, HR and NDCG reflect the quality of the model: the larger the values of HR and NDCG, the better the model performs.
In top-K recommendation, HR (Hit Ratio) is a commonly used recall-oriented metric, calculated as:
HR@K = (number of test-set items hit in the users' top-K lists) / (size of the test set).
Here top-K refers to the list of the first K videos recommended to each user; the numerator is the total number, over all users, of test-set items that appear in the user's top-K recommendation list, and the denominator is the size of the whole test set.
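Following this description, HR@K can be computed as in the sketch below; the leave-one-out style test data is purely illustrative:

```python
def hit_ratio_at_k(top_k_lists, test_items):
    """HR@K: fraction of held-out test items that appear in each user's
    top-K recommendation list.

    top_k_lists: dict user -> list of K recommended item ids
    test_items:  dict user -> that user's held-out test item
    """
    hits = sum(1 for u, item in test_items.items() if item in top_k_lists[u])
    return hits / len(test_items)  # denominator: size of the test set

# Illustrative data: 3 users, K = 3, user 'c' has no hit
top_k = {'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]}
test = {'a': 2, 'b': 6, 'c': 10}
print(hit_ratio_at_k(top_k, test))  # 2 hits out of 3 users
```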
NDCG refers to Normalized Discounted Cumulative Gain.
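For the leave-one-out setting commonly used with MovieLens, NDCG@K reduces to a simple position-discounted score for the single relevant item; this sketch assumes that evaluation protocol:

```python
import math

def ndcg_at_k(ranked_items, test_item, k):
    """NDCG@K with a single relevant item: contributes 1/log2(pos + 2)
    when the item appears at 0-based position pos in the top-K list;
    the ideal DCG (relevant item ranked first) is 1, so no extra
    normalization is needed."""
    for pos, item in enumerate(ranked_items[:k]):
        if item == test_item:
            return 1.0 / math.log2(pos + 2)
    return 0.0

print(ndcg_at_k([3, 7, 5], test_item=7, k=3))  # item at position 1 -> 1/log2(3)
```

Unlike HR, which only asks whether the item was hit, NDCG rewards placing the relevant item nearer the top of the list.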
In a specific example, tests were performed on the published test data set MovieLens 1M, with the results shown in the following table:
Table 3. Comparison of test results
Model type | HR | Improved? | NDCG | Improved?
Linear model | 0.7167 | - | 0.4365 | -
Linear model with attention mechanism | 0.7169 | Yes | 0.4408 | Yes
According to the technical scheme of this embodiment, the training data set is input into the constructed linear neural collaborative filtering model to generate feature vectors, weight factors are added to the feature vectors, and vector splicing is used to interact the feature vectors when generating the new feature vectors. After the model parameters are iteratively updated, the neural collaborative filtering model with the attention mechanism is obtained; inputting user information and the corresponding video information into the model yields video recommendation information of interest to the user. This solves the problem in the prior art that the importance of each feature and the relevance among features are insufficiently considered, recommends the video information of interest to the user more accurately, and achieves a better recommendation effect.
Example four
Fig. 7 is a flowchart of a video recommendation method according to a fourth embodiment of the present invention. On the basis of the above embodiments, the present embodiment further optimizes the video recommendation method.
This example specifically describes a method of adding attention mechanism when the neural collaborative filtering model is a nonlinear model. Fig. 8 is a schematic view of a non-linear model without an attention mechanism, and fig. 9 is a schematic view of a non-linear model with an attention mechanism.
As shown in fig. 7, the method for adding the attention mechanism to the nonlinear model specifically includes:
and step 410, constructing a training data set according to the behavior data of the user on the video.
And step 420, acquiring training user data and training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector.
Specifically, as shown in fig. 9, the training user data and the training video data are converted to identifiers: the training user data is recorded as userId and the training video data as itemId. Vector mapping is then performed on the training user data and the training video data, i.e., each userId corresponds to one user vector (userEmbedding vector) and each itemId corresponds to one video vector (itemEmbedding vector); the dimension of the vectors may be about 50.
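The vector-mapping step is essentially an embedding lookup table, sketched below; the table sizes and the random initialization stand in for learned parameters and are not specified by this disclosure:

```python
import numpy as np

rng = np.random.default_rng(3)
num_users, num_items, dim = 1000, 5000, 50  # dim "about 50"; counts illustrative

# Embedding tables: one row per userId / itemId
user_embedding = rng.normal(size=(num_users, dim))
item_embedding = rng.normal(size=(num_items, dim))

user_id, item_id = 42, 1234                 # identifier-encoded userId / itemId
user_vec = user_embedding[user_id]          # userEmbedding vector
item_vec = item_embedding[item_id]          # itemEmbedding vector
print(user_vec.shape, item_vec.shape)
```

In a trained model these tables are parameters updated by backpropagation; the lookup itself is just row indexing.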
And 430, generating a user weight vector and a video weight vector by combining the user vector and the video vector by adopting a first activation function.
Specifically, as shown in fig. 9, the user weight vector and the video weight vector are generated from the user vector and the video vector: the user vector is connected to an output layer with the same dimension as the user vector, the activation function may be a softmax function, and the output serves as the user weight vector (userEmbeddingWeight vector); similarly, the video vector is connected to an output layer with the same dimension as the video vector, the activation function may again be softmax, and the output serves as the video weight vector (itemEmbeddingWeight vector).
Step 440, multiplying the user vector by the video weight vector generates a new user vector, and multiplying the video vector by the user weight vector generates a new video vector.
Specifically, as shown in fig. 9, the user vector and the video weight vector are multiplied element-wise, i.e., a Multiply operation is performed, to obtain a new user vector (newUserEmbedding vector); likewise, the video vector and the user weight vector are multiplied to obtain a new video vector (newItemEmbedding vector).
And step 450, splicing the new user vector and the new video vector, and inputting the new user vector and the new video vector into a multilayer perceptron adopting a third activation function to obtain an interaction result.
Specifically, as shown in fig. 9, the new user vector and the new video vector are spliced, i.e., a Concatenate operation is performed. The result is then input into a Multilayer Perceptron layer, whose activation function may be a ReLU function, which outputs a nonlinear interaction result; this nonlinear interaction result is taken as the interaction result.
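A sketch of the nonlinear branch: the Multiply step followed by a small ReLU multilayer perceptron. The layer sizes (64 and 32) and all weights are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(4)
dim = 50
user_vec, item_vec = rng.normal(size=dim), rng.normal(size=dim)
user_weight = rng.dirichlet(np.ones(dim))  # userEmbeddingWeight stand-in
item_weight = rng.dirichlet(np.ones(dim))  # itemEmbeddingWeight stand-in

# Multiply: element-wise product with the *other* side's weight vector
new_user = user_vec * item_weight          # newUserEmbedding
new_item = item_vec * user_weight          # newItemEmbedding

# Concatenate, then feed a two-layer ReLU MLP
x = np.concatenate([new_user, new_item])   # 100-dim input
W1, b1 = rng.normal(size=(100, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 32)), np.zeros(32)
hidden = relu(x @ W1 + b1)
nonlinear_interaction = relu(hidden @ W2 + b2)
print(nonlinear_interaction.shape)
```

Note the contrast with the linear branch: here the weight vectors scale the embeddings element-wise before the MLP, rather than being spliced onto them.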
And 460, combining the interaction result with a second activation function to obtain interest prediction information.
Specifically, as shown in fig. 9, the interaction result is passed through an activation function, which may be a sigmoid function, to obtain the interest prediction information of the user for the video.
And 470, updating the model parameters to obtain the neural collaborative filtering model with the attention mechanism.
Specifically, the model is optimized through training of training data, and finally the neural collaborative filtering model with the attention mechanism is obtained.
Step 480, inputting the user information and the video information into the neural collaborative filtering model added with the attention mechanism.
And step 490, outputting video recommendation information corresponding to the user information.
Optionally, the public data set is used for comparing the effects of the models.
In a specific example, tests were performed on the published test data set MovieLens 1M, with the results shown in the following table:
Table 4. Comparison of test results
Model type | HR | Improved? | NDCG | Improved?
Non-linear model | 0.6846 | - | 0.4109 | -
Non-linear model with attention mechanism | 0.6924 | Yes | 0.4183 | Yes
According to the technical scheme of this embodiment, the training data set is input into the constructed nonlinear neural collaborative filtering model to generate feature vectors, weight factors are added to the feature vectors, and the feature vectors are interacted. After the model parameters are iteratively updated, the neural collaborative filtering model with the attention mechanism is obtained; inputting the user information and the corresponding video information into the model yields video recommendation information of interest to the user. This solves the problem in the prior art that the importance of each feature and the relevance among features are insufficiently considered, recommends the video information of interest to the user more accurately, and achieves a better recommendation effect.
EXAMPLE five
Fig. 10 is a flowchart of a video recommendation method according to a fifth embodiment of the present invention. On the basis of the above embodiments, the present embodiment further optimizes the video recommendation method.
This embodiment specifically describes a method of adding an attention mechanism when the neural collaborative filtering model is a combined model. Fig. 11 is a schematic view of a combination model without an attention mechanism, and fig. 12 is a schematic view of a combination model with an attention mechanism.
As shown in fig. 10, the method for adding the attention mechanism to the combined model specifically includes:
and step 510, constructing a training data set according to the behavior data of the user on the video.
Step 520, obtaining training user data and training video data, and performing vector mapping on the training user data and the training video data to generate a first user vector, a second user vector, a first video vector and a second video vector.
Specifically, as shown in fig. 12, the training user data and the training video data are converted to identifiers: the training user data is recorded as userId and the training video data as itemId. Vector mapping is performed on the training user data and the training video data, i.e., each userId corresponds to 2 user vectors, a first user vector (userEmbedding1 vector) and a second user vector (userEmbedding2 vector); each itemId corresponds to 2 video vectors, a first video vector (itemEmbedding1 vector) and a second video vector (itemEmbedding2 vector); the dimension of the vectors may be about 50.
Step 530, combining the first user vector, the second user vector, the first video vector and the second video vector by using a first activation function to generate a first user weight vector, a second user weight vector, a first video weight vector and a second video weight vector.
Specifically, as shown in fig. 12, user weight vectors and video weight vectors are generated from the user vectors and video vectors: the first user vector is connected to an output layer with the same dimension as the user vector, the activation function may be a softmax function, and the output serves as the first user weight vector (userEmbeddingWeight1); similarly, the second user vector is connected to a same-dimension output layer to obtain the second user weight vector (userEmbeddingWeight2). Likewise, the first video vector is connected to a same-dimension output layer to obtain the first video weight vector (itemEmbeddingWeight1), and the second video vector to obtain the second video weight vector (itemEmbeddingWeight2); in each case the activation function may be softmax.
Step 540, the first user vector and the first video weight vector are spliced to generate a new first user vector, and the first video vector and the first user weight vector are spliced to generate a new first video vector; and multiplying the second user vector by the second video weight vector to generate a new second user vector, and multiplying the second video vector by the second user weight vector to generate a new second video vector.
Specifically, as shown in fig. 12, the first user vector and the first video weight vector are spliced, i.e., a Concatenate operation is performed, to obtain a new first user vector (newUserEmbedding1 vector); likewise, the first video vector and the first user weight vector are spliced to obtain a new first video vector (newItemEmbedding1 vector).
The second user vector and the second video weight vector are multiplied element-wise, i.e., a Multiply operation is performed, to obtain a new second user vector (newUserEmbedding2 vector); likewise, the second video vector and the second user weight vector are multiplied to obtain a new second video vector (newItemEmbedding2 vector).
Step 550, multiplying the new first user vector and the new first video vector to obtain a linear interaction result; and splicing the new second user vector and the new second video vector, inputting the new second user vector and the new second video vector into a multilayer perceptron adopting a third activation function to obtain a nonlinear interaction result, and splicing the linear interaction result and the nonlinear interaction result to obtain a combined interaction result.
Specifically, as shown in fig. 12, the new first user vector and the new first video vector are multiplied element-wise, i.e., a Multiply operation is performed, to obtain a linear interaction result. The new second user vector and the new second video vector are spliced, i.e., a Concatenate operation is performed, and then input into a Multilayer Perceptron layer whose activation function may be a ReLU function, which outputs a nonlinear interaction result. Finally, the linear interaction result and the nonlinear interaction result are spliced, i.e., a Concatenate operation is performed, to obtain a combined interaction result.
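The fusion of the two branches can be sketched as concatenating the linear and nonlinear interaction results before the sigmoid output layer; all weights, layer sizes, and random draws below are illustrative assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
dim = 50
# Linear branch: splice with weight vectors, then element-wise multiply
u1, i1 = rng.normal(size=dim), rng.normal(size=dim)
uw1, iw1 = rng.dirichlet(np.ones(dim)), rng.dirichlet(np.ones(dim))
linear_out = np.concatenate([u1, iw1]) * np.concatenate([i1, uw1])  # 100-dim

# Nonlinear branch: multiply with weight vectors, splice, ReLU MLP
u2, i2 = rng.normal(size=dim), rng.normal(size=dim)
uw2, iw2 = rng.dirichlet(np.ones(dim)), rng.dirichlet(np.ones(dim))
x = np.concatenate([u2 * iw2, i2 * uw2])                            # 100-dim
W1 = rng.normal(size=(100, 32))
nonlinear_out = relu(x @ W1)                                        # 32-dim

# Combined interaction result: concatenate both branches
combined = np.concatenate([linear_out, nonlinear_out])              # 132-dim
w_out = rng.normal(size=combined.size)
p = float(sigmoid(combined @ w_out))  # interest prediction
print(combined.shape, 0.0 < p < 1.0)
```

Note that the two branches use separate embedding tables (userEmbedding1/2, itemEmbedding1/2), so the linear and nonlinear parts learn independent representations before being fused.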
And step 560, combining the interaction result by adopting a second activation function to obtain interest prediction information.
And 570, updating the model parameters to obtain the neural collaborative filtering model added with the attention mechanism.
Step 580, inputting the user information and the video information into the neural collaborative filtering model with the attention mechanism added.
And step 590, outputting video recommendation information corresponding to the user information.
Optionally, the public data set is used for comparing the effects of the models.
In a specific example, tests were performed on the published test data set MovieLens 1M, with the results shown in the following table:
Table 5. Comparison of test results
Model type | HR | Improved? | NDCG | Improved?
Combined model | 0.7079 | - | 0.4307 | -
Combined model with attention mechanism | 0.7118 | Yes | 0.4351 | Yes
According to the technical scheme of this embodiment, the training data set is input into the constructed combined neural collaborative filtering model to generate feature vectors, weight factors are added to the feature vectors, and in the linear model part the feature vectors are interacted by vector splicing when generating the new feature vectors. After the model parameters are iteratively updated, the neural collaborative filtering model with the attention mechanism is obtained; inputting the user information and the corresponding video information into the model yields video recommendation information of interest to the user. This solves the problem in the prior art that the importance of each feature and the relevance among features are insufficiently considered, recommends the video information of interest to the user more accurately, and achieves a better recommendation effect.
EXAMPLE six
The video recommendation device provided in this embodiment of the present invention can execute the video recommendation method provided in any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method. Fig. 13 is a structural block diagram of the video recommendation device provided in the sixth embodiment of the present invention; as shown in fig. 13, the device includes an input module 610 and an output module 620.
The input module 610 is configured to input the user information and the video information into the neural collaborative filtering model added with the attention mechanism, where the neural collaborative filtering model added with the attention mechanism is obtained according to the training user data and the training video data.
And the output module 620 outputs video recommendation information corresponding to the user information.
In this scheme, an attention mechanism is added to the neural collaborative filtering model: weight factors are added to the user vector and the video vector, and the recommendation model is reconstructed accordingly. The interaction behavior between the user and videos is fully considered when generating the new user vector and the new video vector, which increases the relevance between the feature vectors. This solves the problem in the prior art that the importance of each feature and the relevance among features are insufficiently considered, recommends the video information of interest to the user more accurately, and achieves a better recommendation effect.
Optionally, the apparatus further comprises a model building module 630.
The model building module 630 is configured to add an attention mechanism to the neural collaborative filtering model, where the neural collaborative filtering model is a linear model, a nonlinear model, or a combined model, and the combined model is formed by combining the linear model and the nonlinear model.
Optionally, the model building module 630 includes:
the first vector generation unit is used for acquiring training user data and training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector;
the second vector generation unit is used for generating a user weight vector and a video weight vector by combining the user vector and the video vector by adopting a first activation function;
the third vector generating unit is used for generating a new user vector according to the user vector and the video weight vector and generating a new video vector according to the video vector and the user weight vector;
the interaction unit is used for interacting the new user vector with the new video vector to obtain an interaction result, wherein the interaction result is a linear interaction result, a nonlinear interaction result or a combined interaction result;
and the prediction unit is used for obtaining interest prediction information by adopting a second activation function and combining the interaction result.
Optionally, the third vector generating unit is specifically configured to:
if the neural collaborative filtering model is a linear model, splicing the user vector and the video weight vector to generate a new user vector, and splicing the video vector and the user weight vector to generate a new video vector;
if the neural collaborative filtering model is a nonlinear model, the product of the user vector and the video weight vector generates a new user vector, and the product of the video vector and the user weight vector generates a new video vector.
Optionally, interacting the new user vector and the new video vector, including:
if the neural collaborative filtering model is a linear model, multiplying the new user vector and the new video vector to obtain a linear interaction result;
and if the neural collaborative filtering model is a nonlinear model, splicing the new user vector and the new video vector, and inputting the new user vector and the new video vector into a multilayer perceptron adopting a third activation function to obtain a nonlinear interaction result.
Optionally, if the neural collaborative filtering model is a combined model, after the new user vector and the new video vector are interacted, the method further includes:
and splicing the linear interaction result and the nonlinear interaction result to obtain a combined interaction result.
According to the technical scheme of this embodiment, an attention mechanism is added to the neural collaborative filtering model: a weighting operation is performed on the feature vectors corresponding to the user data and the video data to generate weight vectors, the user feature vector is interacted with the video weight vector, and the video feature vector is interacted with the user weight vector. After the model parameters are iteratively updated, the accuracy of the video recommendation information output by the model is improved, reflecting the effect of the attention mechanism on capturing the importance of different features and the relevance among features. This solves the problem in the prior art that the importance of each feature and the relevance among features are insufficiently considered, recommends the video information of interest to the user more accurately, and achieves a better recommendation effect.
EXAMPLE seven
Fig. 14 is a schematic structural diagram of a computer apparatus according to a seventh embodiment of the present invention, as shown in fig. 14, the apparatus includes a processor 710, a memory 720, an input device 730, and an output device 740; the number of processors 710 in the device may be one or more, and one processor 710 is taken as an example in fig. 14; the processor 710, the memory 720, the input device 730, and the output device 740 of the apparatus may be connected by a bus or other means, and fig. 14 illustrates an example of a connection by a bus.
The memory 720, which is a computer-readable storage medium, can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the video recommendation method in the embodiment of the present invention (e.g., the input module 610 and the output module 620 in the video recommendation apparatus). The processor 710 executes various functional applications of the device/terminal/server and data processing by executing software programs, instructions and modules stored in the memory 720, namely, implements the video recommendation method described above.
The memory 720 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 720 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 720 may further include memory located remotely from the processor 710, which may be connected to the device/terminal/server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the device/terminal/server. The output device 740 may include a display device such as a display screen.
Example eight
An eighth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a video recommendation method, including:
inputting user information and video information into a neural collaborative filtering model added with an attention mechanism, wherein the neural collaborative filtering model added with the attention mechanism is obtained according to training user data and training video data;
and outputting video recommendation information corresponding to the user information.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the video recommendation method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the above video recommendation apparatus, each included unit and module is merely divided according to functional logic, but the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A method for video recommendation, comprising:
inputting user information and video information into a neural collaborative filtering model added with an attention mechanism, wherein the neural collaborative filtering model added with the attention mechanism is obtained according to training user data and training video data;
and outputting video recommendation information corresponding to the user information.
2. The video recommendation method of claim 1, wherein before inputting the user information and the video information into the neural collaborative filtering model with attention mechanism added, further comprising:
and adding an attention mechanism into the neural collaborative filtering model, wherein the neural collaborative filtering model is a linear model, a nonlinear model or a combined model, and the combined model is formed by combining the linear model and the nonlinear model.
3. The video recommendation method according to claim 2, wherein said adding an attention mechanism to said neural collaborative filtering model comprises:
acquiring the training user data and the training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector;
combining the user vector and the video vector by adopting a first activation function to generate a user weight vector and a video weight vector;
generating a new user vector according to the user vector and the video weight vector, and generating a new video vector according to the video vector and the user weight vector;
interacting the new user vector and the new video vector to determine an interaction result, wherein the interaction result is a linear interaction result, a nonlinear interaction result or a combined interaction result;
and combining the interaction result by adopting a second activation function to obtain interest prediction information.
4. The video recommendation method according to claim 3, wherein said generating a new user vector based on said user vector and said video weight vector, and generating a new video vector based on said video vector and said user weight vector comprises:
if the neural collaborative filtering model is the linear model, splicing the user vector and the video weight vector to generate the new user vector, and splicing the video vector and the user weight vector to generate the new video vector;
and if the neural collaborative filtering model is the nonlinear model, multiplying the user vector by the video weight vector to generate the new user vector, and multiplying the video vector by the user weight vector to generate the new video vector.
5. The video recommendation method of claim 3, wherein interacting said new user vector with said new video vector comprises,
if the neural collaborative filtering model is the linear model, multiplying the new user vector and the new video vector to obtain the linear interaction result;
and if the neural collaborative filtering model is the nonlinear model, splicing the new user vector and the new video vector, and inputting the new user vector and the new video vector into a multilayer perceptron adopting a third activation function to obtain the nonlinear interaction result.
6. The video recommendation method of claim 3, wherein if said neural collaborative filtering model is said combined model, after interacting said new user vector with said new video vector, further comprising,
and splicing the linear interaction result and the nonlinear interaction result to obtain the combined interaction result.
7. A video recommendation apparatus, comprising:
the input module is used for inputting the user information and the video information into the neural collaborative filtering model added with the attention mechanism, and the neural collaborative filtering model added with the attention mechanism is obtained according to training user data and training video data;
and the output module outputs video recommendation information corresponding to the user information.
8. The video recommendation device of claim 7, wherein said device further comprises,
and the model construction module is used for adding an attention mechanism to the neural collaborative filtering model, where the neural collaborative filtering model is a linear model, a nonlinear model, or a combined model, and the combined model is formed by combining the linear model and the nonlinear model.
The model construction module comprises:
the first vector generation unit is used for acquiring the training user data and the training video data, and performing vector mapping on the training user data and the training video data to generate a user vector and a video vector;
a second vector generation unit, configured to generate a user weight vector and a video weight vector by using a first activation function in combination with the user vector and the video vector;
a third vector generating unit, configured to generate a new user vector according to the user vector and the video weight vector, and generate a new video vector according to the video vector and the user weight vector;
the interaction unit is used for interacting the new user vector with the new video vector to obtain an interaction result, wherein the interaction result is a linear interaction result, a nonlinear interaction result or a combined interaction result;
and the prediction unit is used for combining the interaction result by adopting a second activation function to obtain interest prediction information.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the video recommendation method of any one of claims 1-6 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a video recommendation method according to any one of claims 1-6.
CN201911067087.3A 2019-11-04 2019-11-04 Video recommendation method, device, equipment and storage medium Pending CN110837577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911067087.3A CN110837577A (en) 2019-11-04 2019-11-04 Video recommendation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911067087.3A CN110837577A (en) 2019-11-04 2019-11-04 Video recommendation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110837577A true CN110837577A (en) 2020-02-25

Family

ID=69576090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911067087.3A Pending CN110837577A (en) 2019-11-04 2019-11-04 Video recommendation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110837577A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491469A (en) * 2018-03-07 2018-09-04 浙江大学 Introduce the neural collaborative filtering conceptual description word proposed algorithm of concepts tab
CN109299396A (en) * 2018-11-28 2019-02-01 东北师范大学 Merge the convolutional neural networks collaborative filtering recommending method and system of attention model
CN109670121A (en) * 2018-12-18 2019-04-23 辽宁工程技术大学 Project level and feature level depth Collaborative Filtering Recommendation Algorithm based on attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xie Enning: "Deep collaborative filtering model based on attention mechanism" *
Guo Xu: "Deep neural network recommendation model based on user vectorized representation and attention mechanism" *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461898A (en) * 2020-02-28 2020-07-28 上海商汤智能科技有限公司 Method for obtaining underwriting result and related device
CN111461896A (en) * 2020-02-28 2020-07-28 上海商汤智能科技有限公司 Method for obtaining underwriting result and related device
CN111461897A (en) * 2020-02-28 2020-07-28 上海商汤智能科技有限公司 Method for obtaining underwriting result and related device
CN111445282A (en) * 2020-03-20 2020-07-24 支付宝(杭州)信息技术有限公司 Service processing method, device and equipment based on user behaviors
CN111445282B (en) * 2020-03-20 2023-02-10 支付宝(杭州)信息技术有限公司 Service processing method, device and equipment based on user behaviors
CN112084407A (en) * 2020-09-08 2020-12-15 辽宁工程技术大学 Collaborative filtering recommendation method fusing graph neural network and attention mechanism
CN112084407B (en) * 2020-09-08 2024-03-12 辽宁工程技术大学 Collaborative filtering recommendation method integrating graph neural network and attention mechanism
CN112559901A (en) * 2020-12-11 2021-03-26 百度在线网络技术(北京)有限公司 Resource recommendation method and device, electronic equipment, storage medium and computer program product
CN112559901B (en) * 2020-12-11 2022-02-08 百度在线网络技术(北京)有限公司 Resource recommendation method and device, electronic equipment, storage medium and computer program product
CN112541846A (en) * 2020-12-22 2021-03-23 山东师范大学 College course selection and repair mixed recommendation method and system based on attention mechanism
CN112541846B (en) * 2020-12-22 2022-11-29 山东师范大学 College course selection and repair mixed recommendation method and system based on attention mechanism

Similar Documents

Publication Publication Date Title
CN110837577A (en) Video recommendation method, device, equipment and storage medium
Nagatani et al. Augmenting knowledge tracing by considering forgetting behavior
CN111241311B (en) Media information recommendation method and device, electronic equipment and storage medium
CN110990624B (en) Video recommendation method, device, equipment and storage medium
US20120311030A1 (en) Inferring User Interests Using Social Network Correlation and Attribute Correlation
CN111506820B (en) Recommendation model, recommendation method, recommendation device, recommendation equipment and recommendation storage medium
WO2021155691A1 (en) User portrait generating method and apparatus, storage medium, and device
CN110909258B (en) Information recommendation method, device, equipment and storage medium
CN110781396A (en) Information recommendation method, device, equipment and storage medium
CN116186326A (en) Video recommendation method, model training method, electronic device and storage medium
CN111368195B (en) Model training method, device, equipment and storage medium
CN112381236A (en) Data processing method, device, equipment and storage medium for federal transfer learning
CN110827078B (en) Information recommendation method, device, equipment and storage medium
CN113836388A (en) Information recommendation method and device, server and storage medium
CN111814044A (en) Recommendation method and device, terminal equipment and storage medium
CN114580794B (en) Data processing method, apparatus, program product, computer device and medium
CN111368192A (en) Information recommendation method, device, equipment and storage medium
Dos Santos et al. Clustering learning objects for improving their recommendation via collaborative filtering algorithms
CN114398973B (en) Media content tag identification method, device, equipment and storage medium
CN111507471B (en) Model training method, device, equipment and storage medium
CN115017362A (en) Data processing method, electronic device and storage medium
Wang et al. Learning List-wise Representation in Reinforcement Learning for Ads Allocation with Multiple Auxiliary Tasks
Do et al. A context-aware recommendation framework in e-learning environment
CN112418442A (en) Data processing method, device, equipment and storage medium for federal transfer learning
CN111241318B (en) Method, device, equipment and storage medium for selecting object to push cover picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination