CN114722238B - Video recommendation method and device, electronic equipment, storage medium and program product - Google Patents

Video recommendation method and device, electronic equipment, storage medium and program product

Info

Publication number
CN114722238B
Authority
CN
China
Prior art keywords
video
sample
user
target user
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210517169.9A
Other languages
Chinese (zh)
Other versions
CN114722238A (en)
Inventor
骆明楠
廖一桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210517169.9A
Publication of CN114722238A
Application granted
Publication of CN114722238B
Legal status: Active
Anticipated expiration

Classifications

    • G06F16/735: Filtering based on additional data, e.g. user or group profiles
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867: Retrieval characterised by using metadata, using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a video recommendation method, apparatus, electronic device, storage medium, and program product. The method comprises: acquiring video vectors of a plurality of candidate videos and a user vector of a target user; acquiring, according to a plurality of historical videos watched by the target user, a history sequence of the target user composed of the video vectors of the plurality of historical videos; inputting the user vector and the history sequence of the target user into a first neural network to obtain a personalized sequence of the target user, where the personalized sequence comprises the ratio of the target user's interest in each historical video to the target user's total interest in the plurality of historical videos; and inputting the video vectors of the candidate videos and the personalized sequence of the target user into a video recommendation system to obtain a recommended video of the target user determined from the candidate videos. The personalized sequence reflects the user's interest distribution over the history sequence, so videos that better match the user's interest distribution can be recommended based on it.

Description

Video recommendation method and device, electronic equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of video recommendation technologies, and in particular, to a video recommendation method and apparatus, an electronic device, a storage medium, and a program product.
Background
The video recommendation system predicts a user's interest in and preference for candidate videos according to the characteristics of the candidate videos, the characteristics of the historical videos watched by the user, and the like, and recommends videos to the user accordingly. In the related art, when the video recommendation system uses the characteristics of the historical videos watched by the user, the characteristics of the plurality of historical videos are generally summed with the same weight, and the summed video characteristics are taken to represent the user's interests and preferences.
However, users differ in their video-watching habits. For example, user A likes to repeatedly watch videos of a similar type, while user B dislikes repeatedly watching videos of a similar type. Therefore, user A may prefer to watch videos similar to the top videos in the viewing history, while user B may have lower interest in videos similar to the top videos in the viewing history. Summing the characteristics of the historical videos with the same weight ignores this distribution of user interest, so the accuracy of video recommendation for the user still needs to be improved.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a video recommendation method, apparatus, electronic device, storage medium, and program product. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video recommendation method, including:
acquiring video vectors of a plurality of candidate videos and a user vector of a target user;
acquiring a history sequence of the target user, which is composed of video vectors of a plurality of history videos, according to the plurality of history videos watched by the target user;
inputting the user vector and the historical sequence of the target user into a first neural network to obtain a personalized sequence of the target user, wherein the personalized sequence of the target user comprises the ratio of the target user's interest in each historical video to the target user's total interest in the plurality of historical videos;
and inputting the video vectors of the candidate videos and the personalized sequence of the target user into a video recommendation system to obtain a recommended video of the target user determined from the candidate videos.
Optionally, the inputting the user vector and the history sequence of the target user into a first neural network to obtain a personalized sequence of the target user includes:
inputting the user vector and the historical sequence of the target user into the first neural network to obtain the ratio of the target user's interest in each historical video to the target user's total interest in the plurality of historical videos;
obtaining a fusion coefficient of the history sequence of the target user according to the respective corresponding ratios of the plurality of history videos;
and generating a personalized sequence of the target user according to the fusion coefficient and the historical sequence of the target user.
Optionally, the training step of the first neural network includes:
obtaining sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of first sample users into an initial first neural network to obtain respective sample personalized sequences of the plurality of first sample users;
acquiring a video vector of a first sample video watched by each first sample user and a label indicating whether each of a plurality of operations is performed on the first sample video by the first sample user;
inputting the video vector of the first sample video watched by each first sample user and the sample personalized sequence of the first sample user into the video recommendation system to obtain a first prediction probability that each of the plurality of operations is performed on the first sample video by the first sample user;
and carrying out supervised training on the initial first neural network based on the first prediction probability and the label of each first sample video to obtain the first neural network.
Optionally, the inputting the video vectors of the multiple candidate videos and the personalized sequence of the target user into a video recommendation system to obtain a recommended video of the target user determined from the multiple candidate videos includes:
inputting the video vectors of the candidate videos and the personalized sequence of the target user into the video recommendation system to obtain a first probability that the target user executes various operations on each of the candidate videos;
calculating a first recommendation score of each of the candidate videos according to a first probability of each of the candidate videos;
and determining a recommended video of the target user from the plurality of candidate videos according to the first recommendation scores of the candidate videos.
Optionally, after the obtaining, according to a plurality of historical videos watched by the target user, a historical sequence of the target user, which is composed of video vectors of the plurality of historical videos, the method further includes:
inputting the user vector and the historical sequence of the target user into a second neural network to obtain the attention sequence of the target user;
inputting the video vectors of the candidate videos and the personalized sequence of the target user into a video recommendation system to obtain a recommended video of the target user determined from the candidate videos, wherein the method comprises the following steps:
and inputting the video vectors of the candidate videos, the personalized sequence and the attention sequence of the target user into the video recommendation system to obtain the recommended video of the target user determined from the candidate videos.
Optionally, the inputting the user vector and the history sequence of the target user into a second neural network to obtain the attention sequence of the target user includes:
inputting the user vector and the historical sequence of the target user into the second neural network, and projecting the user vector of the target user into a projection vector with the same dimension as the video vector of the historical video;
calculating the similarity between the video vector of each historical video in the historical sequence of the target user and the projection vector to obtain the attention coefficient of the historical sequence of the target user;
and generating the attention sequence of the target user according to the attention coefficient and the history sequence of the target user.
Optionally, the training step of the second neural network comprises:
obtaining sample user vectors of a plurality of second sample users and respective sample history sequences of the plurality of second sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into an initial second neural network to obtain respective sample attention sequences of the plurality of second sample users;
acquiring a video vector of a second sample video watched by each second sample user and a label indicating whether each of a plurality of operations is performed on the second sample video by the second sample user;
inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability that each of the plurality of operations is performed on the second sample video by the second sample user;
and carrying out supervised training on the initial second neural network based on the second prediction probability and the label of each second sample video to obtain the second neural network.
Optionally, the inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability that each of the plurality of operations is performed on the second sample video by the second sample user includes:
obtaining a sample personalized sequence of each second sample user;
and inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user.
Optionally, the inputting the video vectors of the plurality of candidate videos and the personalized sequence and the attention sequence of the target user into the video recommendation system to obtain the recommended video of the target user determined from the plurality of candidate videos includes:
inputting the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommendation system to obtain a second probability that the candidate videos are subjected to multiple operations by the target user;
calculating a second recommendation score of each of the candidate videos according to a second probability of each of the candidate videos;
and determining the recommended video of the target user from the candidate videos according to the second recommended scores of the candidate videos.
According to a second aspect of the embodiments of the present disclosure, there is provided a video recommendation apparatus including:
a vector acquisition module configured to acquire video vectors of a plurality of candidate videos and a user vector of a target user;
the sequence acquisition module is configured to acquire a historical sequence of the target user, which is composed of video vectors of a plurality of historical videos, according to the plurality of historical videos watched by the target user;
a personalized sequence acquisition module configured to input the user vector and the historical sequence of the target user into a first neural network to obtain a personalized sequence of the target user, where the personalized sequence of the target user includes a ratio of an interest of the target user in each historical video to a total interest of the target user in the plurality of historical videos;
and the video recommending module is configured to input the video vectors of the candidate videos and the personalized sequence of the target user into a video recommending system to obtain a recommended video of the target user determined from the candidate videos.
Optionally, the personalized sequence acquisition module includes:
a ratio obtaining unit, configured to input the user vector and the history sequence of the target user into the first neural network, to obtain a ratio of an interest of the target user in each history video to a total interest of the target user in the plurality of history videos;
a fusion coefficient determining unit configured to obtain a fusion coefficient of the history sequence of the target user according to respective corresponding ratios of the plurality of history videos;
and the personalized sequence acquisition unit is configured to generate a personalized sequence of the target user according to the fusion coefficient and the historical sequence of the target user.
Optionally, the training step of the first neural network includes:
obtaining sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of first sample users into an initial first neural network to obtain respective sample personalized sequences of the plurality of first sample users;
acquiring a video vector of a first sample video watched by each first sample user and a label indicating whether each of a plurality of operations is performed on the first sample video by the first sample user;
inputting the video vector of the first sample video watched by each first sample user and the sample personalized sequence of the first sample user into the video recommendation system to obtain a first prediction probability that each of the plurality of operations is performed on the first sample video by the first sample user;
and carrying out supervised training on the initial first neural network based on the first prediction probability and the label of each first sample video to obtain the first neural network.
Optionally, the video recommendation module includes:
a first probability obtaining unit configured to input the video vectors of the plurality of candidate videos and the personalized sequence of the target user into the video recommendation system to obtain first probabilities that the target user performs each of a plurality of operations on each of the plurality of candidate videos;
a first score calculating unit configured to calculate a first recommendation score for each of the plurality of candidate videos according to a first probability for each of the plurality of candidate videos;
a first video recommending unit configured to determine a recommended video of the target user from the plurality of candidate videos according to the respective first recommendation scores of the plurality of candidate videos.
Optionally, the apparatus further includes:
an attention sequence acquisition module configured to input the user vector and the history sequence of the target user into a second neural network to obtain an attention sequence of the target user;
the video recommendation module comprises:
and the video recommending unit is configured to input the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommending system to obtain the recommended video of the target user determined from the candidate videos.
Optionally, the attention sequence acquisition module comprises:
a vector projection unit configured to input the user vector of the target user and a history sequence into the second neural network, and project the user vector of the target user into a projection vector of the same dimension as a video vector of the history video;
the similarity calculation unit is configured to calculate the similarity between the video vector of each historical video in the historical sequence of the target user and the projection vector, and obtain the attention coefficient of the historical sequence of the target user;
an attention sequence generating unit configured to generate an attention sequence of the target user according to the attention coefficient and a history sequence of the target user.
Optionally, the training step of the second neural network comprises:
obtaining sample user vectors of a plurality of second sample users and respective sample history sequences of the plurality of second sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into an initial second neural network to obtain respective sample attention sequences of the plurality of second sample users;
acquiring a video vector of a second sample video watched by each second sample user and a label indicating whether each of a plurality of operations is performed on the second sample video by the second sample user;
inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability that each of the plurality of operations is performed on the second sample video by the second sample user;
and performing supervised training on the initial second neural network based on the second prediction probability and the label of each second sample video to obtain the second neural network.
Optionally, the inputting, into the video recommendation system, the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user to obtain a second prediction probability that each of the plurality of operations is performed on the second sample video by the second sample user includes:
obtaining a sample personalized sequence of each second sample user;
and inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user.
Optionally, the video recommendation module includes:
a second probability obtaining unit, configured to input the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommendation system, so as to obtain a second probability that the candidate videos are subjected to multiple operations by the target user;
a second score calculation unit configured to calculate a second recommendation score for each of the plurality of candidate videos according to a second probability for each of the plurality of candidate videos;
a second video recommending unit configured to determine a recommended video of the target user from the plurality of candidate videos according to the respective second recommendation scores of the plurality of candidate videos.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the video recommendation method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the video recommendation method according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the video recommendation method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the disclosure, the user vector records various behaviors of the user, and the interest degree of the user in each history video in the history sequence can be extracted from the user vector, so that the first neural network can generate the personalized sequence of the target user according to the user vector and the history sequence of the target user, the personalized sequence of the target user includes a ratio of the interest of the target user in each history video to the total interest of the target user in the plurality of history videos, and the personalized sequence of the target user can reflect the interest distribution of the target user in each history video in the history sequence. Furthermore, the video recommendation system carries out video recommendation on the target user according to the personalized sequence of the target user, and the recommended video is more in line with the interest distribution of the user and has higher accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating the steps of a video recommendation method in accordance with an exemplary embodiment;
FIG. 2 is a schematic flow diagram illustrating video recommendation based on two personalized sequences, according to an example embodiment;
FIG. 3 is a schematic flow diagram illustrating a video recommendation system assisting a first neural network in training according to an example embodiment;
FIG. 4 is a schematic flow diagram illustrating video recommendation based on personalized sequences and attention sequences in accordance with an exemplary embodiment;
FIG. 5 is a flow diagram illustrating a video recommendation system assisting a second neural network in training in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating a video recommendation device in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an apparatus for video recommendation in accordance with an exemplary embodiment;
fig. 8 is a block diagram illustrating an apparatus for video recommendation in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
One, personalized sequence
FIG. 1 is a flowchart illustrating steps of a video recommendation method according to an exemplary embodiment. The video recommendation method may be used in an electronic device such as a computer, a mobile phone, or a tablet computer. As shown in FIG. 1, the method includes the following steps:
in step S11, video vectors of a plurality of candidate videos and a user vector of a target user are acquired;
in step S12, acquiring a history sequence of the target user, which is composed of video vectors of a plurality of history videos, according to the plurality of history videos watched by the target user;
in step S13, inputting the user vector and the history sequence of the target user into a first neural network, to obtain a personalized sequence of the target user, where the personalized sequence of the target user includes a ratio of an interest of the target user in each history video to a total interest of the target user in the plurality of history videos;
in step S14, the video vectors of the candidate videos and the personalized sequence of the target user are input into a video recommendation system, so as to obtain a recommended video of the target user determined from the candidate videos.
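For concreteness, the flow of steps S11 to S14 can be sketched as follows. This is a minimal sketch in Python/PyTorch; the callables first_net and recommender, the tensor dimensions, and the top-N selection are illustrative assumptions, not part of the claims:

```python
import torch

def recommend_videos(candidate_vecs,   # [M, d]  S11: video vectors of M candidate videos
                     user_vec,         # [u]     S11: user vector of the target user
                     history_vecs,     # [N, d]  S12: history sequence of N watched videos, in order
                     first_net,        # S13: first neural network producing the personalized sequence
                     recommender,      # S14: video recommendation system scoring the candidates
                     top_n=3):
    personalized_seq = first_net(user_vec, history_vecs)     # S13: re-weighted history sequence [N, d]
    scores = recommender(candidate_vecs, personalized_seq)   # S14: one score per candidate video [M]
    return torch.topk(scores, k=top_n).indices               # indices of the recommended videos
```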
In order to facilitate processing by the video recommendation system, videos, users, and the like can be vectorized to obtain the video vector of a video and the user vector of a user. The vectorization method can refer to the related art and is not limited by the present disclosure.
The video vector can be generated according to various characteristics of the video, wherein the video characteristics comprise various characteristics such as video length, video author, publishing time, video type and the like. The candidate video may be the video left over from the coarse filtering. The plurality of historical videos viewed by the user may be a plurality of historical videos recently viewed by the user.
The history sequence of the user can be a sequence formed from the video vectors of a plurality of historical videos watched by the user, ordered by viewing time. A plurality of different history sequences may be generated according to different operations of the user, or a single history sequence recording, for each video, the operations performed on it may be generated.
For example, for 30 historical videos, a praise sequence and a forwarding sequence may be generated. The praise sequence includes the video vectors of the 30 historical videos, where the video vectors of the praised videos are marked "1" and the video vectors of the non-praised videos are marked "0" or left unmarked. The forwarding sequence likewise includes the video vectors of the 30 historical videos, where the video vectors of the forwarded videos are marked "1" and the video vectors of the non-forwarded videos are marked "0" or left unmarked; other numbers or other marks may also be used to distinguish them.
For another example, for 30 historical videos, only one sequence may be generated. The sequence includes the video vectors of the 30 historical videos together with information on whether each video was praised, forwarded, and so on; for instance, the video vector of a praised video is marked "1", that of a non-praised video "2", that of a forwarded video "3", and that of a non-forwarded video "4".
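In code, the two encodings described above might look as follows. This is a sketch only; the dictionary field names are hypothetical, and the 1/2/3/4 codes follow the example in the text:

```python
def build_operation_sequence(history, operation):
    """Separate sequence per operation: history is a list of dicts such as
    {"vector": [...], "praised": True, "forwarded": False}; marks[i] is 1 if the
    operation was performed on the i-th historical video, else 0."""
    vectors = [item["vector"] for item in history]
    marks = [1 if item.get(operation, False) else 0 for item in history]
    return vectors, marks

def build_combined_sequence(history):
    """Single sequence covering several operations per video:
    1 = praised, 2 = not praised, 3 = forwarded, 4 = not forwarded."""
    combined = []
    for item in history:
        marks = [1 if item.get("praised", False) else 2,
                 3 if item.get("forwarded", False) else 4]
        combined.append((item["vector"], marks))
    return combined
```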
The user vector may be generated according to the attribute of the user (e.g., gender, age, occupation, etc.) and the behavior trace of the user, which may be obtained from the behavior log of the user. In a video recommendation scene, a behavior log of a user can record various operations such as watching, long-time playing, commenting, collecting, forwarding, praise and the like of a video when the user accesses a website/platform every time.
The user vector contains the available information about the user, so processing the user vector can yield the ratio of the user's interest in each historical video in the viewed history sequence to the target user's total interest in the plurality of historical videos in the history sequence. The difficulty is that the user vector is simply a vector, and a suitable function must be found to extract from it the interest level of each historical video in the viewed history sequence. The interest level of the target user in one historical video in the history sequence is the ratio of the interest in that historical video to the target user's total interest in all historical videos in the history sequence. It will be appreciated that a user's interest in the same video may vary under different circumstances or in different history sequences. For example, for a user who dislikes repeatedly watching the same type of video, the interest in video A is lower when a video of the same type as video A has already appeared, and higher when no video of the same type has appeared. Therefore, the historical videos in the history sequence have an order, the interest levels extracted from the user vector also have an order, and these ordered interest levels can reflect the user's interest distribution more accurately.
Generally speaking, a user vector is a 64 × 64-dimensional vector. To extract from the target user's 64 × 64-dimensional user vector the target user's interest level in each of the 30 historical videos in the viewed history sequence, 30 ordered numbers can be used to represent those interest levels; the problem therefore becomes how to project the 64 × 64-dimensional vector into 30 ordered numbers.
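The projection in question can be as small as a single trainable linear layer; below is a sketch under the 64 × 64 / 30-video assumption used in the example (the layer itself and the random data are illustrative):

```python
import torch
import torch.nn as nn

user_vec = torch.randn(64, 64)                   # illustrative 64 x 64 user vector
project = nn.Linear(64 * 64, 30)                 # single layer: 4096 dimensions -> 30 ordered numbers

interest_levels = project(user_vec.flatten())    # one interest level per historical video
print(interest_levels.shape)                     # torch.Size([30])
```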
The user's interest in the video may be determined through various operations performed on the video by the user, for example, the user may be considered to be more interested in a video that is approved than a video that is not approved, more interested in a video that is forwarded than a video that is approved, and the like.
The history sequence of the user is weighted using the ratio of the user's interest in each watched historical video to the user's total interest in the plurality of videos in the history sequence, yielding the personalized sequence of the user. The personalized sequence can reflect the user's interest distribution over the historical videos in the history sequence: the more interested the user is in a video, the higher that video's weight.
To address the problem of how to project the user vector into a fusion coefficient, the applicant proposes training a single-layer first neural network and using it to automatically extract, from the user vector, the ratio of the user's interest in each historical video to the target user's total interest in the plurality of historical videos.
The user's interest level in each historical video in the user's history sequence can be represented by the fusion coefficient; the elements of the fusion coefficient are ordered, and their order is consistent with the order of the historical videos in the history sequence. When the user has a single history sequence, one group of fusion coefficients corresponding to that history sequence is extracted from the user vector, the history sequence is processed according to the fusion coefficients to obtain the user's personalized sequence, and video recommendation can then be performed for the user according to the personalized sequence. When the user has multiple history sequences, as many groups of fusion coefficients as there are history sequences are extracted from the user vector.
For example, for a praise sequence and a forwarding sequence over the same historical videos, the fusion coefficient of the praise sequence and the fusion coefficient of the forwarding sequence are extracted from the user vector respectively, and each sequence is then weighted by its own fusion coefficient to obtain a praise personalized sequence and a forwarding personalized sequence.
After the two personalized sequences are obtained, the praise personalized sequence and the forwarding personalized sequence can be fused into one personalized sequence of the user, and video recommendation is performed using it. Alternatively, a recommendation score can be obtained from each personalized sequence separately, and videos are recommended based on those recommendation scores.
FIG. 2 is a schematic flowchart of video recommendation based on two personalized sequences according to an exemplary embodiment. The praise personalized sequence and the forwarding personalized sequence are both used as personalized sequences: they are input, together with the video vectors of a plurality of candidate videos, into the video recommendation system; the video recommendation system obtains a praise score for each candidate video from the praise personalized sequence and a forwarding score for each candidate video from the forwarding personalized sequence; the praise score and the forwarding score of each candidate video are combined into a first recommendation score; and the recommended video of the user is determined based on the first recommendation score.
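A small sketch of the score fusion in FIG. 2; equal weighting of the praise score and the forwarding score is an assumed choice, since the text does not fix the fusion rule:

```python
def fuse_scores(praise_scores, forward_scores, praise_weight=1.0, forward_weight=1.0):
    """praise_scores[i] / forward_scores[i]: scores of candidate video i obtained by the
    recommendation system from the praise personalized sequence and the forwarding
    personalized sequence, respectively. Returns the first recommendation scores."""
    return [praise_weight * p + forward_weight * f
            for p, f in zip(praise_scores, forward_scores)]

first_scores = fuse_scores([0.7, 0.6, 0.2], [0.4, 0.5, 0.9])   # three candidate videos
recommended_index = max(range(len(first_scores)), key=first_scores.__getitem__)
```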
The initial first neural network can be supervised-trained by using the sample user vector and the sample historical sequence of the first sample user, so that a trained first neural network is obtained. The first neural network learns how to extract the fusion coefficients of the sample history sequence from the sample user vector. To ensure that the single-layer neural network is suitable for each user vector, the length of each user vector should be the same.
The first neural network extracts a fusion coefficient from the user vector, and the fusion coefficient is then used to weight the user's history sequence, yielding the user's personalized sequence. In the personalized sequence, the video vectors of the historical videos carry different weights, so the personalized sequence reflects the user's interest distribution more accurately. Therefore, when the video recommendation system recommends videos to the user according to the personalized sequence, the recommended videos better match the user's interests and the accuracy is higher. It should be understood that the method of the present application is not limited to a particular target user but applies to any user; in other words, any user may be the target user.
Optionally, while the video vectors of the multiple candidate videos and the personalized sequence of the target user are input into the video recommendation system, the user characteristics, the context characteristics, the time characteristics and the like of the target user can be vectorized and then input into the video recommendation system, so as to obtain a more accurate recommendation result. The training method of the video recommendation system may refer to the related art, which is not limited in the present invention.
In the disclosure, the user vector records various behaviors of the user, and the interest degree of the user in each history video in the history sequence can be extracted from the user vector, so that the first neural network can generate the personalized sequence of the target user according to the user vector and the history sequence of the target user, the personalized sequence of the target user comprises the ratio of the interest of the target user in each history video to the total interest of the target user in the plurality of history videos, and the personalized sequence of the target user can reflect the interest distribution of the target user in each history video in the history sequence. Furthermore, the video recommendation system carries out video recommendation on the target user according to the personalized sequence of the target user, and the recommended video is more in line with the interest distribution of the user and has higher accuracy.
1. Training step of the first neural network
Sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users are obtained. The specific obtaining method may refer to the obtaining method for obtaining the user vector and the history sequence of the target user.
The sample user vectors and sample history sequences of the plurality of first sample users are input into the initial first neural network to obtain the respective sample personalized sequences of the plurality of first sample users. The initial first neural network is the first neural network before training; its structure is the same as that of the trained first neural network. The way the initial first neural network obtains the sample personalized sequence of a first sample user is the same as the way the trained first neural network obtains the personalized sequence of the target user, except that the sample personalized sequence obtained by the initial first neural network is not yet accurate enough.
The method by which the initial first neural network obtains the sample personalized sequence of a first sample user is as follows: the sample user vector and the sample history sequence of the first sample user are input into the initial first neural network to obtain the ratio of the first sample user's interest in each historical video in the sample history sequence to the first sample user's total interest in all historical videos in the sample history sequence; the fusion coefficient of the sample history sequence of the first sample user is then obtained from the ratios corresponding to the historical videos.
Optionally, a scaled activation function may be used to process the ratios so that they fall within the same value range, yielding the fusion coefficient of the sample history sequence of the first sample user. Conventionally, the ratios would be processed with an ordinary activation function, but an ordinary sigmoid only shrinks the values and never amplifies them, so the resulting fusion coefficient may be too small. Therefore, the ratios can be processed with a doubled activation function δ(x) = 2·sigmoid(x), so that the resulting fusion coefficient lies in the range (0, 2).
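A one-line illustration of the doubled activation; the printed values in the comment are approximate:

```python
import torch

def doubled_sigmoid(x):
    """delta(x) = 2 * sigmoid(x): outputs lie in (0, 2), so a weight can be
    amplified (> 1) as well as shrunk (< 1), unlike the ordinary sigmoid."""
    return 2.0 * torch.sigmoid(x)

print(doubled_sigmoid(torch.tensor([-1.5, 0.0, 2.0])))   # ~[0.36, 1.00, 1.76]
```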
And generating a sample personalized sequence of the first sample user according to the fusion coefficient and the sample historical sequence of the first sample user. The number of elements in the fusion coefficient is the same as the number of video vectors in the sample history sequence, and each video vector can be weighted according to the corresponding element in the fusion coefficient of each video vector, so that the sample personalized sequence of the first sample user is obtained.
Correspondingly, the method by which the first neural network obtains the personalized sequence of the target user is as follows: the user vector and the history sequence of the target user are input into the first neural network to obtain the ratio of the target user's interest in each historical video in the history sequence to the target user's total interest in the plurality of historical videos; the fusion coefficient of the history sequence of the target user is obtained from the ratios corresponding to the historical videos; and the personalized sequence of the target user is generated from the fusion coefficient and the history sequence of the target user. Optionally, the doubled activation function may likewise be used to process the ratios so that they fall within the same value range, yielding the fusion coefficient of the history sequence of the target user.
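Putting these pieces together, a minimal sketch of the first neural network's forward pass follows. The single linear layer, the dimensions, and treating the layer's raw outputs directly as the ratios are assumptions; the text leaves these details open:

```python
import torch
import torch.nn as nn

class FirstNet(nn.Module):
    """Single-layer network: user vector -> fusion coefficient -> personalized sequence."""
    def __init__(self, user_dim=64 * 64, history_len=30):
        super().__init__()
        self.project = nn.Linear(user_dim, history_len)     # one output per historical video

    def forward(self, user_vec, history_seq):
        # user_vec: [user_dim]; history_seq: [history_len, video_dim], in viewing order.
        ratios = self.project(user_vec)                     # relative interest in each historical video
        fusion_coeff = 2.0 * torch.sigmoid(ratios)          # doubled activation, each weight in (0, 2)
        # Weight each historical video vector by its fusion-coefficient element.
        return fusion_coeff.unsqueeze(-1) * history_seq

personalized_seq = FirstNet()(torch.randn(64 * 64), torch.randn(30, 128))   # shape [30, 128]
```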
In this way, the generated personalized sequence of each user reflects the user's interest level in each historical video in the history sequence. Because the processing is performed on the ratios corresponding to the historical videos, the obtained fusion coefficients fall within the same interval, so the personalized sequences of different users are on a uniform scale, which facilitates the subsequent video recommendation task.
FIG. 3 is a schematic flowchart illustrating a video recommendation system assisting the first neural network in training according to an exemplary embodiment. The first neural network is used as a front module of the video recommendation system: the video recommendation system predicts whether one or more operations are performed on a video by the user, and the first neural network is trained in a supervised manner according to the operations the user actually performed on the video, yielding the trained first neural network. The personalized sequence generated by the first neural network is hard to supervise directly, so the video recommendation system assists in the supervised training: the video recommendation system recommends based on the personalized sequence output by the first neural network and predicts whether the various operations are performed on the video by the user, and the operations actually performed on the video by the user serve as the constraint for the supervised training of the first neural network.
With the assistance of the video recommendation system, in order to perform supervised training of the initial first neural network, the video vectors of a plurality of first sample videos watched by the first sample users and the labels of the plurality of first sample videos are acquired, where the label of a first sample video indicates whether each of the plurality of operations was performed on that first sample video by the corresponding first sample user. The plurality of operations may include long-duration playback, commenting, favoriting, forwarding, praising, and the like.
For each first sample user, the video vector of the first sample video watched by the first sample user and the sample personalized sequence of the first sample user are input into the video recommendation system to obtain a first prediction probability that each of the plurality of operations is performed on the first sample video by the first sample user; specifically, there is one prediction probability per operation.
A loss function is established from the difference between the first prediction probability and the label of each first sample video, the initial first neural network is trained in a supervised manner based on the loss function, and the first neural network is obtained when the initial first neural network converges.
Taking the prediction of the praise operation and the forwarding operation as an example: the video vector of the first sample video and the corresponding sample personalized sequence of the first sample user, obtained by the initial first neural network, are input into the video recommendation system. The video recommendation system predicts the probability that the first sample video is praised by the first sample user and the probability that the first sample video is forwarded by the first sample user. A praise loss function is established from the difference between whether the first sample video was actually praised by the first sample user, as recorded in its label, and the predicted probability that it is praised; a forwarding loss function is established on the same principle; and the network parameters of the initial first neural network are updated based on the praise loss function and the forwarding loss function.
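A minimal sketch of this joint supervised step follows. Binary cross-entropy is an assumed choice of loss, the small recommender head and all layer sizes are placeholders standing in for the (possibly already trained) video recommendation system, and FirstNet repeats the earlier sketch so the snippet is self-contained:

```python
import torch
import torch.nn as nn

class FirstNet(nn.Module):
    """Single-layer network: user vector -> fusion coefficient -> personalized sequence."""
    def __init__(self, user_dim=64 * 64, history_len=30):
        super().__init__()
        self.project = nn.Linear(user_dim, history_len)

    def forward(self, user_vec, history_seq):
        fusion_coeff = 2.0 * torch.sigmoid(self.project(user_vec))   # doubled activation
        return fusion_coeff.unsqueeze(-1) * history_seq

# Placeholder recommender: maps the sample video vector plus the flattened
# personalized sequence to a probability per operation (here: praise, forward).
recommender = nn.Sequential(
    nn.Linear(128 + 30 * 128, 64), nn.ReLU(), nn.Linear(64, 2), nn.Sigmoid())

first_net = FirstNet()
bce = nn.BCELoss()
optimizer = torch.optim.Adam(first_net.parameters(), lr=1e-3)

def training_step(user_vec, history_seq, sample_video_vec, labels):
    """One supervised step; labels = tensor([praised, forwarded]) with values 0.0 / 1.0."""
    personalized_seq = first_net(user_vec, history_seq)
    features = torch.cat([sample_video_vec, personalized_seq.flatten()])
    probs = recommender(features)                                    # [P(praise), P(forward)]
    # Praise loss plus forwarding loss, as described in the text.
    loss = bce(probs[0:1], labels[0:1]) + bce(probs[1:2], labels[1:2])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with random data for one sample user and one sample video.
training_step(torch.randn(64 * 64), torch.randn(30, 128),
              torch.randn(128), torch.tensor([1.0, 0.0]))
```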
It can be understood that the principle of training the first neural network based on one history sequence or several is the same: the first neural network is used as a front module of the video recommendation system, the video recommendation system predicts whether the one or more operations corresponding to each sequence are performed on a video, and the initial first neural network is trained in a supervised manner according to the operations actually performed on the video, yielding the trained first neural network. During the training stage of the first neural network, the video recommendation system may already be trained, or it may be trained simultaneously with the first neural network; using an already trained video recommendation system to assist the first neural network can shorten the training time of the first neural network. When video recommendation is actually performed, however, a trained video recommendation system is used.
In this way, considering that the personalized sequence generated by the first neural network is difficult to directly perform supervised training, the video recommendation system is used for assisting the first neural network to perform supervised training, so that the first neural network can generate an accurate personalized sequence of the user, and the generated personalized sequence is more suitable for the requirements of a downstream video recommendation system.
2. Video recommendation system for determining recommended videos
When the video vectors of the candidate videos and the personalized sequences of the target user are input into the video recommendation system, the user characteristics, the context characteristics, the time characteristics and the like of the target user can be input into the video recommendation system after being vectorized.
The video recommendation system predicts, for each candidate video, a first prediction probability that the target user performs each operation on that candidate video, based on the input vectors. For example, for candidate video A, the predicted probability of being praised by the target user is 0.7 and the predicted probability of being forwarded is 0.4; for candidate video B, the predicted probability of being praised is 0.6 and the predicted probability of being forwarded is 0.5.
According to the first prediction probability corresponding to each operation of a candidate video, the first recommendation score of the candidate video can be obtained, where the first prediction probabilities of different operations may carry different weights. For example, with a praise weight of 1 and a forwarding weight of 2, and following the previous example, the first recommendation score of candidate video A is 0.7 × 1 + 0.4 × 2 = 1.5, and the first recommendation score of candidate video B is 0.6 × 1 + 0.5 × 2 = 1.6.
The recommended video of the target user is determined from the plurality of candidate videos according to their first recommendation scores. Optionally, the candidate video with the highest first recommendation score may be determined as the recommended video of the target user; following the previous example, candidate video B rather than candidate video A should be recommended to the target user. Optionally, the candidate videos may also be ranked by first recommendation score, and the top N candidate videos are taken as the recommended videos.
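The worked example above, as a small sketch; the per-operation weights follow the example, and the top-N selection is the optional variant just described:

```python
def first_recommendation_score(operation_probs, operation_weights):
    """Weighted sum of the predicted per-operation probabilities of one candidate video."""
    return sum(operation_probs[op] * operation_weights[op] for op in operation_weights)

weights = {"praise": 1, "forward": 2}
candidates = {
    "A": {"praise": 0.7, "forward": 0.4},   # score 0.7*1 + 0.4*2 = 1.5
    "B": {"praise": 0.6, "forward": 0.5},   # score 0.6*1 + 0.5*2 = 1.6
}
scores = {name: first_recommendation_score(probs, weights) for name, probs in candidates.items()}

top_n = 1
recommended = sorted(scores, key=scores.get, reverse=True)[:top_n]   # -> ["B"]
```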
Therefore, video recommendation is performed on the target user according to the personalized sequence of the target user, the recommended video is more in line with the interest distribution of the target user, and the accuracy is higher.
Second, attention sequence
Optionally, on the basis of the above technical solution, an attention sequence of the target user may also be generated, where the attention sequence of the target user is obtained by weighting a history sequence of the target user, and the added weight reflects a degree of attention of the target user to each history video in the history sequence. Thus, the attention sequence of the target user includes the degree of attention of the target user to each of the historical videos in the historical sequence.
The attention sequence of the target user is obtained by the second neural network after the user vector and the history sequence of the target user are input into the second neural network, the second neural network is obtained by supervised training, and a training method of the second neural network will be described later.
The foregoing introduces that video recommendation is performed on a target user based on a personalized sequence of the target user, that is, video vectors of a plurality of candidate videos and the personalized sequence of the target user are input into a video recommendation system, so as to obtain a recommended video of the target user determined from the plurality of candidate videos; the video recommendation method can also be used for performing video recommendation on the target user based on the attention sequence of the target user, namely, the video vectors of the candidate videos and the attention sequence of the target user are input into a video recommendation system to obtain a recommended video of the target user determined from the candidate videos; and video recommendation can be performed on the target user based on the personalized sequence and the attention sequence of the target user at the same time, namely the video vectors of the candidate videos, the personalized sequence and the attention sequence of the target user are input into a video recommendation system, and the recommended video of the target user determined from the candidate videos is obtained. Fig. 4 is a flow diagram illustrating video recommendation based on personalized sequences and attention sequences, according to an example embodiment.
Because the attention sequence of the target user comprises the attention degree of the target user to each historical video in the historical sequence, compared with the method for performing video recommendation directly based on the historical sequence of the target user, the method for performing video recommendation on the target user based on the attention sequence of the target user can enable the determined recommendation video to be more related to the historical video which is more concerned by the target user in the historical sequence, and therefore recommendation is more accurate.
1. Attention sequence generation method
As described above, the user vector includes all the information of the user that can be acquired, so that the user vector of the target user is processed, the attention degree of the target user to each history video in the history sequence can also be obtained, and the attention degree of the target user to each history video in the history sequence is sorted according to the sequence of each history video in the history sequence, that is, the attention coefficient of the history sequence can be obtained. And weighting the historical sequence of the target user according to the attention coefficient of the historical sequence to obtain the attention sequence of the target user.
The attention sequence of the target user is obtained by inputting the user vector and the history sequence of the target user into the second neural network. The second neural network learns, via the sample user vectors and the sample history sequences of the second sample user, to project the sample user vectors as projection vectors of the same dimensions as the video vectors of the historical video.
Optionally, inputting the user vector of the target user and the history sequence into a second neural network, where the second neural network may project the user vector of the target user into a projection vector of the target user with the same dimension as the video vector of the history video; calculating the similarity between the video vector of each historical video in the historical sequence and the projection vector of the target user, taking the similarity corresponding to each historical video as the weight corresponding to the historical video when the attention sequence is generated according to the historical sequence, wherein the historical videos have the sequence in the historical sequence, so that the weights of the historical videos also have the same sequence, and the weights with the sequence are taken as the attention coefficient of the historical sequence; the attention coefficient is used for weighting the history sequence of the target user, and the attention sequence of the target user can be generated.
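A minimal sketch of this computation is given below in PyTorch; the linear projection layer and the choice of cosine similarity are assumptions made for illustration and are not fixed by the embodiment.

```python
import torch
import torch.nn as nn

class AttentionSequenceNet(nn.Module):
    """Sketch of a second neural network that turns a history sequence into an attention sequence."""

    def __init__(self, user_dim: int, video_dim: int):
        super().__init__()
        # Projects the user vector into the same dimension as the video vectors.
        self.projection = nn.Linear(user_dim, video_dim)

    def forward(self, user_vector: torch.Tensor, history_sequence: torch.Tensor) -> torch.Tensor:
        # user_vector: (user_dim,); history_sequence: (seq_len, video_dim)
        projection_vector = self.projection(user_vector)
        # The similarity between each historical video vector and the projection
        # vector serves as that video's weight, i.e. the attention coefficient.
        attention_coefficients = torch.cosine_similarity(
            history_sequence, projection_vector.unsqueeze(0), dim=-1)
        # Weighting the history sequence yields the attention sequence.
        return attention_coefficients.unsqueeze(-1) * history_sequence
```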
In this way, the generated attention sequence of the target user includes the attention degree of the target user to each historical video in the historical sequence, and therefore when video recommendation is performed based on the attention sequence of the target user, the recommended video can be more similar to each video in the historical sequence, namely, the recommended video better conforms to the preference of the user.
2. Training step of second neural network
Sample user vectors of a plurality of second sample users are obtained, and respective sample history sequences of the plurality of second sample users are obtained. For the specific obtaining method, reference may be made to the method for obtaining the user vector and the history sequence of the target user. The second sample users may be the same as the first sample users.
And inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into the initial second neural network to obtain the respective sample attention sequences of the plurality of second sample users. Wherein, the initial second neural network is a second neural network which is not trained, and the structure of the initial second neural network is the same as that of the trained second neural network. The method for obtaining the sample attention sequence of the second sample user by using the initial second neural network is the same as the method for obtaining the attention sequence of the target user by using the second neural network described above, except that the attention sequence obtained by the initial second neural network is not accurate enough.
FIG. 5 is a schematic flow diagram illustrating a video recommendation system assisting a second neural network in training according to an example embodiment. In order to perform supervised training on the initial second neural network with the assistance of a video recommendation system, video vectors of a plurality of second sample videos watched by each second sample user and labels of the plurality of second sample videos are acquired, where the label of one second sample video indicates, for each of a plurality of operations, whether that operation is performed on the second sample video by the corresponding second sample user. The plurality of operations may include long-duration playing, commenting, collecting, forwarding, liking, and the like. In the case where the second sample user is the first sample user, the first sample video of the first sample user may be directly taken as the second sample video of the second sample user.
For each second sample user, the video vector of the second sample video viewed by the second sample user and the sample attention sequence of the second sample user are input into the video recommendation system, so as to obtain second prediction probabilities of whether the second sample video is subjected to each of the plurality of operations by the second sample user; specifically, there is one second prediction probability for each operation. A loss function is established according to the difference between the second prediction probability of each second sample video and its label, supervised training is performed on the initial second neural network based on the loss function, and the second neural network is obtained when the initial second neural network converges.
Taking the prediction of the liking operation and the forwarding operation as an example: the video vector of the second sample video and the corresponding sample attention sequence of the second sample user, which is obtained by the initial second neural network, are input into the video recommendation system. The video recommendation system predicts a probability value that the second sample video is liked by the second sample user and a probability value that the second sample video is forwarded by the second sample user. A liking loss function is established according to the difference between whether the second sample video is actually liked by the second sample user, which is contained in the label of the second sample video, and the second prediction probability that the second sample video is liked by the second sample user; a forwarding loss function is established on the same principle; and the network parameters of the initial second neural network are updated based on the liking loss function and the forwarding loss function.
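A rough sketch of this training step is given below; the interfaces of the video recommendation system and of the initial second neural network, and the use of binary cross-entropy losses, are assumptions made only to illustrate how the two loss functions drive the parameter update.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def train_step(initial_second_net, recommendation_system, optimizer,
               user_vector, history_sequence, sample_video_vector, label):
    # label is assumed to hold the ground truth from the second sample video's
    # label, e.g. {"like": 1.0, "forward": 0.0}.
    sample_attention_sequence = initial_second_net(user_vector, history_sequence)
    # The video recommendation system predicts, per operation, the probability
    # that the second sample user performs it on the second sample video.
    predicted = recommendation_system(sample_video_vector, sample_attention_sequence)
    like_loss = bce(predicted["like"], torch.tensor(label["like"]))
    forward_loss = bce(predicted["forward"], torch.tensor(label["forward"]))
    loss = like_loss + forward_loss
    optimizer.zero_grad()
    loss.backward()   # gradients flow back into the initial second network
    optimizer.step()  # the optimizer is assumed to hold its parameters
    return loss.item()
```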
In this way, considering that the attention sequence generated by the second neural network is difficult to directly perform supervised training, the second neural network is assisted by the video recommendation system to perform supervised training, so that the second neural network can generate an accurate attention sequence of the user, and the generated attention sequence is more suitable for the requirements of the downstream video recommendation system.
Optionally, the second neural network may also be trained with the personalized sequence as an additional input.
And inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into a video recommendation system together to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user. The method for obtaining the sample personalized sequence of the second sample user may refer to the method for obtaining the sample personalized sequence of the first sample user.
Alternatively, the second neural network may be trained simultaneously with the first neural network, that is, the sample attention sequence of the second sample user input to the video recommendation system is obtained through the initial second neural network, and the sample personalized sequence of the second sample user input to the video recommendation system may be obtained through the initial first neural network.
Alternatively, the second neural network may be trained separately from the first neural network, that is, the sample attention sequence of the second sample user input into the video recommendation system is obtained through the initial second neural network, and the personalized sequence of the second sample user input into the video recommendation system is obtained through the trained first neural network. Of course, the second neural network may be trained first, and then the first neural network is trained, that is, the attention sequence of the second sample user input to the video recommendation system is obtained through the trained second neural network, and the sample personalized sequence of the second sample user input to the video recommendation system is obtained through the initial first neural network.
Therefore, the video recommendation scenario based on both the personalized sequence and the attention sequence has higher recommendation accuracy. Compared with the case where the personalized sequence is not input, training the second neural network with the personalized sequence as an additional input makes the second neural network better suited to this video recommendation scenario, so that recommendation based on the personalized sequence and the attention sequence is more accurate. In addition, because the video recommendation system assists the second neural network in training, the attention sequence obtained by the second neural network further meets the requirements of the video recommendation system.
3. Video recommendation system for determining recommended videos
The video vectors of the plurality of candidate videos and the personalized sequence and the attention sequence of the target user can be input into the video recommendation system, and second prediction probabilities that the target user performs various operations on the candidate videos are obtained. When the video vectors of the plurality of candidate videos and the personalized sequence of the target user are input into the video recommendation system, this is equivalent to the scheme described in the foregoing; when the video vectors of the plurality of candidate videos and the attention sequence of the target user are input into the video recommendation system, the attention degree of the target user to each historical video in the historical sequence is considered; when the video vectors of the plurality of candidate videos, the personalized sequence of the target user and the attention sequence are input into the video recommendation system, both the interest degree and the attention degree of the target user in each video in the historical sequence are comprehensively considered, and the obtained recommended video has higher accuracy.
Optionally, while the video vectors of the multiple candidate videos, the personalized sequences and the attention sequences of the target user are input into the video recommendation system, the user characteristics, the context characteristics, the time characteristics and the like of the target user may also be input into the video recommendation system after being vectorized, so as to improve the accuracy of the recommended video determined by the video recommendation system.
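Purely as an illustration of how these inputs may be assembled for the video recommendation system (the concatenation scheme and the parameter names below are assumptions rather than features of the embodiment):

```python
import torch

def score_candidate(recommendation_system, candidate_video_vector,
                    personalized_sequence, attention_sequence,
                    user_features=None, context_features=None, time_features=None):
    # Any vectorized side features that are provided are concatenated into one side input.
    extras = [f for f in (user_features, context_features, time_features) if f is not None]
    side_input = torch.cat(extras) if extras else None
    # Returns, for this candidate video, the second probability of each operation.
    return recommendation_system(candidate_video_vector, personalized_sequence,
                                 attention_sequence, side_input)
```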
The video recommendation system may predict, for each candidate video and based on the input vectors, a second probability that each operation is performed on the candidate video by the target user. A second recommendation score of each candidate video is then obtained according to the second probabilities corresponding to the operations of that candidate video, wherein the second probabilities corresponding to different operations can have different weights.
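The weighted combination of the per-operation second probabilities into a second recommendation score may, for example, be sketched as follows; the particular operations and weights are illustrative assumptions only.

```python
# Hypothetical per-operation weights; real weights would be tuned for the product.
OPERATION_WEIGHTS = {"long_play": 1.0, "forward": 0.9, "comment": 0.8,
                     "collect": 0.7, "like": 0.6}

def second_recommendation_score(second_probabilities: dict) -> float:
    # second_probabilities maps each operation to the predicted probability
    # that the target user performs it on the candidate video.
    return sum(OPERATION_WEIGHTS[op] * p for op, p in second_probabilities.items())
```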
And determining the recommended video of the target user from the plurality of candidate videos according to the second recommendation scores of the plurality of candidate videos. Optionally, the candidate video with the highest second recommendation score in the plurality of candidate videos may be determined as the recommended video of the target user; the plurality of candidate videos can also be ranked according to the second recommendation score, and the top N candidate videos are taken as recommendation videos.
It is understood that the second probability may be obtained when the video vectors of the candidate videos together with both the personalized sequence and the attention sequence of the target user are input into the video recommendation system, or when the video vectors of the candidate videos together with only one of the personalized sequence and the attention sequence of the target user are input into the video recommendation system.
Therefore, video recommendation is performed on the target user based on the personalized sequence and the attention sequence of the target user, the recommended videos not only accord with the interest distribution of the target user, but also better fit historical videos in the historical sequence concerned by the target user, and therefore the accuracy is higher.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Fig. 6 is a block diagram illustrating a video recommendation apparatus according to an exemplary embodiment, and referring to fig. 6, the apparatus includes a vector acquisition module 61, a sequence acquisition module 62, a personalized sequence acquisition module 63, and a video recommendation module 64.
The vector acquiring module 61 is configured to acquire video vectors of a plurality of candidate videos and a user vector of a target user;
the sequence obtaining module 62 is configured to obtain a history sequence of the target user, which is composed of video vectors of a plurality of history videos, according to the plurality of history videos watched by the target user;
the personalized sequence acquiring module 63 is configured to input the user vector and the historical sequence of the target user into a first neural network to obtain a personalized sequence of the target user, where the personalized sequence of the target user includes a ratio of the interest of the target user in each historical video to the total interest of the target user in the plurality of historical videos;
the video recommendation module 64 is configured to input the video vectors of the plurality of candidate videos and the personalized sequence of the target user into a video recommendation system, so as to obtain a recommended video of the target user determined from the plurality of candidate videos.
Optionally, the personalized sequence acquiring module 63 includes:
a ratio obtaining unit configured to input the user vector and the history sequence of the target user into the first neural network, and obtain a ratio of the interest of the target user in each history video to the total interest of the target user in the plurality of history videos;
a fusion coefficient determining unit configured to obtain a fusion coefficient of the history sequence of the target user according to respective corresponding ratios of the plurality of history videos;
and the personalized sequence acquisition unit is configured to generate a personalized sequence of the target user according to the fusion coefficient and the historical sequence of the target user.
Optionally, the training step of the first neural network includes:
obtaining sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of first sample users into an initial first neural network to obtain respective sample personalized sequences of the plurality of first sample users;
acquiring a video vector of a first sample video watched by each first sample user and a label representing whether the first sample video is subjected to various operations by the first sample user;
inputting the video vector of the first sample video watched by each first sample user and the sample personalized sequence of the first sample user into the video recommendation system to obtain a first prediction probability of whether the first sample video is subjected to various operations by the first sample user;
and carrying out supervised training on the initial first neural network based on the first prediction probability and the label of each first sample video to obtain the first neural network.
Optionally, the video recommendation module 64 includes:
a first probability obtaining unit configured to input video vectors of the plurality of candidate videos and the personalized sequence of the target user into the video recommendation system, resulting in first probabilities that the plurality of candidate videos are each executed by the target user for a plurality of operations;
a first score calculation unit configured to calculate a first recommendation score for each of the plurality of candidate videos according to a first probability for each of the plurality of candidate videos;
the first video recommending unit is configured to determine recommended videos of the target user from the candidate videos according to the first recommendation scores of the candidate videos.
Optionally, after the history sequence of the target user composed of the video vectors of the plurality of history videos is obtained according to the plurality of history videos watched by the target user, the apparatus further includes:
an attention sequence acquisition module configured to input the user vector and the history sequence of the target user into a second neural network to obtain an attention sequence of the target user;
the video recommendation module 64 includes:
and the video recommending unit is configured to input the video vectors of the candidate videos and the personalized sequence and the attention sequence of the target user into the video recommending system to obtain the recommended video of the target user determined from the candidate videos.
Optionally, the attention sequence acquisition module comprises:
a vector projection unit configured to input the user vector of the target user and a history sequence into the second neural network, and project the user vector of the target user into a projection vector of the same dimension as a video vector of the history video;
the similarity calculation unit is configured to calculate the similarity between the video vector of each historical video in the historical sequence of the target user and the projection vector, and obtain the attention coefficient of the historical sequence of the target user;
an attention sequence generating unit configured to generate an attention sequence of the target user according to the attention coefficient and a history sequence of the target user.
Optionally, the training step of the second neural network comprises:
obtaining sample user vectors of a plurality of second sample users and respective sample history sequences of the plurality of second sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into an initial second neural network to obtain respective sample attention sequences of the plurality of second sample users;
acquiring a video vector of a second sample video watched by each second sample user, and a label for representing whether the second sample video is subjected to various operations by the second sample user;
inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user;
and carrying out supervised training on the initial second neural network based on the second prediction probability and the label of each second sample video to obtain the second neural network.
Optionally, the inputting the video vector of the second sample video viewed by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user includes:
obtaining a sample personalized sequence of a second sample user;
and inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user.
Optionally, the video recommendation module 64 includes:
a second probability obtaining unit, configured to input the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommendation system, so as to obtain a second probability that the candidate videos are subjected to multiple operations by the target user;
a second score calculation unit configured to calculate a second recommendation score for each of the plurality of candidate videos according to a second probability for each of the plurality of candidate videos;
a second video recommending unit configured to determine a recommended video of the target user from the plurality of candidate videos according to the respective second recommendation scores of the plurality of candidate videos.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an apparatus for video recommendation, according to an example embodiment. The apparatus 700 may be, among other things, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
Referring to fig. 7, apparatus 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls overall operation of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 702 may include one or more processors 720 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 702 may include one or more modules that facilitate interaction between processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation at the apparatus 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 706 provides power to the various components of the device 700. The power components 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 708 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 700 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, audio component 710 includes a Microphone (MIC) configured to receive external audio signals when apparatus 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 704 or transmitted via the communication component 716. In some embodiments, audio component 710 also includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 714 includes one or more sensors for providing status assessment of various aspects of the apparatus 700. For example, the sensor assembly 714 may detect an open/closed state of the device 700 and the relative positioning of components, such as a display and keypad of the device 700; the sensor assembly 714 may also detect a change in position of the device 700 or a component of the device 700, the presence or absence of user contact with the device 700, the orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices. The apparatus 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, executable by the processor 720 of the apparatus 700 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 8 is a block diagram illustrating an apparatus for video recommendation in accordance with an example embodiment. For example, the apparatus 800 may be provided as a server. Referring to fig. 8, the apparatus 800 includes a processing component 822, which further includes one or more processors, and memory resources, represented by memory 832, for storing instructions, e.g., a computer program product, executable by the processing component 822. The computer program product stored in memory 832 may include one or more modules each corresponding to a set of instructions. Further, the processing component 822 is configured to execute instructions to perform the video recommendation method described above.
The device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input/output (I/O) interface 858. The apparatus 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (20)

1. A method for video recommendation, comprising:
acquiring video vectors of a plurality of candidate videos and a user vector of a target user, wherein the user vector of the target user is generated according to the attribute of the target user and the behavior track of the target user;
acquiring a history sequence of the target user, which is composed of video vectors of a plurality of history videos, according to the plurality of history videos watched by the target user;
inputting the user vector and the history sequence of the target user into a first neural network to obtain a personalized sequence of the target user, wherein the personalized sequence of the target user comprises the ratio of the interest of the target user to each history video to the total interest of the target user to the plurality of history videos;
inputting the video vectors of the candidate videos and the personalized sequence of the target user into a video recommendation system, and obtaining a recommended video of the target user determined by the video recommendation system from the plurality of candidate videos based on first probabilities that the plurality of candidate videos are respectively executed by the target user for various operations.
2. The method of claim 1, wherein the inputting the user vector and the historical sequence of the target user into a first neural network to obtain a personalized sequence of the target user comprises:
inputting the user vector and the historical sequence of the target user into the first neural network to obtain the ratio of the interest of the target user to each historical video to the total interest of the target user to the plurality of historical videos;
obtaining a fusion coefficient of the historical sequence of the target user according to the respective corresponding ratios of the plurality of historical videos;
and generating a personalized sequence of the target user according to the fusion coefficient and the historical sequence of the target user.
3. The method of claim 1 or 2, wherein the step of training the first neural network comprises:
obtaining sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of first sample users into an initial first neural network to obtain respective sample personalized sequences of the plurality of first sample users;
acquiring a video vector of a first sample video watched by each first sample user and a label representing whether the first sample video is subjected to various operations by the first sample user;
inputting the video vector of the first sample video watched by each first sample user and the sample personalized sequence of the first sample user into the video recommendation system to obtain a first prediction probability of whether the first sample video is subjected to various operations by the first sample user;
and carrying out supervised training on the initial first neural network based on the first prediction probability and the label of each first sample video to obtain the first neural network.
4. The method of claim 1, wherein inputting the video vectors of the plurality of candidate videos and the personalized sequence of the target user into a video recommendation system results in a recommended video of the target user determined from the plurality of candidate videos, comprising:
inputting the video vectors of the candidate videos and the personalized sequence of the target user into the video recommendation system to obtain a first probability that the candidate videos are respectively executed by the target user for various operations;
calculating a first recommendation score of each of the candidate videos according to the first probability of each of the candidate videos;
and determining a recommended video of the target user from the candidate videos according to the first recommendation scores of the candidate videos.
5. The method according to claim 1, wherein after the obtaining of the history sequence of the target user composed of the video vectors of the plurality of history videos watched by the target user, the method further comprises:
inputting the user vector and the historical sequence of the target user into a second neural network to obtain the attention sequence of the target user;
inputting the video vectors of the candidate videos and the personalized sequence of the target user into a video recommendation system to obtain a recommended video of the target user determined from the candidate videos, wherein the method comprises the following steps:
and inputting the video vectors of the candidate videos, the personalized sequences and the attention sequences of the target user into the video recommendation system to obtain the recommended video of the target user determined from the candidate videos.
6. The method of claim 5, wherein the inputting the user vector and the historical sequence of the target user into a second neural network to obtain the attention sequence of the target user comprises:
inputting the user vector and the historical sequence of the target user into the second neural network, and projecting the user vector of the target user into a projection vector with the same dimension as the video vector of the historical video;
calculating the similarity between the video vector of each historical video in the historical sequence of the target user and the projection vector to obtain the attention coefficient of the historical sequence of the target user;
and generating the attention sequence of the target user according to the attention coefficient and the history sequence of the target user.
7. The method of claim 5 or 6, wherein the training step of the second neural network comprises:
obtaining sample user vectors of a plurality of second sample users and respective sample history sequences of the plurality of second sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into an initial second neural network to obtain respective sample attention sequences of the plurality of second sample users;
acquiring a video vector of a second sample video watched by each second sample user and a label representing whether the second sample video is subjected to various operations by the second sample user;
inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user;
and carrying out supervised training on the initial second neural network based on the second prediction probability and the label of each second sample video to obtain the second neural network.
8. The method of claim 7, wherein inputting the video vector of the second sample video viewed by each of the second sample users and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user comprises:
obtaining a sample personalized sequence of each second sample user;
and inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user.
9. The method of claim 5, wherein the inputting the video vectors of the plurality of candidate videos and the personalized sequence and attention sequence of the target user into the video recommendation system to obtain the recommended video of the target user determined from the plurality of candidate videos comprises:
inputting the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommendation system to obtain a second probability that the candidate videos are subjected to multiple operations by the target user;
calculating a second recommendation score of each of the candidate videos according to a second probability of each of the candidate videos;
and determining the recommended video of the target user from the candidate videos according to the second recommended scores of the candidate videos.
10. A video recommendation apparatus, comprising:
a vector acquisition module configured to acquire video vectors of a plurality of candidate videos and a user vector of a target user, the user vector of the target user being generated according to an attribute of the target user and a behavior track of the target user;
the sequence acquisition module is configured to acquire a historical sequence of the target user, which is composed of video vectors of a plurality of historical videos, according to the plurality of historical videos watched by the target user;
a personalized sequence acquisition module configured to input the user vector and the historical sequence of the target user into a first neural network to obtain a personalized sequence of the target user, where the personalized sequence of the target user includes a ratio of an interest of the target user in each historical video to a total interest of the target user in the plurality of historical videos;
a video recommendation module configured to input the video vectors of the plurality of candidate videos and the personalized sequence of the target user into a video recommendation system, resulting in a recommended video of the target user determined by the video recommendation system from the plurality of candidate videos based on first probabilities that the plurality of candidate videos are each performed by the target user in a plurality of operations.
11. The apparatus of claim 10, wherein the personalized sequence acquisition module comprises:
a ratio obtaining unit, configured to input the user vector and the history sequence of the target user into the first neural network, to obtain a ratio of an interest of the target user in each history video to a total interest of the target user in the plurality of history videos;
a fusion coefficient determining unit configured to obtain a fusion coefficient of the history sequence of the target user according to respective corresponding ratios of the plurality of history videos;
and the personalized sequence acquisition unit is configured to generate a personalized sequence of the target user according to the fusion coefficient and the historical sequence of the target user.
12. The apparatus of claim 10 or 11, wherein the training step of the first neural network comprises:
obtaining sample user vectors of a plurality of first sample users and respective sample history sequences of the plurality of first sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of first sample users into an initial first neural network to obtain respective sample personalized sequences of the plurality of first sample users;
acquiring a video vector of a first sample video watched by each first sample user and a label representing whether the first sample video is subjected to various operations by the first sample user;
inputting the video vector of the first sample video watched by each first sample user and the sample personalized sequence of the first sample user into the video recommendation system to obtain a first prediction probability of whether the first sample video is subjected to various operations by the first sample user;
and carrying out supervised training on the initial first neural network based on the first prediction probability and the label of each first sample video to obtain the first neural network.
13. The apparatus of claim 10, wherein the video recommendation module comprises:
a first probability obtaining unit configured to input the video vectors of the plurality of candidate videos and the personalized sequence of the target user into the video recommendation system, resulting in first probabilities of the plurality of candidate videos being each executed by the target user for various operations;
a first score calculation unit configured to calculate a first recommendation score for each of the plurality of candidate videos according to a first probability for each of the plurality of candidate videos;
a first video recommending unit configured to determine a recommended video of the target user from the plurality of candidate videos according to the respective first recommendation scores of the plurality of candidate videos.
14. The apparatus according to claim 10, wherein, after the history sequence of the target user composed of the video vectors of the plurality of history videos watched by the target user is obtained, the apparatus further comprises:
an attention sequence acquisition module configured to input the user vector and the history sequence of the target user into a second neural network to obtain an attention sequence of the target user;
the video recommendation module comprises:
and the video recommending unit is configured to input the video vectors of the candidate videos and the personalized sequence and the attention sequence of the target user into the video recommending system to obtain the recommended video of the target user determined from the candidate videos.
15. The apparatus of claim 14, wherein the attention sequence acquisition module comprises:
a vector projection unit configured to input the user vector of the target user and a history sequence into the second neural network, and project the user vector of the target user into a projection vector of the same dimension as a video vector of the history video;
the similarity calculation unit is configured to calculate the similarity between the video vector of each historical video in the historical sequence of the target user and the projection vector, and obtain the attention coefficient of the historical sequence of the target user;
an attention sequence generating unit configured to generate an attention sequence of the target user according to the attention coefficient and a history sequence of the target user.
16. The apparatus of claim 14 or 15, wherein the training step of the second neural network comprises:
obtaining sample user vectors of a plurality of second sample users and respective sample history sequences of the plurality of second sample users;
inputting the sample user vectors and the sample historical sequences of the plurality of second sample users into an initial second neural network to obtain respective sample attention sequences of the plurality of second sample users;
acquiring a video vector of a second sample video watched by each second sample user and a label representing whether the second sample video is subjected to various operations by the second sample user;
inputting the video vector of the second sample video watched by each second sample user and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user;
and carrying out supervised training on the initial second neural network based on the second prediction probability and the label of each second sample video to obtain the second neural network.
17. The apparatus of claim 16, wherein said inputting the video vector of the second sample video viewed by each of the second sample users and the sample attention sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user comprises:
obtaining a sample personalized sequence of each second sample user;
and inputting the video vector of the second sample video watched by each second sample user, the sample attention sequence and the sample personalized sequence of the second sample user into the video recommendation system to obtain a second prediction probability of whether the second sample video is subjected to various operations by the second sample user.
18. The apparatus of claim 14, wherein the video recommendation module comprises:
a second probability obtaining unit, configured to input the video vectors of the candidate videos, and the personalized sequence and the attention sequence of the target user into the video recommendation system, so as to obtain a second probability that the candidate videos are subjected to multiple operations by the target user;
a second score calculation unit configured to calculate a second recommendation score for each of the plurality of candidate videos according to a second probability for each of the plurality of candidate videos;
a second video recommending unit configured to determine a recommended video of the target user from the plurality of candidate videos according to the respective second recommendation scores of the plurality of candidate videos.
19. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video recommendation method of any of claims 1 to 9.
20. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video recommendation method of any of claims 1-9.
CN202210517169.9A 2022-05-13 2022-05-13 Video recommendation method and device, electronic equipment, storage medium and program product Active CN114722238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210517169.9A CN114722238B (en) 2022-05-13 2022-05-13 Video recommendation method and device, electronic equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210517169.9A CN114722238B (en) 2022-05-13 2022-05-13 Video recommendation method and device, electronic equipment, storage medium and program product

Publications (2)

Publication Number Publication Date
CN114722238A (en) 2022-07-08
CN114722238B (en) 2022-09-30

Family

ID=82230454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210517169.9A Active CN114722238B (en) 2022-05-13 2022-05-13 Video recommendation method and device, electronic equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN114722238B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008409A (en) * 2019-04-12 2019-07-12 苏州市职业大学 Based on the sequence of recommendation method, device and equipment from attention mechanism
CN110399565A (en) * 2019-07-29 2019-11-01 北京理工大学 Based on when null cycle attention mechanism recurrent neural network point of interest recommended method
CN110929164A (en) * 2019-12-09 2020-03-27 北京交通大学 Interest point recommendation method based on user dynamic preference and attention mechanism
CN112395496A (en) * 2020-10-22 2021-02-23 上海众源网络有限公司 Information recommendation method and device, electronic equipment and storage medium
CN112417207A (en) * 2020-11-24 2021-02-26 未来电视有限公司 Video recommendation method, device, equipment and storage medium
CN112528147A (en) * 2020-12-10 2021-03-19 北京百度网讯科技有限公司 Content recommendation method and apparatus, training method, computing device, and storage medium
CN112883257A (en) * 2021-01-11 2021-06-01 北京达佳互联信息技术有限公司 Behavior sequence data processing method and device, electronic equipment and storage medium
CN112905648A (en) * 2021-02-04 2021-06-04 北京邮电大学 Multi-target recommendation method and system based on multi-task learning
CN113207010A (en) * 2021-06-02 2021-08-03 清华大学 Model training method, live broadcast recommendation method, device and program product
CN113435685A (en) * 2021-04-28 2021-09-24 桂林电子科技大学 Course recommendation method of hierarchical Attention deep learning model
CN113487377A (en) * 2021-06-07 2021-10-08 贵州电网有限责任公司 Individualized real-time recommendation method based on GRU network
CN113746875A (en) * 2020-05-27 2021-12-03 百度在线网络技术(北京)有限公司 Voice packet recommendation method, device, equipment and storage medium
CN113822776A (en) * 2021-09-29 2021-12-21 中国平安财产保险股份有限公司 Course recommendation method, device, equipment and storage medium
CN114245185A (en) * 2021-11-30 2022-03-25 北京达佳互联信息技术有限公司 Video recommendation method, model training method, device, electronic equipment and medium
CN114339417A (en) * 2021-12-30 2022-04-12 未来电视有限公司 Video recommendation method, terminal device and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mashup FOAF for Video Recommendation LightWeight Prototype; Shijun Li et al.; 2010 Seventh Web Information Systems and Applications Conference; 2010-09-23; 190-193 *
A Personalized Video Recommendation Strategy Based on User Playback Behavior Sequences; Wang Na et al.; Chinese Journal of Computers; 2020-01-31; Vol. 43, No. 1; 123-135 *
A Video Recommendation Algorithm Based on the Combination of FM and DQN; Lyu Yamin; Computer and Digital Engineering; 2021-09-30, No. 9; 1771-1776 *

Also Published As

Publication number Publication date
CN114722238A (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN109800325B (en) Video recommendation method and device and computer-readable storage medium
CN109871896B (en) Data classification method and device, electronic equipment and storage medium
CN109684510B (en) Video sequencing method and device, electronic equipment and storage medium
CN110688527A (en) Video recommendation method and device, storage medium and electronic equipment
CN111859020B (en) Recommendation method, recommendation device, electronic equipment and computer readable storage medium
CN111160448B (en) Training method and device for image classification model
CN109670077B (en) Video recommendation method and device and computer-readable storage medium
CN109961094B (en) Sample acquisition method and device, electronic equipment and readable storage medium
CN111539443A (en) Image recognition model training method and device and storage medium
CN109543069B (en) Video recommendation method and device and computer-readable storage medium
CN112148980B (en) Article recommending method, device, equipment and storage medium based on user click
CN112148923B (en) Method for ordering search results, method, device and equipment for generating ordering model
CN110941727B (en) Resource recommendation method and device, electronic equipment and storage medium
CN112948704A (en) Model training method and device for information recommendation, electronic equipment and medium
CN111246255B (en) Video recommendation method and device, storage medium, terminal and server
CN112784151B (en) Method and related device for determining recommended information
CN110019965B (en) Method and device for recommending expression image, electronic equipment and storage medium
CN112308588A (en) Advertisement putting method and device and storage medium
CN113656637B (en) Video recommendation method and device, electronic equipment and storage medium
CN114722238B (en) Video recommendation method and device, electronic equipment, storage medium and program product
CN115203573A (en) Portrait label generating method, model training method, device, medium and chip
CN111428806B (en) Image tag determining method and device, electronic equipment and storage medium
CN112712385B (en) Advertisement recommendation method and device, electronic equipment and storage medium
CN111898019B (en) Information pushing method and device
CN113609380A (en) Label system updating method, searching method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant