CN109558514B - Video recommendation method, device thereof, information processing equipment and storage medium - Google Patents
- Publication number: CN109558514B
- Application number: CN201910014819.6A
- Authority: CN (China)
- Classification: Y02D10/00 (energy efficient computing, e.g. low power processors, power management or thermal management)
- Legal status: Active (an assumption, not a legal conclusion; no legal analysis has been performed)
Abstract
The invention discloses a video recommendation method, a device thereof, an information processing device, and a storage medium. A plurality of sequentially correlated video sequences are extracted from the historical behavior data of all users and used as samples for training a neural network model. After the neural network model is trained on these video sequence samples, an item vector can be determined for each video, so that every video is represented by a vector. Because a user's real-time interest is time-dependent, the user vector of each user is computed from the videos the user watched, ordered from most recent to least recent, together with the item vectors of those videos, so that the user vector reflects the user's real-time interest. A ranked list of videos of interest to each user can then be determined from the similarity between each video's item vector and the user's user vector. Videos recommended in this order better match the user's preferences and real-time interest, which improves recommendation accuracy and user experience.
Description
Technical Field
The present invention relates to the field of video technologies, and in particular, to a video recommendation method, an apparatus thereof, an information processing device, and a storage medium.
Background
With the continuous development of internet technology, online videos have become increasingly abundant. Users are no longer limited to watching videos on television or constrained by broadcast schedules; they can search the internet for videos that interest them. In addition, online video platforms can recommend videos to users, making selection more convenient.
At present, videos are generally recommended to users by collaborative filtering. This approach depends on users' historical viewing behavior and must process a large amount of historical behavior data, so it is inefficient. Moreover, a user's viewing interest may change over time, but current recommendation methods do not take the user's real-time interest into account, so recommendation accuracy still needs improvement.
Disclosure of Invention
The invention provides a video recommendation method, a device thereof, an information processing device, and a storage medium, which are used to improve the accuracy of video recommendation.
In a first aspect, the present invention provides a video recommendation method, including:
determining, according to the historical behavior data of all users in a database, a plurality of video sequence samples and, for each user, a video sequence of the videos the user watched, ordered from most recent to least recent, wherein each video in a video sequence sample is correlated with its adjacent videos;
training a set neural network model with the video sequence samples to determine an item vector for each video;
determining a user vector for each user according to the order of the videos in the user's video sequence and the item vectors of those videos;
and determining, in turn, the similarity between each video's item vector and each user's user vector, and recommending videos to each user in descending order of similarity to that user's user vector.
In one implementation of the above method provided by the invention, the neural network model is an item2vec model;
the training of the set neural network model by adopting the video sequence samples to determine the item vector corresponding to each video comprises the following steps:
training the item2vec model by adopting the video sequence sample;
representing the latent space parameters of the trained item2vec model as the item vector of each video;
wherein the number of components in an item vector is smaller than the number of videos in the video sequence samples.
In one implementation of the above method provided by the present invention, the user vector is determined using the following formula:

    u = Σ_{j=1}^{n} w_j · i_j

where u denotes the user vector, i_j denotes the item vector of the jth video in the user's video sequence, and w_j denotes the time decay weight of the jth video.
In one implementation of the above method, the time decay weight is determined using the following formula:

    w_j = α^order, 0 < α < 1

where w_j denotes the time decay weight, α denotes the time decay coefficient, and order denotes the position of the video in said video sequence.
In one implementation of the above method provided by the present invention, the similarity between the item vector and the user vector is determined using the following formula:

    sim(i_j, u_j) = (i_j · u_j) / (|i_j| · |u_j|)

where sim(i_j, u_j) denotes the similarity between the item vector and the user vector, i_j denotes the item vector, and u_j denotes the user vector.
In a second aspect, the present invention provides a video recommendation apparatus, including:
a video sequence determining unit, configured to determine, according to the historical behavior data of all users in the database, a plurality of video sequence samples and, for each user, a video sequence of the videos the user watched, ordered from most recent to least recent, wherein each video in a video sequence sample is correlated with its adjacent videos;
an item vector determining unit, configured to train a set neural network model with the video sequence samples and determine an item vector for each video;
a user vector determining unit, configured to determine a user vector for each user according to the order of the videos in the user's video sequence and the item vectors of those videos;
and a recommending unit, configured to determine, in turn, the similarity between each video's item vector and each user's user vector, and to recommend videos to each user in descending order of similarity to that user's user vector.
In one implementation of the above apparatus provided by the present invention, the item vector determining unit is specifically configured to train the item2vec model with the video sequence samples and to represent the latent space parameters of the trained item2vec model as the item vector of each video;
wherein the number of components in an item vector is smaller than the number of videos in the video sequence samples.
In one implementation of the above device provided by the present invention, the user vector determining unit determines the user vector using the following formula:

    u = Σ_{j=1}^{n} w_j · i_j

where u denotes the user vector, i_j denotes the item vector of the jth video, and w_j denotes the time decay weight of the jth video;
wherein the time decay weight is determined using the following formula:

    w_j = α^order, 0 < α < 1

where α is the time decay coefficient and order is the position of the video in the video sequence.
In one implementation of the above apparatus provided by the present invention, the recommending unit determines the similarity between the item vector and the user vector using the following formula:

    sim(i_j, u_j) = (i_j · u_j) / (|i_j| · |u_j|)

where sim(i_j, u_j) denotes the similarity between the item vector and the user vector, i_j denotes the item vector, and u_j denotes the user vector.
In a third aspect, the present invention provides an information processing apparatus comprising:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and, according to the obtained program, execute: determining, according to the historical behavior data of all users in a database, a plurality of video sequence samples and, for each user, a video sequence of the videos the user watched, ordered from most recent to least recent; training a set neural network model with the video sequence samples to determine an item vector for each video; determining a user vector for each user according to the user's video sequence and the item vectors of the videos; and determining, in turn, the similarity between each video's item vector and each user's user vector, and recommending videos to each user in descending order of similarity to that user's user vector;
wherein there is a correlation between each video in the video sequence sample and the video adjacent to the video.
In a fourth aspect, the present invention provides a computer-readable non-volatile storage medium having computer-executable instructions stored thereon for causing a computer to perform any of the video recommendation methods described above.
The video recommendation method, device, information processing device, and storage medium provided by the invention determine, from the historical behavior data of all users in a database, a plurality of video sequence samples and, for each user, a video sequence of the videos the user watched, ordered from most recent to least recent; train a set neural network model with the video sequence samples to determine an item vector for each video; determine a user vector for each user from the order of the videos in the user's video sequence and the item vectors of those videos; and determine, in turn, the similarity between each video's item vector and each user's user vector, recommending videos to each user in descending order of similarity. A plurality of sequentially correlated video sequences are extracted from the historical behavior data of all users as samples for training the neural network model. After training, an item vector can be determined for each video, so that every video is represented by a vector. The user vector is related to the item vectors and reflects which videos each user tends to watch. Considering that a user's real-time interest is time-dependent, and that the user's current interest may be far more strongly related to recently watched videos than to videos watched long ago, the user vector of each user is determined from the videos the user watched, ordered from most recent to least recent, together with their item vectors; the user vector therefore reflects both the user's viewing tendencies and the user's real-time interest.
Accordingly, a ranked list of videos of interest to each user can be determined from the similarity between each video's item vector and the user's user vector. Videos recommended in this order better match the user's preferences and real-time interest, improving recommendation accuracy and user experience.
Drawings
Fig. 1 is a flowchart of a video recommendation method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a video recommendation apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A video recommendation method, an apparatus thereof, an information processing device, and a storage medium according to embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In a first aspect of the embodiments of the present invention, a video recommendation method is provided, as shown in fig. 1, the video recommendation method provided in an embodiment of the present invention includes:
s101, determining a plurality of video sequence samples and video sequences of videos watched by each user according to a time sequence from near to far according to historical behavior data of all users in a database;
s102, training a set neural network model by adopting a video sequence sample, and determining an article vector corresponding to each video;
s103, determining a user vector corresponding to each user according to the arrangement sequence of each video in the video sequence of each user and the article vector corresponding to each video;
and S104, sequentially determining the similarity between the item vector corresponding to each video and the user vector corresponding to each user, and recommending similar videos to the user according to the sequence from high to low of the similarity of the user vectors corresponding to the user.
Here, each video in a video sequence sample is correlated with its adjacent videos. The historical behavior data of users watching videos reflects the correlation between videos: when a user watches a video, the real-time interest generated at that moment is related to the video being watched, so the user is more inclined to look for videos similar to the current one in type, actors, director, and so on. Based on this characteristic of viewing behavior, the embodiment of the invention extracts, from the historical behavior data of all users, a plurality of sequentially correlated video sequences as samples for training the neural network model. After the neural network model is trained on these samples, an item vector can be determined for each video, so that every video is represented by a vector. Considering that a user's real-time interest is time-dependent, and that the user's current interest is far more strongly related to recently watched videos than to videos watched long ago, the user vector of each user is determined from the user's video sequence, ordered from most recent to least recent, and the item vectors of the videos in that sequence. The user vector therefore reflects both which videos the user tends to watch and the user's real-time interest.
Accordingly, a ranked list of videos of interest to each user can be determined from the similarity between each video's item vector and the user's user vector. Videos recommended in this order better match the user's preferences and real-time interest, improving recommendation accuracy and user experience.
In particular, in an embodiment of the present invention the neural network model may be an item2vec model. Recommending videos with a collaborative filtering model, as in the prior art, has two problems: first, the numbers of users and videos are both very large, so matrix factorization is computationally inefficient; second, the videos users watch follow a long-tailed distribution, which a collaborative filtering model cannot represent well. The item vectors determined by the item2vec model adopted in the embodiment of the invention represent long-tail items better, and the model's computational efficiency is superior to the prior art on the same amount of data.
Where the neural network model is an item2vec model, training the set neural network model with the video sequence samples in step S102 to determine the item vector of each video specifically includes:
training the item2vec model with the video sequence samples;
representing the latent space parameters of the trained item2vec model as the item vector of each video;
wherein the number of components in an item vector is smaller than the number of videos in the video sequence samples.
A plurality of video sequence samples, including both positive and negative samples, are determined from the historical behavior data of all users. Positive and negative samples are encoded as vectors of equal length; in a positive sample, each video is correlated with its adjacent videos, while in a negative sample adjacent videos are uncorrelated. Any video sequence consistent with a user's historical behavior can yield a positive sample, and sequences inconsistent with users' historical behavior yield negative samples. In general, negative samples can be generated randomly, and their number may be much larger than the number of positive samples. After the positive and negative samples are determined, the item2vec model is trained on them. The item2vec model comprises a CBOW variant and a Skip-gram variant; either may be used in practice, which is not limited herein. For example, the Skip-gram model may be trained on the positive and negative samples using negative sampling, which improves both training speed and model quality. The number of latent space parameters and the length of the prediction window are set in advance; their specific values can be chosen from experience and actual conditions, which is not specifically limited herein.
However, the number of latent space parameters is generally far smaller than the length of a video sequence sample, so representing the resulting latent space parameters as the item vector of each video reduces the dimensionality of the vectors representing the videos, which greatly reduces the amount of computation and improves efficiency.
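The training procedure described above, Skip-gram with negative sampling over sequences of video IDs, can be sketched in plain NumPy. This is only an illustrative sketch, not the patent's implementation: the function name, the hyperparameter defaults, the uniform random negative sampling, and the plain SGD updates are all assumptions made for the example.

```python
import numpy as np

def train_item2vec(sequences, n_videos, k=16, window=2, negatives=5,
                   lr=0.025, epochs=30, seed=0):
    """Minimal Skip-gram-with-negative-sampling sketch: learns a k-dimensional
    item vector per video from watch sequences, with k << n_videos."""
    rng = np.random.default_rng(seed)
    W_in = (rng.random((n_videos, k)) - 0.5) / k   # item (input) vectors
    W_out = np.zeros((n_videos, k))                # context (output) vectors

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for _ in range(epochs):
        for seq in sequences:
            for pos, center in enumerate(seq):
                lo, hi = max(0, pos - window), min(len(seq), pos + window + 1)
                for ctx in seq[lo:pos] + seq[pos + 1:hi]:
                    # One positive context plus `negatives` random negatives
                    # (a real trainer would exclude center/ctx collisions).
                    targets = [ctx] + list(rng.integers(0, n_videos, negatives))
                    labels = np.array([1.0] + [0.0] * negatives)
                    v = W_in[center]                 # (k,)
                    u = W_out[targets]               # (negatives + 1, k)
                    g = (sigmoid(u @ v) - labels) * lr
                    W_in[center] -= g @ u            # update the item vector
                    W_out[targets] -= np.outer(g, v) # update context vectors
    return W_in  # row i is the item vector of video i
```

Each video ID then maps to a row of the returned matrix, a vector of length k as described above.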
After the number of latent space parameters is determined and a prediction window of length 2r+1 is chosen, the positive and negative samples are fed into the model in turn to train it, so that when the trained model is given the vector of one video from a positive sample, it can output the r videos before and the r videos after that video in the sample. The latent space parameters of the trained model can then be read out and represented as vectors, giving the item vector of each video. The components of each video's item vector are a contiguous block of the latent space parameters, and different videos correspond to different contiguous blocks. For example, the item vector of a video may be represented as:

    v = (w_1, w_2, w_3, …, w_k)

where v denotes the item vector of the video and w_1, w_2, w_3, …, w_k are k consecutive floating-point values taken from the latent space parameters. Each video in the database can then be represented by a vector of length k. If the total number of videos is n, the corresponding parameter matrix of the Skip-gram model has n·k entries; in practical applications the specific size may be set according to actual needs, which is not limited herein.
Furthermore, after the item vectors of the videos are obtained, each user's history of watched videos must also be represented as a vector. In the embodiment of the present invention, taking the user's real-time interest into account, the viewing history of each user is first obtained from the historical behavior data of all users, and the videos watched by each user are arranged into a video sequence ordered from most recent to least recent, forming a historical behavior table that may be represented as:

    S = {u_1: seq_1, u_2: seq_2, …, u_m: seq_m}

where u_i denotes user i, the length m of S is the number of users, and seq_m denotes the video sequence of the mth user. Each sequence is ordered so that videos ranked earlier have interaction times closer to the present.
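Building the historical behavior table S can be sketched as follows; the (user_id, video_id, timestamp) event format and the function name are assumptions made for illustration.

```python
from collections import defaultdict

def build_behavior_table(events):
    """Group (user_id, video_id, timestamp) watch events into per-user video
    sequences ordered from most recent to least recent, as in the table S."""
    by_user = defaultdict(list)
    for user_id, video_id, ts in events:
        by_user[user_id].append((ts, video_id))
    # Sort each user's events by timestamp, newest first, and keep video IDs.
    return {u: [v for _, v in sorted(evs, reverse=True)]
            for u, evs in by_user.items()}
```

For example, a user who watched v_a at time 10, v_c at 20, and v_b at 30 gets the sequence [v_b, v_c, v_a].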
In a specific implementation, according to the order of the videos in the video sequence and the item vector of each video, the user vector may be determined using the following formula:

    u = Σ_{j=1}^{n} w_j · i_j

where u denotes the user vector, i_j denotes the item vector of the jth video in the sequence, and w_j denotes the time decay weight of the jth video.
Since a user's real-time interest may be short-lived, the videos the user interacted with closest to the current time influence the user's next interaction most. To reasonably embody this characteristic in the user vector, the embodiment of the invention represents the user vector as a time-decayed linear weighting of item vectors.
In another embodiment, the average of the item vectors of the videos in the user's video sequence may be used as the user vector.
For example, for the video sequence of the mth user, the user vector of that user may be represented by the following relation:

    u = (1/n) Σ_{j=1}^{n} i_j

where u denotes the user vector, i_j denotes the item vector of the jth video, and n is the number of videos in the sequence.
In practical applications, either of the above manners may be used to determine a user's user vector: the time-decayed linear weighting of item vectors fully reflects the user's real-time interest, while the average vector is simpler. Other representations capable of reflecting the user's real-time interest may also be used, which is not limited herein.
Further, the time decay weight may be determined using the following formula:

    w_j = α^order, 0 < α < 1

where w_j denotes the time decay weight, α denotes the time decay coefficient, and order denotes the position of the video in the video sequence. It can be seen that a video ranked earlier in the sequence has a larger time decay weight w_j, so its item vector has a greater influence on the user vector. This representation fully reflects the user's real-time interest and can improve the accuracy of video recommendation.
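The time-decayed user vector can be sketched as follows. The exponential form alpha**order is one assumed concrete choice of decay (the text fixes only that the weight depends on the decay coefficient and the position in the sequence), and the averaging variant from the alternative embodiment is included for comparison; the function names are illustrative.

```python
import numpy as np

def user_vector(item_vectors, alpha=0.9):
    """Time-decayed user vector. item_vectors[0] is the most recently watched
    video; weight w_j = alpha**order (order = 1, 2, ...) shrinks with position,
    so recent videos dominate. The exponential decay is an assumed form."""
    weights = np.array([alpha ** (j + 1) for j in range(len(item_vectors))])
    return weights @ np.asarray(item_vectors)

def user_vector_avg(item_vectors):
    """Alternative embodiment: plain average of the item vectors."""
    return np.mean(np.asarray(item_vectors), axis=0)
```

With alpha = 0.5 and two item vectors, the most recent video gets weight 0.5 and the next one 0.25, so the decayed vector leans toward the latest interaction while the average treats both equally.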
After the item vector of each video and the user vector of each user are obtained, the similarity between each user's user vector and each video's item vector is computed in turn, yielding the user's degree of interest in each video. Recommending videos in descending order of similarity better matches the user's expectations and improves user experience.
In a specific implementation, cosine similarity may be used to compute the similarity between an item vector and a user vector, as in the following formula:

    sim(i_j, u_j) = (i_j · u_j) / (|i_j| · |u_j|)

where sim(i_j, u_j) denotes the similarity between the item vector and the user vector, i_j denotes the item vector, and u_j denotes the user vector. The numerator is the dot product of the item vector and the user vector; the denominator is the product of the moduli of the two vectors.
In this way, a matching result between each user and each video in the database can be computed. The higher the similarity between the user vector and an item vector, the better the video matches the user's viewing tendency and current real-time interest, so recommending videos in descending order of similarity improves user experience. In addition, the number of paid videos in the recommendation list can be adjusted according to the proportion of paid videos among the videos the user has interacted with, improving the product's payment conversion rate. When the user is a new user, there is no historical behavior data for that user in the database. The attribute information of the new user may then be compared with that of existing users, for example attributes such as region and gender; the recommendation results of an existing user with a high matching degree may be recommended to the new user, and the proportion of paid videos may be set to 50%.
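Ranking videos for a user by cosine similarity between the user vector and the item vectors can be sketched as follows; the function name and the top-N interface are assumptions for illustration.

```python
import numpy as np

def recommend(user_vec, item_matrix, video_ids, top_n=3):
    """Rank videos by cosine similarity between the user vector and each
    item vector, returning the top_n video IDs, most similar first."""
    u = np.asarray(user_vec, dtype=float)
    M = np.asarray(item_matrix, dtype=float)     # one row per video
    sims = (M @ u) / (np.linalg.norm(M, axis=1) * np.linalg.norm(u))
    order = np.argsort(-sims)                    # indices, descending similarity
    return [video_ids[i] for i in order[:top_n]]
```

For a user vector (1, 0), a video with item vector (1, 0) ranks above one with (1, 1), which in turn ranks above one with (0, 1).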
In a second aspect of the embodiment of the present invention, there is provided a video recommendation apparatus, as shown in fig. 2, the video recommendation apparatus provided in the embodiment of the present invention includes:
a video sequence determining unit 21, configured to determine, according to the historical behavior data of all users in the database, a plurality of video sequence samples and, for each user, a video sequence of the videos the user watched, ordered from most recent to least recent, wherein each video in a video sequence sample is correlated with its adjacent videos;
an item vector determining unit 22, configured to train the set neural network model with the video sequence samples and determine an item vector for each video;
a user vector determining unit 23, configured to determine a user vector for each user according to the order of the videos in the user's video sequence and the item vectors of those videos;
and a recommending unit 24, configured to determine, in turn, the similarity between each video's item vector and each user's user vector, and to recommend videos to each user in descending order of similarity to that user's user vector.
Optionally, the item vector determining unit 22 is specifically configured to train the item2vec model with the video sequence samples and to represent the latent space parameters of the trained item2vec model as the item vector of each video;
wherein the number of components in an item vector is smaller than the number of videos in the video sequence samples.
Optionally, the user vector determining unit 23 determines the user vector using the following formula:

    u = Σ_{j=1}^{n} w_j · i_j

where u denotes the user vector, i_j denotes the item vector of the jth video, and w_j denotes the time decay weight of the jth video;
wherein the time decay weight is determined using the following formula:

    w_j = α^order, 0 < α < 1

where α is the time decay coefficient and order is the position of the video in the video sequence.
Optionally, the recommending unit 24 determines the similarity between the item vector and the user vector using the following formula:

    sim(i_j, u_j) = (i_j · u_j) / (|i_j| · |u_j|)

where sim(i_j, u_j) denotes the similarity between the item vector and the user vector, i_j denotes the item vector, and u_j denotes the user vector.
The video recommendation device provided by the embodiment of the invention extracts, from the historical behavior data of all users, a plurality of sequentially correlated video sequences as samples for training a neural network model. After the neural network model is trained on these samples, an item vector can be determined for each video, so that every video is represented by a vector. The user vector is related to the item vectors and reflects which videos each user tends to watch. Considering that a user's real-time interest is time-dependent, and that the user's current interest may be far more strongly related to recently watched videos than to videos watched long ago, the user vector of each user is determined from the videos the user watched, ordered from most recent to least recent, together with their item vectors; the user vector therefore reflects both the user's viewing tendencies and real-time interest. Accordingly, a ranked list of videos of interest to each user can be determined from the similarity between each video's item vector and the user's user vector. Videos recommended in this order better match the user's preferences and real-time interest, improving recommendation accuracy and user experience.
In a third aspect of the embodiments of the present invention, an information processing apparatus is provided which, as shown in fig. 3, includes:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing, according to the obtained program: determining, according to historical behavior data of all users in a database, a plurality of video sequence samples and a video sequence of the videos watched by each user, ordered in time from near to far; training a set neural network model with the video sequence samples, and determining an item vector corresponding to each video; determining a user vector corresponding to each user according to the video sequence of each user and the item vector of each video; sequentially determining the similarity between the item vector corresponding to each video and the user vector corresponding to each user, and recommending similar videos to each user in descending order of similarity to that user's user vector;
wherein there is a correlation between each video in the sample of the video sequence and the neighboring video of the video.
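The four processor steps can be sketched end to end. The item vectors below are hand-made stand-ins for the output of a trained item2vec model, and the exponential decay weight is an assumption, since the patent's exact formula is not reproduced in this text:

```python
import math

# Stand-in item vectors; in the described system these would be the
# hidden-layer parameters of a trained item2vec model.
ITEM_VECTORS = {
    "v1": [0.9, 0.1], "v2": [0.8, 0.2],
    "v3": [0.1, 0.9], "v4": [0.2, 0.8],
}

def user_vector(watch_history, alpha=0.5):
    """Weighted sum of item vectors; most recent video first (order 0)."""
    dim = len(next(iter(ITEM_VECTORS.values())))
    u = [0.0] * dim
    for order, vid in enumerate(watch_history):
        w = alpha ** order  # assumed time-decay weight
        for k, x in enumerate(ITEM_VECTORS[vid]):
            u[k] += w * x
    return u

def recommend(watch_history, top_n=2):
    """Rank unseen videos by cosine similarity to the user vector."""
    u = user_vector(watch_history)
    def cos(v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    candidates = [v for v in ITEM_VECTORS if v not in watch_history]
    return sorted(candidates, key=lambda v: cos(ITEM_VECTORS[v]),
                  reverse=True)[:top_n]

print(recommend(["v1"]))  # ['v2', 'v4']
```

A user who recently watched v1 is recommended v2 first, since their item vectors point in nearly the same direction, which is the descending-similarity ordering the apparatus describes.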
In a fourth aspect of the embodiments of the present invention, a computer-readable non-volatile storage medium is provided, in which computer-executable instructions are stored, and the computer-executable instructions are used for causing a computer to execute any one of the video recommendation methods described above.
The video recommendation method, device, information processing apparatus and storage medium provided by the invention determine, according to historical behavior data of all users in a database, a plurality of video sequence samples and a video sequence of the videos watched by each user, ordered in time from near to far; train a set neural network model with the video sequence samples and determine an item vector corresponding to each video; determine a user vector corresponding to each user according to the arrangement order of the videos in that user's video sequence and the item vector corresponding to each video; and sequentially determine the similarity between the item vector corresponding to each video and the user vector corresponding to each user, recommending similar videos to each user in descending order of similarity. A plurality of video sequences with front-and-back relevance are determined from the historical behavior data of all users as samples for training the neural network model. After the model is trained with such samples, an item vector can be determined for each video, so that each video is represented by a vector. The user vector is related to the item vectors and indicates which videos each user tends to watch. Considering that a user's real-time interest is time-dependent, the relevance between the user's current interest and recently watched videos may be far greater than that of videos watched earlier; the user vector is therefore determined from the videos the user watched, ordered in time from near to far, together with their item vectors, and thus reflects both which videos the user tends to watch and the user's real-time interest.
Therefore, a sequence of videos each user is interested in can be determined according to the degree of similarity between the item vectors of the videos and the user vector of the user; videos recommended in this order better match the user's preferences and real-time interest, improving the accuracy of video recommendation and the user experience.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (11)
1. A method for video recommendation, comprising:
determining, according to historical behavior data of all users in a database, a plurality of video sequence samples and a video sequence of the videos watched by each user, ordered in time from near to far; wherein each video in the video sequence samples has correlation with its adjacent videos;
training an item2vec model by using the video sequence samples, and determining an item vector corresponding to each video;
determining a user vector corresponding to each user according to the arrangement sequence of each video in the video sequence of each user and the item vector corresponding to each video;
and sequentially determining the similarity between the item vector corresponding to each video and the user vector corresponding to each user, and recommending similar videos to each user in descending order of similarity to that user's user vector.
2. The method of claim 1, wherein the training an item2vec model using the video sequence samples to determine an item vector corresponding to each of the videos comprises:
training the item2vec model by using the video sequence sample;
representing the implicit space parameters of the item2vec model after training as item vectors corresponding to the videos;
wherein the number of components contained by the item vector is less than the number of videos contained by the video sequence samples.
3. The method of claim 1, wherein the user vector is determined using the following formula:
4. The method of claim 3, wherein the time decay weight is determined using the following equation:
wherein w_j represents the time attenuation weight, α represents the time attenuation coefficient, order represents the position of the video in the video sequence, and m represents the number of videos in the video sequence.
6. A video recommendation apparatus, comprising:
the video sequence determining unit is used for determining, according to historical behavior data of all users in the database, a plurality of video sequence samples and a video sequence of the videos watched by each user, ordered in time from near to far; wherein each video in the video sequence samples has correlation with its adjacent videos;
the item vector determining unit is used for training an item2vec model by adopting the video sequence samples and determining an item vector corresponding to each video;
the user vector determining unit is used for determining a user vector corresponding to each user according to the arrangement sequence of each video in the video sequence of each user and the item vector corresponding to each video;
and the recommending unit is used for sequentially determining the similarity between the item vector corresponding to each video and the user vector corresponding to each user, and recommending similar videos to each user in descending order of similarity to that user's user vector.
7. The apparatus according to claim 6, wherein the item vector determination unit, in particular for training the item2vec model with the video sequence samples; representing the implicit space parameters of the trained item2vec model as item vectors corresponding to the videos;
wherein the number of components contained by the item vector is less than the number of videos contained by the video sequence samples.
8. The apparatus of claim 6, wherein the user vector determination unit determines the user vector using the following equation:
wherein u_j represents the user vector, i_j represents the item vector of the jth video, w_j represents the time attenuation weight of the jth video, and m represents the number of videos in the video sequence;
wherein the time decay weight is determined using the following formula:
wherein α denotes the time attenuation coefficient, order denotes the position of the video in the video sequence, and m denotes the number of videos in the video sequence.
9. The apparatus of claim 6, wherein the recommendation unit determines the similarity of the item vector to the user vector using the following formula:
10. An information processing apparatus characterized by comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory, and executing according to the obtained program: determining a plurality of video sequence samples and video sequences of videos watched by each user according to the time sequence from near to far according to historical behavior data of all users in a database; training an item2vec model by adopting the video sequence samples, and determining an article vector corresponding to each video; determining a user vector corresponding to each user according to the video sequence of each user and the item vector of each video; sequentially determining the similarity between the item vector corresponding to each video and the user vector corresponding to each user, and recommending similar videos to the user according to the sequence from high similarity to low similarity of the user vectors corresponding to the users;
wherein there is a correlation between each video in the video sequence sample and the video adjacent to the video.
11. A computer-readable non-volatile storage medium having computer-executable instructions stored thereon for causing a computer to perform the video recommendation method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910014819.6A CN109558514B (en) | 2019-01-08 | 2019-01-08 | Video recommendation method, device thereof, information processing equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109558514A CN109558514A (en) | 2019-04-02 |
CN109558514B true CN109558514B (en) | 2023-04-11 |
Family
ID=65872545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910014819.6A Active CN109558514B (en) | 2019-01-08 | 2019-01-08 | Video recommendation method, device thereof, information processing equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109558514B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111782925A (en) * | 2019-04-04 | 2020-10-16 | 阿里巴巴集团控股有限公司 | Item recommendation method, device, equipment, system and readable storage medium |
CN112000872A (en) * | 2019-05-27 | 2020-11-27 | 北京地平线机器人技术研发有限公司 | Recommendation method based on user vector, training method of model and training device |
CN110971973A (en) * | 2019-12-03 | 2020-04-07 | 北京奇艺世纪科技有限公司 | Video pushing method and device and electronic equipment |
CN111125428B (en) * | 2019-12-17 | 2021-11-05 | 东北大学 | Time-dependent movie recommendation method based on score prediction function fitting structure |
CN111813992A (en) * | 2020-07-14 | 2020-10-23 | 四川长虹电器股份有限公司 | Sorting system and method for movie recommendation candidate set |
CN112528071A (en) * | 2020-10-30 | 2021-03-19 | 百果园技术(新加坡)有限公司 | Video data sorting method and device, computer equipment and storage medium |
CN113065067A (en) * | 2021-03-31 | 2021-07-02 | 达而观信息科技(上海)有限公司 | Article recommendation method and device, computer equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9471314B1 (en) * | 2015-12-15 | 2016-10-18 | International Business Machines Corporation | Auxiliary perceptron branch predictor with magnitude usage limit |
CN106227793A (en) * | 2016-07-20 | 2016-12-14 | 合网络技术(北京)有限公司 | A kind of video and the determination method and device of Video Key word degree of association |
TW201713131A (en) * | 2015-06-11 | 2017-04-01 | Sony Corp | Control device, method, and computer program |
CN108304440A (en) * | 2017-11-01 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Method, apparatus, computer equipment and the storage medium of game push |
CN108830543A (en) * | 2018-04-24 | 2018-11-16 | 平安科技(深圳)有限公司 | A kind of trip based reminding method and terminal device |
CN109104620A (en) * | 2018-07-26 | 2018-12-28 | 腾讯科技(深圳)有限公司 | A kind of short video recommendation method, device and readable medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9535897B2 (en) * | 2013-12-20 | 2017-01-03 | Google Inc. | Content recommendation system using a neural network language model |
- 2019-01-08: application CN201910014819.6A filed in China; granted as CN109558514B (status: Active)
Non-Patent Citations (3)
Title |
---|
Ran Duan et al.; "Recoverable recommended keypoint-aware visual tracking using coupled-layer appearance modelling"; 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems; 2016-12-01; pp. 4085-4091 * |
Liu Hengyou; "Research on Recommendation Algorithms Based on Time Effects" (基于时间效应的推荐算法研究); China Masters' Theses Full-text Database, Information Science and Technology; 2015-02-15; No. 2 (2015); I138-1524 * |
Gao Rui; "Research on a Personalized Video Recommendation System Based on Deep Neural Networks" (基于深度神经网络的视频个性化推荐系统研究); China Masters' Theses Full-text Database, Information Science and Technology; 2017-07-15; No. 7 (2017); I140-75 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||