CN111918104A - Video data recall method and device, computer equipment and storage medium - Google Patents

Video data recall method and device, computer equipment and storage medium

Info

Publication number
CN111918104A
Authority
CN
China
Prior art keywords
user
video data
sequence
video
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010747164.6A
Other languages
Chinese (zh)
Inventor
王友
朱众志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Peninsula Beijing Information Technology Co ltd
Original Assignee
You Peninsula Beijing Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by You Peninsula Beijing Information Technology Co ltd filed Critical You Peninsula Beijing Information Technology Co ltd
Priority to CN202010747164.6A priority Critical patent/CN111918104A/en
Publication of CN111918104A publication Critical patent/CN111918104A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667 Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • H04N21/4668 Learning process for intelligent management for recommending content, e.g. movies

Abstract

The embodiment of the invention provides a video data recall method, a video data recall device, computer equipment and a storage medium. The method comprises the following steps: establishing a target sequence with first users and video data as nodes, and with the correlations between first users and video data, between first users, and between video data as edges; selecting a portion of the nodes along the edges in the target sequence to generate a user sequence and/or a video sequence, the user sequence comprising a plurality of correlated first users and the video sequence comprising a plurality of correlated video data; receiving a request from a second user; and, in response to the request, recalling video data adapted to the second user with the user sequence and/or the video sequence as a recall path. A mixed graph is thereby composed from multiple correlation dimensions, which improves the density of the graph, enriches the content of the user sequence and/or the video sequence, and greatly increases the quantity of recalled video data.

Description

Video data recall method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of multimedia processing, in particular to a video data recall method and device, computer equipment and a storage medium.
Background
With the rapid development of the network, the amount of video data on the network has increased sharply. Each video website, in providing services to users, screens valuable video data from this massive pool and pushes it to users.
In this process, recall is generally performed first, followed by ranking: recall finds video data the user may like as candidates within a huge data pool; ranking sorts the recalled video data and selects the best n items to push to the user.
Common recall approaches include Collaborative Filtering (CF), which gathers user preferences to find similar users (i.e., UserCF) or similar items (i.e., ItemCF).
The collaborative filtering algorithm relies on each user's historical information to push video data. When a user has operated on only a small part of the available video data in the database, that user's historical information is sparse, so the recalled video data is insufficient and the pushed video data fits the user poorly. For large-scale video data such as short video, the problems of sparsity and insufficient recall are particularly prominent.
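The sparsity problem can be illustrated with a minimal, hypothetical sketch of user-based collaborative filtering (UserCF); the user IDs, video IDs, and browsing histories below are invented for illustration. When a user's history overlaps with no one else's, nothing is recalled.

```python
def user_cf_candidates(target, histories, k=2):
    """Minimal UserCF sketch: score other users by the overlap of their
    browsing history with the target user's, then recall the videos
    those similar users browsed that the target has not seen."""
    target_hist = set(histories[target])
    # Rank other users by overlap size; skip users with no overlap.
    scored = sorted(
        ((len(target_hist & set(h)), u)
         for u, h in histories.items()
         if u != target and target_hist & set(h)),
        reverse=True,
    )
    candidates = []
    for _, u in scored[:k]:
        for vid in histories[u]:
            if vid not in target_hist and vid not in candidates:
                candidates.append(vid)
    return candidates

histories = {
    "Uid_1": ["Vid_1", "Vid_2"],
    "Uid_2": ["Vid_1", "Vid_2", "Vid_3"],
    "Uid_3": ["Vid_4"],
}
```

Here `user_cf_candidates("Uid_3", histories)` returns an empty list: Uid_3's sparse history shares nothing with the other users, so UserCF recalls no candidates at all — the failure mode the graph-based method below addresses.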
Disclosure of Invention
The embodiment of the invention provides a video data recall method and device, computer equipment and a storage medium, aiming to solve the problems that, when a user's historical information is sparse, collaborative filtering recalls insufficient video data and the pushed video data is poorly matched to the user.
In a first aspect, an embodiment of the present invention provides a method for recalling video data, including:
establishing a target sequence by taking a first user and video data as nodes and taking the correlation between the first user and the video data, the correlation between the first users and the correlation between the video data as edges;
selecting a portion of said nodes along said edges in said target sequence, thereby generating a user sequence and/or a video sequence, said user sequence comprising a plurality of said first users having a correlation, said video sequence comprising a plurality of said video data having a correlation;
receiving a request from a second user;
recalling video data adapted to the second user in response to the request, with the user sequence and/or the video sequence as a path for recall.
In a second aspect, an embodiment of the present invention further provides an apparatus for recalling video data, including:
the target sequence establishing module is used for establishing a target sequence by taking a first user and video data as nodes and taking the correlation between the first user and the video data, the correlation between the first users and the correlation between the video data as edges;
a node selection module for selecting a portion of the nodes along the edges in the target sequence, thereby generating a user sequence and/or a video sequence, the user sequence including a plurality of the first users having a correlation, the video sequence including a plurality of the video data having a correlation;
a request receiving module, configured to receive a request from a second user;
and the video data recalling module is used for recalling the video data adapted to the second user by taking the user sequence and/or the video sequence as a recalling path in response to the request.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of recalling video data as described in the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for recalling video data according to the first aspect.
In this embodiment, a target sequence is established with first users and video data as nodes and with the correlations between first users and video data, between first users, and between video data as edges; a portion of the nodes is selected along the edges in the target sequence to generate a user sequence and/or a video sequence, the user sequence including a plurality of correlated first users and the video sequence including a plurality of correlated video data; a request from a second user is received; and, in response to the request, video data adapted to the second user is recalled with the user sequence and/or the video sequence as a recall path. Because the graph is composed from multiple correlation dimensions, first users and video data are interconnected, which improves the density of the graph, enriches the content of the user sequence and/or the video sequence, and greatly increases the quantity of video data recalled based on those sequences.
Drawings
Fig. 1 is a flowchart of a method for recalling video data according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of a target sequence according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for recalling video data according to a second embodiment of the present invention;
fig. 4A is an exemplary diagram of a user behavior sequence according to a second embodiment of the present invention;
fig. 4B is an exemplary diagram of a video co-occurrence sequence according to a second embodiment of the present invention;
FIG. 4C is a diagram illustrating an example of a user's social sequence according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of a video data recall apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for recalling video data according to an embodiment of the present invention. This embodiment is applicable to cases where the information and structure of a graph over video data are enriched in order to recall video data. The method may be executed by a video data recall apparatus, which may be implemented in software and/or hardware and configured in a computer device such as a server or a personal computer, and specifically includes the following steps:
step 101, a target sequence is established by taking a first user and video data as nodes, and taking the correlation between the first user and the video data, the correlation between the first users and the correlation between the video data as edges.
In the video website, a large amount of video data is stored, the form of the video data may include short videos, live programs, dramas, movies, animations, and the like, and the video data may be created and uploaded by a user, or may be uploaded by a technician, which is not limited in this embodiment.
In addition, the video data may be used as a data pool, and may be sent to the user after being searched and hit by the user with the keyword, or pushed to the user after being recalled, which is not limited in this embodiment.
In this embodiment, the correlations between first users and video data, between first users, and between video data are mined by traversal, capturing the degree of closeness among them.
In one aspect, video data is provided for viewing by a first user, and the first user may perform various actions on the video data, such that a correlation is generated between the video data and the first user.
On the other hand, in the interaction system provided by the video website, first users can interact with one another, and their behaviors may be the same or similar; the correlation between first users can be mined through social relationships, user co-occurrence, Deep Neural Networks (DNN), and other means.
In yet another aspect, video data may be the same or similar in dimensions such as attributes (e.g., producer, country or region of the producer, language), content, and the behaviors of first users; the correlation between video data can be mined by means of video co-occurrence, Item2vec, FastText, DNN, and the like.
In the computer device, a first user may be a registered or non-registered user and may be represented by a user identifier such as a user ID, a user account, or an International Mobile Equipment Identity (IMEI). In general, a first user is a user browsing video data; in some cases, a first user may also be a user creating and uploading video data, at which time the first user is also called a shooting guest, UP master (uploader), etc. That is, a first user may browse video data in addition to creating and uploading it.
In the present embodiment, as shown in fig. 2, a graph is generated based on the above-described correlation, and the graph is represented as a target sequence in which the first user (Uid) and the video data (Vid) are nodes, and the correlation between the first user and the video data, the correlation between the first users, and the correlation between the video data are edges.
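The target sequence of Fig. 2 can be sketched as a simple adjacency structure; the following Python is an illustrative assumption about how the three edge types might be merged into one graph, with hypothetical node IDs (`Uid_*`, `Vid_*`) following the figure's notation.

```python
from collections import defaultdict

def build_target_graph(user_video_edges, user_user_edges, video_video_edges):
    """Merge the three correlation types (user-video, user-user,
    video-video) into a single undirected adjacency map, so that the
    graph mixes all correlation dimensions as described above."""
    graph = defaultdict(set)
    for a, b in user_video_edges + user_user_edges + video_video_edges:
        graph[a].add(b)
        graph[b].add(a)  # correlations are treated as symmetric here
    return graph

graph = build_target_graph(
    user_video_edges=[("Uid_1", "Vid_1"), ("Uid_2", "Vid_1")],
    user_user_edges=[("Uid_1", "Uid_2")],
    video_video_edges=[("Vid_1", "Vid_2")],
)
```

In this mixed graph a first user is adjacent both to video data and to other first users, which is exactly what raises the graph's density relative to a pure user-video bipartite graph.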
A portion of the nodes is selected along the edges in the target sequence, thereby generating a user sequence and/or a video sequence, step 102.
In the target sequence, a portion of the correlated nodes is selected according to the correlation relationships to form a new sequence.
If the selected node is a first user, a user sequence may be generated, the user sequence comprising a plurality of first users having a relevance, i.e. in the user sequence the first user is a node and the relevance between the first user and the first user is an edge.
In the user sequence, the correlation between first users may refer to the correlation between the current first user and an adjacent first user in the target sequence, or to the correlation between the current first user and other first users separated from it by one or more video data in the target sequence, which is not limited in this embodiment.
If the selected node is video data, a video sequence may be generated, the video sequence comprising a plurality of video data having a correlation, i.e. in the video sequence, the video data is a node, and the correlation between the video data and the video data is an edge.
In the video sequence, the correlation between video data may refer to the correlation between the current video data and adjacent video data in the target sequence, or to the correlation between the current video data and other video data separated from it by one or more first users in the target sequence, which is not limited in this embodiment.
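One plausible way to "select a portion of the nodes along the edges" is a random walk over the graph, keeping only nodes of one type afterward — so that kept nodes may indeed have been separated by one or more nodes of the other type, as both paragraphs above allow. The patent does not specify the selection algorithm; this walk-and-filter sketch is an assumption, with an invented toy graph.

```python
import random

def walk_and_filter(graph, start, walk_length, prefix):
    """Walk along edges from `start`, then keep only nodes whose ID
    begins with `prefix`: "Uid" yields a user sequence, "Vid" a video
    sequence. Intervening nodes of the other type are dropped."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = sorted(graph.get(walk[-1], ()))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [node for node in walk if node.startswith(prefix)]

graph = {
    "Uid_1": ["Vid_1", "Vid_2"],
    "Uid_2": ["Vid_1"],
    "Vid_1": ["Uid_1", "Uid_2"],
    "Vid_2": ["Uid_1"],
}
user_seq = walk_and_filter(graph, "Uid_1", 6, "Uid")
```

Two first users kept from such a walk may have been linked only through a shared video, which matches the "separated by one or more video data" case above.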
Step 103, receiving a request from a second user.
In this embodiment, the client where the second user is located sends a request to the computer device, and notifies the computer device to push the adapted video data for the second user.
In some cases, the request is a dependent request, i.e. in addition to notifying the computer device to push adapted video data for the second user, the request may also notify the computer device to perform other business operations.
For example, when the second user opens a certain page in the client, such as the home page or a certain shooting guest's page, the client sends a request to the computer device to load the page and, at the same time, notifies the computer device to push adapted video data for the second user, so that information about the video data (such as a title and a thumbnail) is displayed on the page.
In some cases, the request is a separate request, i.e., the request is used to notify the computer device to push adapted video data for the second user.
In the computer device, the second user may be a registered user or a non-registered user, and may be represented by a user identifier such as a user ID, a user account, and an IMEI.
It should be noted that the second user and the first user may be the same user, that is, the second user (the first user) may be a node in the user sequence.
And step 104, recalling the video data adapted to the second user by taking the user sequence and/or the video sequence as a recalling path in response to the request.
In this embodiment, in response to the request of the second user, video data that the second user may prefer is recalled using the structure of the user sequence and/or the video sequence as a recall path; that is, starting from the second user, video data is recalled via the correlations between first users and/or between video data.
Of course, in addition to the user sequence and/or the video sequence, the video data may be recalled according to different service requirements (e.g., recall high-quality video data, recall video data meeting personalized requirements of a second user, etc.), for example, online recall (recall video data (e.g., live program) made by a first user online), subscription recall (recall video data in a program (e.g., a certain game, a meal, etc.) subscribed by a second user), same-country recall (recall video data made by a first user in the same country as the second user), same-language recall (recall video data in the same language as the second user), and so on, which is not limited in this embodiment.
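As a concrete illustration of step 104, the following sketch recalls candidates by following user sequences that contain the second user and gathering video data browsed by the correlated first users. The patent does not prescribe this exact procedure; the function, its parameters, and the toy data are assumptions for illustration.

```python
def recall_videos(second_user, user_sequences, browse_history, limit=10):
    """Recall sketch: for each user sequence containing the second user,
    gather video data browsed by the correlated first users, skipping
    video data the second user has already seen."""
    seen = set(browse_history.get(second_user, ()))
    candidates = []
    for seq in user_sequences:
        if second_user not in seq:
            continue
        for other in seq:
            for vid in browse_history.get(other, ()):
                if vid not in seen:
                    seen.add(vid)
                    candidates.append(vid)
    return candidates[:limit]

user_sequences = [["Uid_1", "Uid_2"], ["Uid_3"]]
browse_history = {
    "Uid_1": ["Vid_1"],
    "Uid_2": ["Vid_1", "Vid_2"],
    "Uid_3": ["Vid_9"],
}
recalled = recall_videos("Uid_1", user_sequences, browse_history)
```

A video-sequence path would work analogously: starting from video data the second user has browsed, neighbors in the video sequence become candidates.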
In this embodiment, a target sequence is established with first users and video data as nodes and with the correlations between first users and video data, between first users, and between video data as edges; a portion of the nodes is selected along the edges in the target sequence to generate a user sequence and/or a video sequence, the user sequence comprising a plurality of correlated first users and the video sequence comprising a plurality of correlated video data; a request from a second user is received; and, in response to the request, video data adapted to the second user is recalled with the user sequence and/or the video sequence as a recall path. Composing a mixed graph from multiple correlation dimensions improves the density of the graph, enriches the content of the user sequence and/or the video sequence, and greatly increases the quantity of video data recalled based on the user sequence and/or the video sequence.
Example two
Fig. 3 is a flowchart of a video data recall method according to a second embodiment of the present invention. This embodiment, based on the foregoing embodiments, further refines the operations of establishing the target sequence, generating the user sequence and/or the video sequence, and recalling video data by applying Graph Embedding (also referred to as network representation learning). Graph Embedding maps nodes in a network into low-dimensional vectors based on the characteristics of the network, so that the correlation between nodes can be quantitatively measured, which is more convenient in application. The method specifically comprises the following steps:
step 301, establishing a user behavior sequence by using the first user and the video data as nodes and the behavior of the first user for browsing the video data as edges.
In this embodiment, a user behavior sequence is generated by traversing the correlation between first users and video data. In the user behavior sequence, first users and video data are nodes and the behavior of a first user browsing video data is an edge; therefore, a node adjacent to a first user is video data, and a node adjacent to video data is a first user.
For example, if the first user Uid _1 browses the video data Vid _1, Vid _2, and Vid _3, and the first user Uid _2 also browses the video data Vid _1, Vid _2, and Vid _3, a user behavior sequence as shown in fig. 4A may be established.
In one manner of establishing a user behavior sequence, a preset active rule may be used as a screening condition to search for first users in an active state. The active rule may be set by technicians according to business conditions, for example, a login behavior within a preset time period, or a frequency of behaviors on video data within the preset time period exceeding a preset frequency threshold, which is not limited in this embodiment.
Behaviors triggered by first users when browsing video data are counted. These behaviors are generally operations expressing a user's positive sentiment and differ for different forms of video data, such as liking, commenting, sharing, following the user who made or uploaded the video data, playing the video data to completion, and sending virtual gifts.
And establishing a user behavior sequence by taking the first user and the video data as nodes and taking the behaviors as edges.
Further, if the edges of the user behavior sequence carry weights, a sub-weight may be configured for each behavior triggered by the first user when browsing the video data, for example, a sub-weight of 2 for liking, 2 for sharing, 1 for commenting, and so on; all the sub-weights are accumulated to obtain the weight of the edge between the first user and the video data.
If the edge of the user behavior sequence has no weight, the edge between the first user and the video data can be directly established under the condition that the first user triggers the behavior when browsing the video data.
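The weighted case above can be sketched in a few lines. The behavior names and the sub-weight values 2/2/1 simply mirror the example given above (liking = 2, sharing = 2, commenting = 1) and are illustrative, not prescribed by the patent.

```python
# Illustrative sub-weights per behavior type (values from the example above).
SUB_WEIGHTS = {"like": 2, "share": 2, "comment": 1}

def edge_weight(behaviors):
    """Accumulate the sub-weights of every behavior a first user
    triggered on one piece of video data into the edge weight.
    Unknown behavior types contribute nothing."""
    return sum(SUB_WEIGHTS.get(b, 0) for b in behaviors)
```

For a first user who liked, shared, and commented on the same video, the edge weight is 2 + 2 + 1 = 5; the unweighted case reduces to creating the edge whenever the behavior list is non-empty.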
Of course, the above-mentioned manner of establishing the user behavior sequence is only an example, and when the embodiment of the present invention is implemented, other manners of establishing the user behavior sequence may be set according to an actual situation, for example, statistics is performed on all behaviors triggered by the first user when browsing the video data, and when the frequency and/or the weight of the behavior exceeds a preset threshold, the user behavior sequence is established by taking the first user and the video data as nodes and the behavior as edges, and the like. In addition, besides the above-mentioned ways of establishing user behavior sequences, those skilled in the art may also adopt other ways of establishing user behavior sequences according to actual needs, and the embodiment of the present invention is not limited to this.
And step 302, establishing a video co-occurrence sequence by taking the video data as nodes and taking the content relation among the video data as edges.
In this embodiment, a video co-occurrence sequence is generated by traversing the correlation between the video data and the video data, in the video co-occurrence sequence, the video data is used as a node, and the content relationship between the video data and the video data is used as an edge, so that in the video co-occurrence sequence, the node adjacent to the video data is the video data.
For example, if the content of the video data Vid _1 is similar to that of the video data Vid _2, Vid _4, Vid _5, and Vid _6, the content of the video data Vid _7 is similar to that of the video data Vid _3, Vid _4, and Vid _6, and the content of the video data Vid _5 is similar to that of the video data Vid _3 and Vid _4, a video co-occurrence sequence as shown in fig. 4B may be established.
In one manner of establishing a video co-occurrence sequence, a preset active rule may likewise be used as a screening condition to search for first users in an active state.
The video data browsed by the first user are sequenced according to a time sequence, and a browsing sequence is established, that is, one or more video data are arranged in the browsing sequence according to the time sequence, and the video data can be represented by information such as a video identifier (such as a video ID and the like), a video title, a video description and the like.
The browsing sequence is encoded using word2vec or a similar method to obtain a vector representing the content of the video data, called a content vector.
The video data are traversed in sequence; for each piece of video data, algorithms such as K-Nearest Neighbors (KNN) with a distance measure such as Euclidean Distance may be invoked to retrieve, using the content vectors, other video data similar to the current video data, where the similar relationship may refer to the K (K is a positive integer) other pieces of video data with the highest similarity to the current video data.
And establishing a video co-occurrence sequence by taking the video data as nodes and taking the similar relation as an edge.
Further, if the edge of the video co-occurrence sequence has a weight, the weight of the edge between the video data and the video data may be set as the similarity (e.g., distance) between the video data and the video data.
If the edge of the video co-occurrence sequence has no weight, the edge between the video data and the video data can be directly established under the condition that the video data and the video data have similar relation.
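The KNN retrieval over content vectors can be sketched as a brute-force search; the choice of cosine similarity and the toy two-dimensional vectors are assumptions for illustration (the text also allows Euclidean distance), and the video IDs are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two content vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_similar(target, vectors, k):
    """Return the k video IDs whose content vectors are most similar to
    the target video's vector (brute-force KNN over all other videos)."""
    sims = sorted(
        ((cosine(vectors[target], vec), vid)
         for vid, vec in vectors.items() if vid != target),
        reverse=True,
    )
    return [vid for _, vid in sims[:k]]

vectors = {
    "Vid_1": [1.0, 0.0],
    "Vid_2": [0.9, 0.1],
    "Vid_3": [0.0, 1.0],
}
```

In the weighted variant described above, the similarity score returned by `cosine` (or a distance) would itself become the weight of the edge between the two pieces of video data.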
Of course, the above-mentioned manner of establishing the video co-occurrence sequence is only an example, and when implementing the embodiment of the present invention, other manners of establishing the video co-occurrence sequence may be set according to actual situations, for example, similar video data is calculated for a text (such as a tag, a subtitle, a comment, and the like) of the video data, the video co-occurrence sequence is established with the video data as a node and with the similar relationship as an edge, and the like, which is not limited in this embodiment of the present invention. In addition, besides the above-mentioned way of establishing the video co-occurrence sequence, a person skilled in the art may also adopt other ways of establishing the video co-occurrence sequence according to actual needs, and the embodiment of the present invention is not limited thereto.
Step 303, establishing a user social sequence with the first users as nodes and the social relationships between the first users as edges.
In this embodiment, a user social sequence is generated by traversing the correlations between first users. In the user social sequence, the first users are the nodes and the social relationships between them are the edges, so that the nodes adjacent to any first user are other first users.
For example, if the first user Uid _1 has a social relationship with the first users Uid _2, Uid _4, Uid _5, the first user Uid _4 has a social relationship with the first users Uid _2, Uid _3, Uid _5, and the first user Uid _2 has a social relationship with the first user Uid _3, a user social sequence as shown in fig. 4C may be established.
Similar to the manner of establishing the video co-occurrence sequence, a preset activity rule may be used as a screening condition to search for first users in an active state.
The first users are traversed in turn, and for each first user, other first users having a social relationship with the current first user are searched for, where the social relationship includes a friend relationship in the address book, a follow relationship, a subscription relationship, a same-group membership relationship, a friend relationship in a third-party application, and the like.
A user social sequence is established with the first users as nodes and the social relationships as edges.
Further, if the edges of the user social sequence carry weights, a sub-weight may be configured for each type of social relationship between two first users, for example, a sub-weight of 2 for a friend relationship in the address book, a sub-weight of 1 for a follow relationship, a sub-weight of 0.5 for a same-group membership relationship, and the like; all the sub-weights are accumulated to obtain the weight of the edge between the two first users.
If the edges of the user social sequence carry no weights, an edge between two first users may be established directly whenever a social relationship between them is determined.
Of course, the above-mentioned manner of establishing the user social sequence is only an example; when implementing the embodiment of the present invention, other manners of establishing the user social sequence may be set according to the actual situation. For example, the behaviors of the first users are encoded using word2vec or a similar tool to obtain behavior vectors; for each first user, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to search, using the behavior vectors, for other first users similar to the current first user; and the user social sequence is established with the first users as nodes and the similar relations as edges. In addition, besides the above-mentioned manners of establishing the user social sequence, those skilled in the art may also adopt other manners of establishing the user social sequence according to actual needs, and the embodiment of the present invention is not limited thereto.
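The sub-weight accumulation described above can be sketched as follows. The relation types and their sub-weights (2 for an address-book friend, 1 for a follow, 0.5 for same-group membership) follow the example in the text; the function name and user identifiers are hypothetical.

```python
# hypothetical sub-weights per relation type, following the example above
RELATION_WEIGHTS = {"contact_friend": 2.0, "follow": 1.0, "same_group": 0.5}

def build_social_edges(relations):
    """relations: (user_a, user_b, relation_type) triples. All sub-weights
    between the same pair of first users accumulate into one undirected
    weighted edge."""
    edges = {}
    for a, b, rel in relations:
        key = tuple(sorted((a, b)))  # undirected: normalize the pair
        edges[key] = edges.get(key, 0.0) + RELATION_WEIGHTS.get(rel, 0.0)
    return edges

edges = build_social_edges([
    ("Uid_1", "Uid_2", "contact_friend"),
    ("Uid_2", "Uid_1", "same_group"),  # same pair: sub-weights accumulate
    ("Uid_1", "Uid_4", "follow"),
])
```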
And step 304, combining the user behavior sequence, the video co-occurrence sequence and the user social sequence into a target sequence.
In this embodiment, the user behavior sequence, the video co-occurrence sequence and the user social sequence have some nodes in common; therefore, while keeping all edges, the common nodes are merged, so that the three sequences are combined into a mixed graph, that is, the target sequence.
In a specific implementation, on the one hand, the same video data are searched for in the user behavior sequence and the video co-occurrence sequence as target videos, i.e., the target videos are the nodes at which the user behavior sequence and the video co-occurrence sequence are merged; on the other hand, the same first users are searched for in the user behavior sequence and the user social sequence as target users, i.e., the target users are the nodes at which the user behavior sequence and the user social sequence are merged.
The user behavior sequence and the video co-occurrence sequence are merged at the target videos, and the user behavior sequence and the user social sequence are merged at the target users, to obtain the target sequence.
For example, for the user behavior sequence shown in fig. 4A and the video co-occurrence sequence shown in fig. 4B, the same nodes are video data Vid _1, video data Vid _2, and video data Vid _3, and for the user behavior sequence shown in fig. 4A and the user social sequence shown in fig. 4C, the same nodes are first user Uid _1 and first user Uid _2, and then the user behavior sequence shown in fig. 4A and the video co-occurrence sequence shown in fig. 4B are merged by using the video data Vid _1, the video data Vid _2, and the video data Vid _3 as merged nodes, and the user behavior sequence shown in fig. 4A and the user social sequence shown in fig. 4C are merged by using the first user Uid _1 and the first user Uid _2 as merged nodes, so as to obtain the target sequence shown in fig. 2.
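The node-merging step can be sketched with adjacency maps; the three toy graph fragments below are hypothetical stand-ins for the sequences of figs. 4A-4C, with Vid_1 and Uid_1 playing the roles of target video and target user.

```python
def merge_graphs(*graphs):
    """Merge adjacency maps {node: {neighbor: weight}}. Nodes sharing an
    identifier (the target videos / target users) merge automatically
    while every edge is kept."""
    merged = {}
    for graph in graphs:
        for node, neighbors in graph.items():
            merged.setdefault(node, {}).update(neighbors)
    return merged

# toy fragments of the three sequences (hypothetical)
behavior = {"Uid_1": {"Vid_1": 1.0}, "Vid_1": {"Uid_1": 1.0}}
cooccur = {"Vid_1": {"Vid_2": 0.9}, "Vid_2": {"Vid_1": 0.9}}
social = {"Uid_1": {"Uid_2": 2.0}, "Uid_2": {"Uid_1": 2.0}}
target = merge_graphs(behavior, cooccur, social)
```

After the merge, the target-video node Vid_1 carries both its behavior edge to Uid_1 and its co-occurrence edge to Vid_2, exactly the mixed-graph property the text describes.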
Step 305, performing a random walk in the target sequence, and removing the video data or the first users from the output sequence, to obtain a user sequence or a video sequence.
In this embodiment, Random Walk may be implemented over the target sequence on a framework such as Euler or Spark + Redis, so as to output new sequences.
Random walk is a statistical model consisting of a series of steps, each of which is random; it can be used to represent irregular patterns of variation.
The random walk process S_t follows geometric Brownian motion, satisfying the stochastic differential equation:
dS_t = u·S_t·dt + σ·S_t·dW_t
that is,
dS_t / S_t = u·dt + σ·dW_t
Given an initial state S_0, the Itô integral yields the solution:
S_t = S_0·exp((u − σ²/2)·t + σ·W_t)
further, the hop probability for random walks, i.e. the arriving node viThen, go through v in the next stepiAdjacent point v ofjThe probability of (c).
If the target sequence is a directed weighted graph, then the slave node viJump to node vjThe probability of (c) is defined as follows:
Figure BDA0002608765790000121
wherein N is+(vi) Is node viSet of all outgoing edges, MijIs node viTo node vjThe weight of the edge of (2).
If the target sequence is an undirected weightless graph, then the slave node viJump to node vjThe probability of being that the target sequence is a special case of a directed weighted graph, i.e. the weight MijIs a constant number 1, and N+(vi) Is node viAll edge sets, not all outgoing edge sets.
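A minimal sketch of the weighted jump rule and the walk itself, assuming the graph is stored as an adjacency map; node names, weights, and the seeded generator are all illustrative.

```python
import random

def jump_probabilities(graph, v_i):
    """P(v_j | v_i) = M_ij / sum of weights of v_i's outgoing edges,
    for a weighted digraph stored as {node: {neighbor: weight}}."""
    out = graph[v_i]
    total = sum(out.values())
    return {v_j: w / total for v_j, w in out.items()}

def random_walk(graph, start, steps, seed=0):
    """Walk `steps` hops from `start`, sampling each hop by edge weight."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        out = graph.get(path[-1])
        if not out:  # dangling node: stop the walk early
            break
        nodes, weights = zip(*out.items())
        path.append(rng.choices(nodes, weights=weights)[0])
    return path

# hypothetical weighted target sequence
graph = {"A": {"B": 3.0, "C": 1.0}, "B": {"A": 1.0}, "C": {"A": 1.0}}
probs = jump_probabilities(graph, "A")  # B three times as likely as C
walk = random_walk(graph, "A", steps=4)
```

The undirected unweighted case falls out by giving every edge weight 1 and listing each edge in both directions.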
Both video data and first users may appear in the new sequences; therefore, in this embodiment, a configuration parameter indicating whether to retain the first users or the video data may be set in advance.
If the first users are retained, when the random walk over the target sequence is performed according to the configuration parameter, the video data may be removed from the output sequence, and a new edge is generated between the node before and the node after each removed item of video data, thereby obtaining a user sequence.
For example, after the first users are set to be retained, a random walk over the target sequence shown in fig. 2 outputs the sequence Uid_5-Uid_1-Vid_2-Uid_2-Vid_3-Uid_1-Uid_4; the video data Vid_2 and Vid_3 are then deleted from the output sequence according to this setting, yielding the user sequence Uid_5-Uid_1-Uid_2-Uid_1-Uid_4.
If the video data are retained, when the random walk over the target sequence is performed according to the configuration parameter, the first users may be removed from the output sequence, and a new edge is generated between the node before and the node after each removed first user, thereby obtaining a video sequence.
For example, after the video data are set to be retained, a random walk over the target sequence shown in fig. 2 outputs the sequence Vid_7-Vid_4-Vid_5-Vid_3-Uid_1-Uid_2-Vid_1; the first users Uid_1 and Uid_2 are then deleted from the output sequence according to this setting, yielding the video sequence Vid_7-Vid_4-Vid_5-Vid_3-Vid_1.
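The retention setting can be sketched as a simple filter over the walk output. The Uid_/Vid_ prefixes are assumed here purely so the node type can be told apart; a real system would look the type up in metadata.

```python
def filter_walk(path, keep="user"):
    """Drop the unwanted node type from a walk output; the neighbors of
    each removed node become implicitly adjacent (a new edge). Node ids
    are assumed, purely for this sketch, to carry Uid_/Vid_ prefixes."""
    prefix = "Uid_" if keep == "user" else "Vid_"
    return [node for node in path if node.startswith(prefix)]

user_sequence = filter_walk(
    ["Uid_5", "Uid_1", "Vid_2", "Uid_2", "Vid_3", "Uid_1", "Uid_4"],
    keep="user")
video_sequence = filter_walk(
    ["Vid_7", "Vid_4", "Vid_5", "Vid_3", "Uid_1", "Uid_2", "Vid_1"],
    keep="video")
```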
Step 306, receiving a request from a second user.
Step 307, in response to the request, encoding the user sequence or the video sequence to obtain a user vector of the first user or a video vector of the video data.
In this embodiment, a word vector tool such as word2vec may be used to encode the user sequence, generating a user vector embedding for each first user, or to encode the video sequence, generating a video vector embedding for each item of video data.
For the trained user vectors, a user index (a key-value model) may be established, in which the user identifier uid of a first user serves as the key and the user vector embedding of that first user serves as the value, generating key-value pairs of the form key: uid, value: embedding.
For the trained video vectors, a video index (a key-value model) may likewise be established, in which the video identifier vid of an item of video data serves as the key and the video vector embedding of that item serves as the value, generating key-value pairs of the form key: vid, value: embedding.
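A sketch of the key-value index and the K-nearest-neighbor lookup used against it throughout the recall modes below. The embeddings are toy values standing in for trained word2vec vectors (with gensim, training would look roughly like `gensim.models.Word2Vec(sentences=walks, vector_size=32)`, an assumption about tooling rather than part of this document).

```python
from math import dist  # Euclidean distance (Python 3.8+)

def build_kv_index(pairs):
    """Mirror of the key-value index: key is uid/vid, value is the embedding."""
    return dict(pairs)

def knn(index, query_vec, k, exclude=()):
    """Return the k keys whose vectors are nearest to query_vec
    by Euclidean distance, skipping any key in `exclude`."""
    cands = [(key, dist(vec, query_vec))
             for key, vec in index.items() if key not in exclude]
    cands.sort(key=lambda t: t[1])
    return [key for key, _ in cands[:k]]

# toy embeddings standing in for trained word2vec vectors
user_index = build_kv_index([
    ("Uid_1", (0.1, 0.9)),
    ("Uid_2", (0.2, 0.8)),
    ("Uid_3", (0.9, 0.1)),
])
# similar-user lookup: nearest neighbor of Uid_1, excluding itself
nearest = knn(user_index, user_index["Uid_1"], k=1, exclude={"Uid_1"})
```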
And step 308, recalling the video data adapted to the second user by using the user vector and/or the video vector with the user similarity and/or the video similarity as a recalling target.
In this embodiment, taking the second user as a starting point, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to retrieve similar first users using the user vectors (i.e., user similarity) and/or to retrieve similar video data using the video vectors (i.e., video similarity). The similar first users and/or similar video data serve as the targets recalled in the user sequence and/or the video sequence, and these targets are finally used, directly or through other means, to locate the video data adapted to the second user.
It should be noted that the video data adapted to the second user may be video data in the target sequence or the video sequence, or video data outside the target sequence or the video sequence, which is not limited in this embodiment.
In one example of the embodiment of the present invention, the video data adapted to the second user may be recalled by:
1. Item recall
In this manner, based on the user identifier Uid of the second user, the video data (i.e., Item entries) browsed by the second user are searched for in the history information recorded for that user identifier, as seed data.
Further, since the amount of video data browsed by the second user is large, that video data may be divided into a plurality of queues, using the behaviors triggered by the second user when browsing as the classification dimension, such as a liked queue, a commented queue, a shared queue, and the like. A specified amount (e.g., 200) of video data is extracted across all the queues, a score representing quality is calculated for each extracted item, and the several (e.g., 50) items with the highest scores are taken as seed data, under the condition that every queue contributes at least one item of seed data.
In the video index, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to search for video data similar in content to the seed data as candidate video data, where the video vector of the seed data is similar to the video vectors of the candidate video data, the similar relation is represented by a similarity, and if the number of candidates is limited to K, the K items of video data with the highest similarity are selected as candidate video data.
At this time, a part of the candidate video data may be selected as the video data adapted to the second user.
In an alternative approach, part of the candidate video data is selected based on quality as the video data adapted to the second user.
Specifically, the weights configured for the candidate video data may be queried in a preset database. A weight may be calculated based on the behavior cost of users, such that the weight is positively correlated with the behavior cost, and is used to characterize the quality of the candidate video data.
The similarity between an item of candidate video data and the seed data is queried as a first similarity.
A first product between the weight and the first similarity is calculated as a first fitness; if an item of candidate video data has a plurality of first fitnesses, they are summed to obtain the final first fitness.
The n (n is a positive integer) items of candidate video data with the highest first fitness are selected as the video data adapted to the second user.
Of course, besides the quality, the candidate video data may be selected in other manners, for example, n candidate video data are randomly selected, and the embodiment is not limited thereto.
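The first-fitness ranking (weight × similarity, summed per candidate, top-n kept) can be sketched as follows; all identifiers and numbers are illustrative.

```python
def rank_candidates(scored, n):
    """scored: (vid, weight, similarity) triples, one per seed/candidate
    pair. The weight*similarity products of a candidate reached from
    several seeds are summed into one fitness; the top-n win."""
    fitness = {}
    for vid, weight, sim in scored:
        fitness[vid] = fitness.get(vid, 0.0) + weight * sim
    return sorted(fitness, key=fitness.get, reverse=True)[:n]

recalled = rank_candidates([
    ("Vid_9", 2.0, 0.8),  # candidate reached from seed A
    ("Vid_9", 1.0, 0.5),  # same candidate from seed B: fitness accumulates
    ("Vid_7", 3.0, 0.4),
    ("Vid_8", 1.0, 0.9),
], n=2)
```

The same shape serves the second-fitness ranking of recall mode 4, with (seed, weight, user-similarity) triples in place of the candidate triples.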
2. Similar user recall
In this manner, since the second user has browsed video data, his or her user vector is recorded in the user index and may be queried based on the user identifier Uid of the second user.
Therefore, in the user index, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to search for first users whose behavior is similar to that of the second user as candidate users, where the user vector of the second user is similar to the user vectors of the candidate users, the similar relation is represented by a similarity, and if the number of candidates is limited to K, the K first users with the highest similarity are selected as candidate users.
If a candidate user acts as a photographer (also called an UP master, anchor, etc.) and has uploaded video data, part of the video data uploaded by the candidate user may be selected as the video data adapted to the second user.
Further, in order to ensure the quality of the video data, if the quality of video data uploaded by a candidate user is low, for example its browsing amount is lower than a preset threshold (e.g., 10), that video data is ignored.
Further, in order to ensure the diversity of the video data, it can be ensured that every candidate user has some video data selected as the video data adapted to the second user.
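The quality threshold and per-candidate diversity guarantee described above might look like the following sketch; the threshold of 10 follows the example in the text, while the function name, identifiers, and counts are hypothetical.

```python
def pick_from_candidates(uploads, browse_counts, per_user=1, min_views=10):
    """uploads: {candidate_uid: [vid, ...]} of videos each candidate user
    uploaded. Videos browsed fewer than min_views times are ignored
    (quality), but up to per_user videos are kept from every candidate
    user that has a qualifying upload (diversity)."""
    picked = []
    for uid, vids in uploads.items():
        qualifying = [v for v in vids if browse_counts.get(v, 0) >= min_views]
        picked.extend(qualifying[:per_user])
    return picked

picked = pick_from_candidates(
    {"Uid_2": ["Vid_1", "Vid_2"], "Uid_3": ["Vid_3"]},  # hypothetical uploads
    {"Vid_1": 50, "Vid_2": 5, "Vid_3": 12},             # browse counts
)
```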
3. Photographer collaborative recall
In this manner, based on the user identifier Uid of the second user, the video data browsed by the second user are searched for in the history information recorded for that user identifier, as seed data.
The first users who uploaded the seed data are searched for, as photographer users.
In the user index, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to search for first users whose behavior is similar to that of a photographer user as candidate users, where the user vector of the photographer user is similar to the user vectors of the candidate users, the similar relation is represented by a similarity, and if the number of candidates is limited to K, the K first users with the highest similarity are selected as candidate users.
Part of the video data uploaded by the candidate users is selected as the video data adapted to the second user.
4. Similar user behavior recall
In this manner, since the second user has browsed video data, his or her user vector is recorded in the user index and may be queried based on the user identifier Uid of the second user.
Therefore, in the user index, an algorithm such as K-nearest neighbor or Euclidean distance is invoked to search for first users whose behavior is similar to that of the second user as candidate users, where the user vector of the second user is similar to the user vectors of the candidate users, the similar relation is represented by a similarity, and if the number of candidates is limited to K, the K first users with the highest similarity are selected as candidate users.
The video data browsed by the candidate users are searched for in the history information recorded for the candidate users, as seed data.
Further, since the amount of video data browsed by the candidate users is large, that video data may be divided into a plurality of queues, using the behaviors triggered by the candidate users when browsing as the classification dimension, such as a liked queue, a commented queue, a shared queue, and the like. A specified amount (e.g., 200) of video data is extracted across all the queues, a score representing quality is calculated for each extracted item, and the several (e.g., 50) items with the highest scores are taken as seed data, under the condition that every queue contributes at least one item of seed data.
At this time, a part of the seed data may be selected as the video data adapted to the second user.
In an alternative approach, part of the seed data is selected based on quality as the video data adapted to the second user.
Specifically, the weights configured for the seed data may be queried in a preset database. A weight may be calculated based on the behavior cost of users, such that the weight is positively correlated with the behavior cost, and is used to characterize the quality of the seed data.
The similarity between a candidate user and the second user is queried as a second similarity.
A second product between the weight and the second similarity is calculated as a second fitness; if an item of seed data has a plurality of second fitnesses, they are summed to obtain the final second fitness.
The m (m is a positive integer) items of seed data with the highest second fitness are selected as the video data adapted to the second user.
Of course, the above-mentioned manner of recalling video data is only an example, and when implementing the embodiment of the present invention, other manners of recalling video data may be set according to actual situations, which is not limited in this embodiment of the present invention. In addition, besides the above-mentioned manner of recalling video data, those skilled in the art may also adopt other manners of recalling video data according to actual needs, and the embodiment of the present invention is not limited thereto.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
EXAMPLE III
Fig. 5 is a block diagram of a video data recall apparatus according to a third embodiment of the present invention, which may specifically include the following modules:
a target sequence establishing module 501, configured to establish a target sequence by using a first user and video data as nodes, and using a correlation between the first user and the video data, a correlation between the first users, and a correlation between the video data as edges;
a node selection module 502, configured to select part of the nodes along the edges in the target sequence, so as to generate a user sequence and/or a video sequence, where the user sequence includes a plurality of the first users with correlation, and the video sequence includes a plurality of the video data with correlation;
a request receiving module 503, configured to receive a request from a second user;
a video data recall module 504, configured to recall, in response to the request, video data adapted to the second user with the user sequence and/or the video sequence as a recalled path.
In an embodiment of the present invention, the target sequence establishing module 501 includes:
the user behavior sequence establishing submodule is used for establishing a user behavior sequence by taking a first user and video data as nodes and taking the behavior of the first user for browsing the video data as an edge;
the video co-occurrence sequence establishing submodule is used for establishing a video co-occurrence sequence by taking video data as nodes and taking the relation among the video data in content as an edge;
the user social sequence establishing sub-module is used for establishing a user social sequence by taking the first users as nodes and taking the social relations among the first users as edges;
and the sequence merging submodule is used for merging the user behavior sequence, the video co-occurrence sequence and the user social sequence into a target sequence.
In an embodiment of the present invention, the user behavior sequence creating sub-module includes:
the first active user searching unit is used for searching a first user in an active state;
the behavior counting unit is used for counting the behaviors triggered by the first user when browsing the video data;
and the user behavior sequence generating unit is used for establishing a user behavior sequence by taking the first user and the video data as nodes and taking the behaviors as edges.
In one embodiment of the present invention, the video co-occurrence sequence establishing sub-module includes:
the second active user searching unit is used for searching the first user in an active state;
a browsing sequence establishing unit, configured to establish a browsing sequence for the video data browsed by the first user;
a content vector encoding unit, configured to encode the browsing sequence to obtain a vector representing the content of the video data as a content vector;
a similar video retrieval unit, configured to retrieve, for each of the video data, other video data similar to the current video data using the content vector;
and the video co-occurrence sequence generating unit is used for establishing a video co-occurrence sequence by taking the video data as a node and the similar relation as an edge.
In one embodiment of the present invention, the user social sequence establishing sub-module includes:
the third active user searching unit is used for searching the first user in an active state;
the social user searching unit is used for searching other first users having social relations with the current first user aiming at each first user;
and the user social sequence generating unit is used for establishing a user social sequence by taking the first user as a node and the social relationship as an edge.
In one embodiment of the present invention, the sequence merging submodule includes:
the target video searching unit is used for searching the same video data in the user behavior sequence and the video co-occurrence sequence to be used as a target video;
the target user searching unit is used for searching the same first user in the user behavior sequence and the user social sequence to serve as a target user;
and the node merging unit is used for respectively merging the user behavior sequence and the video co-occurrence sequence at the target video and merging the user behavior sequence and the user social sequence at the target user to obtain a target sequence.
In one embodiment of the present invention, the node selection module 502 comprises:
and the random walk sub-module is used for performing random walk in the target sequence so as to remove the video data or the first user in the output sequence and obtain a user sequence or a video sequence.
In one embodiment of the present invention, the video data recall module 504 comprises:
a vector encoding sub-module, configured to encode the user sequence or the video sequence in response to the request, to obtain a user vector of the first user or a video vector of the video data;
and the sequence recalling submodule is used for recalling the video data adapted to the second user by using the user vector and/or the video vector as a recalling target.
In one embodiment of the present invention, the sequence recall submodule includes:
the first seed data searching unit is used for searching the video data browsed by the second user as seed data;
a similar content searching unit, configured to search for video data similar to the content of the seed data as candidate video data, where a video vector of the seed data is similar to a video vector of the candidate video data;
a first adapted video selecting unit for selecting a part of the candidate video data as video data adapted to the second user.
In one embodiment of the present invention, the first adaptive video selecting unit includes:
a first weight query subunit, configured to query a weight configured for the candidate video data;
a first similarity query subunit, configured to query a similarity between the candidate video data and the seed data as a first similarity;
a first fitness calculating subunit, configured to calculate a first product between the weight and the first similarity as a first fitness;
and the first adaptation degree selection subunit is used for selecting the n candidate video data with the highest first adaptation degree as the video data adapted to the second user.
In one embodiment of the present invention, the sequence recall submodule includes:
a first similar behavior searching unit, configured to search, as a candidate user, a first user whose behavior is similar to that of the second user, where a user vector of the second user is similar to a user vector of the candidate user;
and the second adaptive video selection unit is used for selecting part of the video data uploaded by the candidate user as the video data adaptive to the second user.
In one embodiment of the present invention, the sequence recall submodule includes:
the second seed data searching unit is used for searching the video data browsed by the second user as seed data;
the photographer user searching unit is used for searching the first users uploading the seed data as photographer users;
a second similar behavior searching unit, configured to search, as a candidate user, a first user whose behavior is similar to that of the photographer user, where a user vector of the photographer user is similar to a user vector of the candidate user;
and the third adaptive video selection unit is used for selecting part of the video data uploaded by the candidate user as the video data adapted to the second user.
In one embodiment of the present invention, the sequence recall submodule includes:
a third similar behavior searching unit, configured to search, as a candidate user, a first user whose behavior is similar to that of the second user, where a user vector of the second user is similar to a user vector of the candidate user;
a third sub data search unit, configured to search for video data browsed by the candidate user as seed data;
a fourth adapted video selecting unit for selecting a part of the seed data as video data adapted to the second user.
In one embodiment of the present invention, the fourth adaptive video selecting unit includes:
the second weight inquiry subunit is used for inquiring the weight configured for the seed data;
a second similarity querying subunit, configured to query a similarity between the candidate user and the second user as a second similarity;
a second fitness calculating subunit, configured to calculate a second product between the weight and the second similarity as a second fitness;
and the second adaptation degree selecting subunit is used for selecting the m seed data with the highest second adaptation degree as the video data adapted to the second user.
The video data recall device provided by the embodiment of the invention can execute the video data recall method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary computer device 12 suitable for use in implementing embodiments of the present invention. The computer device 12 shown in FIG. 6 is only an example and should not bring any limitations to the functionality or scope of use of embodiments of the present invention.
As shown in FIG. 6, computer device 12 is in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
By running programs stored in the system memory 28, the processing unit 16 executes various functional applications and performs data processing, for example implementing the video data recall method provided by the embodiments of the present invention.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above video data recall method and achieves the same technical effects; to avoid repetition, the details are not repeated here.
A computer readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It is to be noted that the foregoing describes only preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may encompass other equivalent embodiments without departing from its spirit; the scope of the present invention is determined by the appended claims.

Claims (17)

1. A method for recalling video data, comprising:
establishing a target sequence by taking a first user and video data as nodes, and taking the correlation between the first user and the video data, the correlation between the first users, and the correlation between the video data as edges;
selecting a portion of said nodes along said edges in said target sequence, thereby generating a user sequence and/or a video sequence, said user sequence comprising a plurality of said first users having a correlation, said video sequence comprising a plurality of said video data having a correlation;
receiving a request from a second user;
recalling video data adapted to the second user in response to the request, with the user sequence and/or the video sequence as a path for recall.
2. The method according to claim 1, wherein the establishing a target sequence with the first user and the video data as nodes, and the correlation between the first user and the video data, the correlation between the first users, and the correlation between the video data as edges comprises:
establishing a user behavior sequence by taking a first user and video data as nodes and taking the behavior of the first user for browsing the video data as an edge;
establishing a video co-occurrence sequence by taking video data as nodes and taking the relation among the video data in content as an edge;
establishing a user social sequence by taking first users as nodes and taking social relations among the first users as edges;
merging the user behavior sequence, the video co-occurrence sequence, and the user social sequence into a target sequence.
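As a rough illustration of the merging step in claim 2, the three sequences can be treated as edge lists over a shared node space and unioned into one adjacency map. The sketch below is only illustrative; the node names (`u1`, `v1`, ...) and the undirected-edge assumption are hypothetical, not taken from the patent:

```python
from collections import defaultdict

def build_target_graph(behavior_edges, cooccur_edges, social_edges):
    """Union the behavior, co-occurrence and social edge lists into one
    adjacency map; nodes appearing in several lists are merged automatically."""
    graph = defaultdict(set)
    for edges in (behavior_edges, cooccur_edges, social_edges):
        for a, b in edges:
            graph[a].add(b)
            graph[b].add(a)  # treat every relation as an undirected edge
    return graph

# Hypothetical nodes: users u1/u2, videos v1/v2
behavior = [("u1", "v1"), ("u2", "v2")]  # user browsed video
cooccur = [("v1", "v2")]                 # videos related in content
social = [("u1", "u2")]                  # users have a social relation

target = build_target_graph(behavior, cooccur, social)
# after merging, v2 is reachable from u1 either via v1 or via u2
```

Because the user behavior edges already connect both node types, the shared target videos and target users act as the junction points at which the three sub-graphs fuse into a single connected structure.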
3. The method of claim 2, wherein the establishing a user behavior sequence with the first user and the video data as nodes, and the behavior of the first user browsing the video data as edges comprises:
searching a first user in an active state;
counting behaviors triggered by the first user when browsing video data;
and establishing a user behavior sequence by taking the first user and the video data as nodes and taking the behaviors as edges.
4. The method according to claim 2, wherein the establishing a video co-occurrence sequence with the video data as nodes and the content relation between the video data as edges comprises:
searching a first user in an active state;
establishing a browsing sequence for the video data browsed by the first user;
coding the browsing sequence to obtain a vector representing the content of the video data as a content vector;
for each of the video data, retrieving other of the video data that is similar to the current video data using the content vector;
and establishing a video co-occurrence sequence by taking the video data as a node and the similar relation as an edge.
5. The method of claim 2, wherein establishing a user social sequence with the first users as nodes and the social relationships between the first users as edges comprises:
searching a first user in an active state;
for each first user, searching other first users having social relations with the current first user;
and establishing a user social sequence by taking the first user as a node and the social relationship as an edge.
6. The method of claim 2, wherein merging the sequence of user behaviors, the sequence of video co-occurrences, and the sequence of user socializes into a target sequence comprises:
searching the same video data in the user behavior sequence and the video co-occurrence sequence to be used as a target video;
searching the same first user in the user behavior sequence and the user social sequence to serve as a target user;
and respectively merging the user behavior sequence and the video co-occurrence sequence at the target video, and merging the user behavior sequence and the user social sequence at the target user to obtain a target sequence.
7. The method of claim 1, wherein said selecting a portion of said nodes along said edge in said target sequence to generate a user sequence and/or a video sequence comprises:
performing a random walk in the target sequence, and removing the video data or the first users from the output sequence to obtain a user sequence or a video sequence.
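Claim 7's walk-then-filter step can be sketched as follows; the toy graph, the seed node, the walk length, and the `u`/`v` naming convention used to tell users from videos are all illustrative assumptions:

```python
import random

def random_walk(graph, start, length, rng=random):
    """Take `length` random steps along the edges of the merged graph."""
    walk = [start]
    for _ in range(length):
        neighbors = sorted(graph.get(walk[-1], ()))
        if not neighbors:
            break  # dead end: stop the walk early
        walk.append(rng.choice(neighbors))
    return walk

def split_walk(walk, is_user):
    """Removing videos from the walk yields a user sequence;
    removing users yields a video sequence."""
    user_seq = [n for n in walk if is_user(n)]
    video_seq = [n for n in walk if not is_user(n)]
    return user_seq, video_seq

# Hypothetical merged graph with users u* and videos v*
graph = {
    "u1": {"v1", "u2"},
    "u2": {"u1", "v2"},
    "v1": {"u1", "v2"},
    "v2": {"v1", "u2"},
}
walk = random_walk(graph, "u1", length=5, rng=random.Random(42))
user_seq, video_seq = split_walk(walk, is_user=lambda n: n.startswith("u"))
```

Because the walk crosses behavior, co-occurrence, and social edges alike, the filtered sequences capture correlations between users (or between videos) that never co-occur directly in any single input sequence.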
8. The method according to any one of claims 1-7, wherein the recalling video data adapted to the second user in response to the request with the user sequence and/or the video sequence as a recall path comprises:
in response to the request, encoding the user sequence or the video sequence, obtaining a user vector of the first user or a video vector of the video data;
and recalling the video data adapted to the second user by using the user vector and/or the video vector with the user similarity and/or the video similarity as a recall target.
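Claim 8 leaves the encoder unspecified; in practice such sequences are often fed to a skip-gram-style embedding model (as in DeepWalk), after which recall reduces to nearest-neighbour search over the vectors. A minimal cosine-similarity sketch, assuming the vectors have already been learned (the 2-d vectors below are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recall_similar(query_vec, vectors, k=2):
    """Return the k ids whose vectors are most similar to query_vec."""
    return sorted(vectors, key=lambda i: cosine(query_vec, vectors[i]),
                  reverse=True)[:k]

# Hypothetical 2-d video vectors (real ones would come from the encoder)
video_vecs = {"v1": [1.0, 0.0], "v2": [0.9, 0.1], "v3": [0.0, 1.0]}
nearest = recall_similar([1.0, 0.0], video_vecs, k=2)  # → ['v1', 'v2']
```

The same routine serves both recall targets in the claim: applied to user vectors it finds behaviorally similar users, and applied to video vectors it finds content-similar videos.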
9. The method according to claim 8, wherein the recalling the video data adapted to the second user by using the user vector and/or the video vector with user similarity and/or video similarity as a recall target comprises:
searching the video data browsed by the second user as seed data;
searching video data similar to the content of the seed data as candidate video data, wherein the video vector of the seed data is similar to the video vector of the candidate video data;
selecting a portion of the candidate video data as video data adapted to the second user.
10. The method according to claim 9, wherein the selecting a portion of the candidate video data as video data adapted to the second user comprises:
querying weights configured for the candidate video data;
inquiring the similarity between the candidate video data and the seed data as a first similarity;
calculating a first product between the weight and the first similarity as a first fitness;
and selecting the n candidate video data with the highest first adaptability as the video data adaptive to the second user.
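The scoring in claim 10 multiplies a configured weight by the candidate's similarity to the seed and keeps the n best candidates. A minimal sketch, with made-up weights and similarity values:

```python
def top_n_by_fitness(candidates, weights, similarities, n):
    """fitness = configured weight x similarity to the seed; keep the best n."""
    fitness = {c: weights[c] * similarities[c] for c in candidates}
    return sorted(candidates, key=fitness.get, reverse=True)[:n]

# Hypothetical configured weights and first similarities
candidates = ["v1", "v2", "v3"]
weights = {"v1": 1.0, "v2": 0.5, "v3": 0.8}
sims = {"v1": 0.6, "v2": 0.9, "v3": 0.7}
best = top_n_by_fitness(candidates, weights, sims, n=2)  # → ['v1', 'v3']
```

Note that the weight can override raw similarity: v2 is the most similar candidate, yet its low weight pushes it below v3 in the final ranking. The second fitness in claim 14 follows the same weight-times-similarity pattern, only with user similarity in place of video similarity.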
11. The method according to claim 8, wherein the recalling the video data adapted to the second user by using the user vector and/or the video vector with user similarity and/or video similarity as a recall target comprises:
searching a first user with similar behavior to the second user as a candidate user, wherein the user vector of the second user is similar to the user vector of the candidate user;
and selecting part of the video data uploaded by the candidate user as video data adapted to the second user.
12. The method according to claim 8, wherein the recalling the video data adapted to the second user by using the user vector and/or the video vector with user similarity and/or video similarity as a recall target comprises:
searching the video data browsed by the second user as seed data;
searching a first user uploading the seed data as a guest-shooting user;
searching a first user with behavior similar to that of the guest-shooting user as a candidate user, wherein the user vector of the guest-shooting user is similar to the user vector of the candidate user;
and selecting part of the video data uploaded by the candidate user as video data adapted to the second user.
13. The method according to claim 8, wherein the recalling the video data adapted to the second user by using the user vector and/or the video vector with user similarity and/or video similarity as a recall target comprises:
searching a first user with similar behavior to the second user as a candidate user, wherein the user vector of the second user is similar to the user vector of the candidate user;
searching the video data browsed by the candidate user as seed data;
selecting a portion of the seed data as video data adapted to the second user.
14. The method of claim 13, wherein said selecting a portion of said seed data as video data adapted for said second user comprises:
inquiring the weight configured for the seed data;
querying the similarity between the candidate user and the second user as a second similarity;
calculating a second product between the weight and the second similarity as a second fitness;
and selecting the m seed data with the highest second adaptability as the video data adapted to the second user.
15. An apparatus for recalling video data, comprising:
the target sequence establishing module is used for establishing a target sequence by taking a first user and video data as nodes, and taking the correlation between the first user and the video data, the correlation between the first users, and the correlation between the video data as edges;
a node selection module for selecting a portion of the nodes along the edge in the target sequence, thereby generating a user sequence and/or a video sequence, the user sequence including a plurality of the first users having a correlation, the video sequence including a plurality of the video data having a correlation;
a request receiving module, configured to receive a request from a second user;
and the video data recalling module is used for recalling the video data adapted to the second user by taking the user sequence and/or the video sequence as a recalling path in response to the request.
16. A computer device, characterized in that the computer device comprises:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of recalling video data according to any one of claims 1-14.
17. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method of recalling video data according to any one of claims 1 to 14.
CN202010747164.6A 2020-07-29 2020-07-29 Video data recall method and device, computer equipment and storage medium Pending CN111918104A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010747164.6A CN111918104A (en) 2020-07-29 2020-07-29 Video data recall method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111918104A (en) 2020-11-10

Family

ID=73287970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010747164.6A Pending CN111918104A (en) 2020-07-29 2020-07-29 Video data recall method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111918104A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948626A (en) * 2021-05-14 2021-06-11 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN113254707A (en) * 2021-06-10 2021-08-13 北京达佳互联信息技术有限公司 Model determination method and device and associated media resource determination method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120260280A1 (en) * 2011-04-06 2012-10-11 Aaron Harsh Method and system for detecting non-powered video playback devices
US20140098986A1 (en) * 2012-10-08 2014-04-10 The Procter & Gamble Company Systems and Methods for Performing Video Analysis
CN109710805A (en) * 2018-12-13 2019-05-03 百度在线网络技术(北京)有限公司 Video interactive method and device based on interest cluster
CN110008375A (en) * 2019-03-22 2019-07-12 广州新视展投资咨询有限公司 Video is recommended to recall method and apparatus
CN110072129A (en) * 2018-01-22 2019-07-30 上海鹰信智能技术有限公司 Based on vehicle-mounted interconnection push, save the method for playing record, a kind of playback method
CN111400546A (en) * 2020-03-18 2020-07-10 腾讯科技(深圳)有限公司 Video recall method and video recommendation method and device


Similar Documents

Publication Publication Date Title
CN110321422B (en) Method for training model on line, pushing method, device and equipment
US11593894B2 (en) Interest recommendation method, computer device, and storage medium
CN109086439B (en) Information recommendation method and device
WO2017181612A1 (en) Personalized video recommendation method and device
Qian et al. Social media based event summarization by user–text–image co-clustering
Shi et al. Learning-to-rank for real-time high-precision hashtag recommendation for streaming news
CN108776676B (en) Information recommendation method and device, computer readable medium and electronic device
CN111259192B (en) Audio recommendation method and device
CN110909182A (en) Multimedia resource searching method and device, computer equipment and storage medium
CN112052387B (en) Content recommendation method, device and computer readable storage medium
EP4113329A1 (en) Method, apparatus and device used to search for content, and computer-readable storage medium
US11126682B1 (en) Hyperlink based multimedia processing
CN113779381B (en) Resource recommendation method, device, electronic equipment and storage medium
CN111918104A (en) Video data recall method and device, computer equipment and storage medium
Zhuang et al. Data summarization with social contexts
CN114186130A (en) Big data-based sports information recommendation method
CN112579822A (en) Video data pushing method and device, computer equipment and storage medium
Schinas et al. Mgraph: multimodal event summarization in social media using topic models and graph-based ranking
CN111523053A (en) Information flow processing method and device, computer equipment and storage medium
CN114490923A (en) Training method, device and equipment for similar text matching model and storage medium
CN113590898A (en) Data retrieval method and device, electronic equipment, storage medium and computer product
CN110275986B (en) Video recommendation method based on collaborative filtering, server and computer storage medium
CN109756759B (en) Bullet screen information recommendation method and device
Dong et al. When Newer is Not Better: Does Deep Learning Really Benefit Recommendation From Implicit Feedback?
CN112035740A (en) Project use duration prediction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination