CN110941740A - Video recommendation method and computer-readable storage medium


Info

Publication number
CN110941740A
Authority
CN
China
Prior art keywords
video
node
user
videos
feature
Prior art date
Legal status
Granted
Application number
CN201911088281.XA
Other languages
Chinese (zh)
Other versions
CN110941740B (en)
Inventor
刘祺
谢若冰
刘书凯
张博
林乐宇
陈磊
Current Assignee
Shenzhen Yayue Technology Co ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911088281.XA
Publication of CN110941740A
Application granted
Publication of CN110941740B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the application discloses a video recommendation method and device. The method comprises the following steps: in response to a target video selected by a trigger operation on a terminal homepage, the terminal jumps to a video page associated with the target video; recommendation tags for video recommendation of the target video are acquired, the recommendation tags being obtained by ranking the candidate tags contained in the target video according to the user's historical viewing behavior data and screening the candidate tags according to the ranking result; the recommendation tags are displayed on the video page; and in response to a recommendation tag selected by a trigger operation on the video page, the terminal jumps to a recommended video page associated with that tag. According to this technical scheme, video recommendation for the target video is performed according to the ranking of its candidate tags, the user's degree of interest is taken into account, and accurate personalized recommendation can be performed for the target video.

Description

Video recommendation method and computer-readable storage medium
Technical Field
The present application relates to the technical field of video recommendation, and in particular to a video recommendation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In a video recommendation system, videos usually carry video tags, which highlight the subject content of the videos from different aspects and granularities. Video tags can effectively reveal the associations between videos, so associated videos can be recommended according to their tags, for example by recommending videos that contain the same video tag.
For a video in a video recommendation system, a candidate tag set is generally generated according to information such as the video's content and title (the content of a video may be, for example, its images and audio); the candidate tags in the set are then ranked, and the top-ranked candidate tags are taken as the video tags of the video.
This generation process therefore focuses on mining static video content and ignores the personalized, user-related information in the video recommendation system, so it is difficult to capture the user's interests in different aspects during tag-based video recommendation, and hence difficult to achieve accurate personalized video recommendation.
Disclosure of Invention
In order to solve the technical problem that accurate personalized video recommendation is difficult to achieve in the prior art, embodiments of the present application provide a video recommendation method, apparatus, electronic device, and computer-readable storage medium. The video tags are ranked and screened based on the user's historical viewing behavior, and video recommendation is performed according to the screened tags, which greatly improves the accuracy of personalized video recommendation.
The technical solution adopted by the present application is as follows:
A video recommendation method, comprising: in response to a target video selected by a trigger operation on a terminal homepage, jumping, by the terminal, to a video page associated with the target video; acquiring recommendation tags for video recommendation of the target video, the recommendation tags being obtained by ranking candidate tags contained in the target video according to the user's historical viewing behavior data and screening the candidate tags according to the ranking result; displaying the recommendation tags on the video page; and jumping to a recommended video page associated with a recommendation tag selected by a trigger operation on the video page.
A video recommendation apparatus, comprising: a first page jump module, configured to cause the terminal to jump to a video page associated with a target video selected by a trigger operation on the terminal homepage; a recommendation tag acquisition module, configured to acquire recommendation tags for video recommendation of the target video, the recommendation tags being obtained by ranking candidate tags contained in the target video according to the user's historical viewing behavior data and screening the candidate tags according to the ranking result; a recommendation tag display module, configured to display the recommendation tags on the video page; and a second page jump module, configured to jump to a recommended video page associated with a recommendation tag selected by a trigger operation on the video page.
An electronic device comprising a processor and a memory, the memory having stored thereon computer readable instructions which, when executed by the processor, implement a video recommendation method as described above.
A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform a video recommendation method as described above.
The technical scheme adopted by the application has the following beneficial effects:
in this technical solution, recommendation tags for video recommendation of the target video are displayed on the video page corresponding to the target video. The recommendation tags are obtained by ranking the candidate tags contained in the target video according to the user's historical viewing behavior data and screening the candidate tags according to the ranking result, so they largely match the user's interest in watching videos; when video recommendation is performed according to these tags, the recommended videos obtained match the user's viewing interests, so accurate personalized video recommendation can be achieved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be derived from these drawings without inventive effort for a person skilled in the art. In the drawings:
FIG. 1 is a schematic illustration of an implementation environment to which the present application relates;
FIG. 2 is a flow diagram illustrating a method of video recommendation in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a terminal page in accordance with an exemplary embodiment;
FIG. 4 is a flow chart of one embodiment of step 120 in the embodiment shown in FIG. 2;
FIG. 5 is a flow chart of one embodiment of step 220 in the embodiment shown in FIG. 4;
FIG. 6 is a flow chart of another embodiment of step 120 in the embodiment shown in FIG. 2;
FIG. 7 is a flow chart of one embodiment of step 320 in the embodiment of FIG. 6;
FIG. 8 is a flowchart of one embodiment of step 322 in the embodiment shown in FIG. 7;
FIG. 9 is a flowchart of one embodiment of step 323 in the embodiment of FIG. 7;
FIG. 10 is a flowchart of one embodiment of step 324 in the embodiment of FIG. 7;
FIG. 11 is a schematic diagram of an exemplary implementation scenario according to the present application;
FIG. 12 is a block diagram illustrating a video recommendation device in accordance with an exemplary embodiment;
FIG. 13 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment related to the present application, where the implementation environment is a video recommendation system, and the video recommendation system includes a terminal 100 and a server 200.
The terminal 100 and the server 200 are connected to each other in advance through a network, and data transmission is performed through the network, for example, transmission of information required in a video recommendation process is performed. The network may be a wired network or a wireless network, and is not limited herein.
A video recommendation client is installed on the terminal 100, and a plurality of videos are displayed on the client's homepage. When the user clicks a video on the homepage, the client jumps to the video page associated with that video, which also displays the video's tags; when the user clicks a video tag, the client jumps to the recommended video page associated with that tag, which displays the recommended videos associated with the clicked tag, and clicking a recommended video plays it. The terminal 100 may be any electronic device capable of installing and running the video recommendation client, such as a smartphone, a tablet computer, or a computer.
The server 200 provides data support for personalized video recommendation in the video recommendation client. It may be a single server or a server cluster composed of a plurality of servers, which is not limited here.
As described above, in the existing video recommendation system, it is difficult to capture the interests of the user in different aspects in the video recommendation process according to the video tags, and it is difficult to ensure the accuracy in personalized video recommendation. In order to solve the technical problem, the present application provides a video recommendation method on one hand and a video recommendation device on the other hand, so as to implement accurate personalized video recommendation.
Referring to fig. 2, fig. 2 is a flow chart illustrating a video recommendation method according to an exemplary embodiment, which is applicable to the terminal 100 in the implementation environment shown in fig. 1. As shown in fig. 2, in an exemplary embodiment, the video recommendation method at least includes the following steps:
Step 110: in response to a target video selected by a trigger operation on the terminal homepage, the terminal jumps to the video page associated with the target video.
The terminal homepage usually displays a video list, which changes with page-turning or sliding operations triggered by the user so that more videos can be shown.
When the user selects a video in the list by clicking or another trigger operation, that video is determined to be the target video, and the terminal jumps to the video page associated with it. This page is usually the playing page of the target video; after the jump, the target video may be played automatically, played once a playing operation triggered by the user is detected, or played automatically after a period of time, which is not limited here.
Step 120: acquire recommendation tags for video recommendation of the target video, the recommendation tags being obtained by ranking the candidate tags contained in the target video according to the user's historical viewing behavior data and screening the candidate tags according to the ranking result.
It should first be noted that the candidate tags of the target video are its known video tags, which describe its main content from different aspects and granularities; there are therefore usually several of them.
The candidate tags may be all of the target video's video tags, or a subset obtained by preliminarily screening all of its video tags in some manner, which is not limited here.
The user's historical viewing behavior data records the user's history of watching videos on the terminal, so the videos the user has watched can be obtained from it, and personalized information such as the user's viewing interests can be mined from them.
Therefore, by ranking the candidate tags contained in the target video according to the user's historical viewing behavior data and screening them according to the ranking result, the recommendation tags obtained are essentially a subset of the target video's tags that largely matches personalized information such as the user's viewing interests.
In this embodiment, the recommendation tags are used for video recommendation for the target video. They may be obtained by the terminal itself, i.e., by the terminal ranking and screening the candidate tags according to the user's historical viewing behavior data, or they may be obtained by the terminal from a server, which is not limited here.
Step 130: display the recommendation tags on the video page.
Step 140: in response to a recommendation tag selected by a trigger operation on the video page, jump to the recommended video page associated with that tag.
The recommended videos associated with the recommendation tag are displayed on the recommended video page, and all of them contain the tag selected by the trigger operation.
As described above, since the recommendation tags largely match personalized user information such as viewing interests, the recommended videos displayed on the page the terminal jumps to match the user's interests; that is, they are videos the user is interested in. Compared with conventional video recommendation, which recommends videos directly according to a video tag shared with the target video, the recommendation process based on recommendation tags fully considers the user's interests, so accurate personalized video recommendation can be achieved.
It should also be mentioned that, since the recommendation tags are displayed on the video page associated with the target video, the user can select a tag of interest by a trigger operation to obtain recommended videos; the user's interest is thus considered again in the recommendation process, further improving its accuracy. To explain the video recommendation method provided by this embodiment more clearly, it is described below in an exemplary application scenario.
Referring to fig. 3, fig. 3 is a schematic diagram of a terminal interface according to an exemplary embodiment. Fig. 3a shows a terminal homepage on which a plurality of videos are displayed. When the user clicks "video 4" on the homepage, the terminal jumps to the video page shown in fig. 3b, which displays the video title together with three recommendation tags. These are the top three tags obtained after all or some of the video tags contained in video 4 are ranked according to the historical viewing behavior data of the user corresponding to the terminal.
When the user clicks recommendation tag 2 on the video page shown in fig. 3b, the terminal jumps to the video recommendation page shown in fig. 3c, which displays a plurality of videos recommended for video 4 according to recommendation tag 2; these videos all contain recommendation tag 2 and match the user's interests.
It should be noted that, on the video page shown in fig. 3b, after the user clicks any of the recommendation tags, the terminal jumps to the video recommendation page corresponding to that tag, and the videos displayed there all match the user's interests, so accurate personalized video recommendation can be achieved.
Referring to fig. 4, fig. 4 is a flowchart of one embodiment of step 120 in the embodiment shown in fig. 2. In this embodiment, acquiring the recommendation tags for video recommendation of the target video may include the following steps:
Step 210: determine the feature representations of the candidate tags in the target video.
The feature representation of a candidate tag expresses the tag's characteristics and content in the form of a feature vector. In one embodiment, a heterogeneous network is established in advance for the video recommendation system; it contains the feature representations of the video tags of all videos in the system, so the feature representation of each candidate tag of a target video can be determined directly from the heterogeneous network.
Of course, the feature representations of the candidate tags may also be determined in other ways; for example, when the video recommendation system generates video tags for a video, it may also generate a feature representation for each tag, so that the target video's tags and their feature representations can be determined directly.
Step 220: acquire the video tags of the videos the user has watched according to the user's historical viewing behavior data, and construct a user feature representation from these tags.
As described above, the user's historical viewing behavior data records the user's history of watching videos on the terminal, so the videos the user has watched, and in turn their video tags, can be obtained from it.
Since video tags describe the main content of a video from different aspects and granularities, the user's interests hidden in the watched videos can be mined from their tags in combination with the user's historical viewing behavior, and a user feature representation can be constructed accordingly to describe the user's viewing interests. Recommending videos based on these interests effectively improves the accuracy of personalized recommendation.
Step 230: calculate the similarity between the user feature representation and the feature representation of each candidate tag.
As described above, the user feature representation and the feature representations of the candidate tags are feature vectors for the user information and the tag information, respectively, so calculating the similarity between them is essentially calculating the similarity between two feature vectors.
Since the user feature representation describes the user's viewing interests and a candidate tag's feature representation describes the tag's characteristics and content, the similarity between the two reflects how well the videos associated with the candidate tag match the user's interests. The videos associated with a candidate tag are the videos containing it.
The higher the similarity between the user feature representation and a candidate tag's feature representation, the more interested the user is in the videos associated with that tag; if these videos are recommended to the user, the user's interest in the recommended videos will be higher, and the recommendation more accurate.
Step 240: rank the candidate tags according to the obtained similarities, and screen them according to the ranking result to obtain the recommendation tags.
As described above, the higher the similarity obtained in step 230, the more accurate a recommendation based on the corresponding candidate tag will be. The candidate tags of the target video are therefore ranked by similarity, the recommendation tags are screened from them according to the ranking result, and video recommendation for the target video is performed according to the recommendation tags, which effectively improves the accuracy of personalized recommendation.
For example, on the video page shown in fig. 3b, video 4 is the target video. After the feature representations of its candidate tags are determined, the user feature representation is constructed from the tags of the videos the user has watched; the similarity between the user feature representation and each candidate tag is then calculated, the candidate tags are ranked by similarity, and the top three are taken as the recommendation tags to be displayed. By clicking any displayed recommendation tag, the user obtains recommended videos of interest, so the method of this embodiment achieves accurate personalized recommendation for the target video; a minimal sketch of this ranking step follows the next paragraph.
In prior implementations, video tag ranking is applied only during tag generation, not during actual recommendation. This embodiment generates an accurate, personalized ordered list of video tags for a video, attracting the user to click a tag and jump to its corresponding video page, where more videos related to the tag can be watched, thereby improving metrics such as the click-through rate, playing time, and page-turning volume of the overall recommendation system.
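To make steps 230 and 240 concrete, the following is a minimal sketch of the ranking procedure. The patent does not name a similarity measure, so the cosine similarity, the function names, and the top-3 cutoff (taken from the three-tag example in fig. 3b) are illustrative assumptions rather than the patented implementation.
```python
# Illustrative sketch, not the patent's reference implementation: rank the
# candidate tags of the target video by similarity to the user feature
# representation and keep the top-ranked ones as recommendation tags.
import numpy as np

def rank_candidate_tags(user_vec, candidate_vecs, top_k=3):
    """user_vec: user feature representation (1-D array);
    candidate_vecs: dict mapping candidate tag -> feature representation."""
    def cosine(a, b):  # the similarity measure itself is an assumed choice
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {tag: cosine(user_vec, vec) for tag, vec in candidate_vecs.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)  # step 240: rank
    return ranked[:top_k]                                  # screen by rank
```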
FIG. 5 is a flow chart of one embodiment of step 220 in the embodiment shown in FIG. 4. As shown in fig. 5, in an exemplary embodiment, acquiring the video tags of the videos the user has watched according to the user's historical viewing behavior data and constructing the user feature representation from them may include the following steps:
Step 221: obtain the videos the user has watched, and their video tags, from the user's historical viewing behavior data.
Step 222: determine the weight of each video tag according to the affiliation relationship between the tag and the watched videos, the completeness with which the user watched each video, and the watching time.
The affiliation relationship between video tags and watched videos is the correspondence between videos and their tags; the same tag can belong to several videos at once. For example, if video tag C is contained in both video A and video B, then tag C belongs to both videos.
The completeness with which the user watched a video is the ratio of the video's playing time to its total duration; the higher the completeness, the higher the user's interest in that video.
The watching time of a video is the natural time at which the user watched it, i.e., its historical playing time. It can highlight the user's short-term interests to some extent; for example, the user is usually more interested in recently watched videos than in videos watched long ago.
The user's interest differs across the videos watched in the past, and this difference is reflected by the completeness and the watching time. Based on the affiliation relationship between tags and videos, different tags therefore reflect the user's interests to different degrees and should carry different weights. Tags that better reflect the user's interests can then be screened by weight, and the user feature representation can be constructed more accurately from the screened tags.
In one embodiment, the videos watched by the user, obtained from the user's historical viewing behavior data and sorted by watching time, form a video sequence $\{v_1, v_2, \ldots, v_m\}$. Taking the video tags of all videos in this sequence (with duplicates removed) yields a plurality of video tags, and the weight of the $i$-th tag is computed as
$w_i = \sum_{j=1}^{m} x(i,j) \cdot complete_j \cdot time_j$
where $i$ indexes the obtained video tags, $j$ indexes the videos in the sequence, $x(i,j) = 1$ if and only if the $i$-th tag belongs to video $v_j$ and is 0 otherwise, $complete_j$ is the completeness with which the user watched video $v_j$, and $time_j$ is the time decay factor of $v_j$.
The time decay factor $time_j$ is related to the watching time of video $v_j$; illustratively, it is computed recursively as
$time_j = \eta \cdot time_{j+1}$
where $\eta \in (0,1)$ is a time factor and $time_m = 1$. The earlier a video was watched, the smaller its time decay factor, which in turn lowers the weights of the tags it contains to a certain extent, so the user's short-term interests are highlighted.
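A minimal sketch of this weight computation, under the formulas as reconstructed above; the function name, the watch-record layout, and the value of η are illustrative assumptions.
```python
# Sketch of the tag-weight formula w_i = sum_j x(i,j) * complete_j * time_j,
# with the recursive decay time_j = eta * time_{j+1} and time_m = 1, which
# closes to time_j = eta ** (m - j) for videos ordered oldest to newest.
def tag_weights(video_seq, eta=0.9):
    """video_seq: list of (tags, completeness) pairs for v_1..v_m, ordered by
    watching time (oldest first); eta is the time factor, eta in (0, 1)."""
    m = len(video_seq)
    weights = {}
    for j, (tags, complete_j) in enumerate(video_seq):
        time_j = eta ** (m - 1 - j)   # time decay factor of video v_j
        for tag in tags:              # x(i, j) = 1 exactly for the tags of v_j
            weights[tag] = weights.get(tag, 0.0) + complete_j * time_j
    return weights
```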
Step 223: screen the video tags according to the obtained weights to obtain the user tag set.
As mentioned above, the weights of the video tags represent the user's interest when watching videos, so the tags can be screened by the weights obtained in step 222 to form the user tag set. The tags in this set all carry large weights and thus reflect the user's interests well, so the user feature representation can be constructed from them more accurately.
For example, the tags whose weight exceeds a preset threshold may be selected as the user tag set, or the tags may be sorted by weight and those within a specified rank selected; this is not limited here.
Step 224: construct the user feature representation from the feature representation and weight of each video tag in the user tag set.
The feature representation of each tag in the user tag set may be obtained from the pre-constructed heterogeneous network, or directly in other ways, which is not repeated here.
For each video tag in the user tag set, a weight factor is first computed from its weight:
$\alpha_i = \dfrac{w_i}{\sum_{t_j \in T_u} w_j}$
where $T_u$ denotes the user tag set and $t_i \in T_u$ denotes the $i$-th video tag in $T_u$.
The user feature representation is then constructed from the weight factors and the feature representations of the tags in the user tag set:
$f_u = \sum_{t_i \in T_u} \alpha_i \cdot \mathbf{t}_i$
where $\mathbf{t}_i$ denotes the feature representation of the $i$-th video tag in $T_u$.
As these formulas show, the user feature representation fuses the feature representation and weight factor of every tag in the user tag set; the weight factors reflect the user's interests, and the tag feature representations reflect the main content of videos from different aspects and granularities, so the user feature representation fuses the user's various interests and improves the accuracy of personalized video recommendation. A sketch of this construction follows.
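A minimal sketch of steps 223-224 as described above; the top-n screening rule (one of the two options mentioned in step 223) and all names are illustrative assumptions.
```python
# Sketch of steps 223-224: screen the highest-weight tags into the user tag
# set T_u, normalize their weights into factors alpha_i, and form the user
# feature representation f_u as the weighted sum of tag representations.
import numpy as np

def user_feature(weights, tag_vecs, top_n=20):
    """weights: tag -> w_i (e.g. from tag_weights); tag_vecs: tag -> vector."""
    user_tags = sorted(weights, key=weights.get, reverse=True)[:top_n]
    total = sum(weights[t] for t in user_tags)
    alpha = {t: weights[t] / total for t in user_tags}      # weight factors
    return sum(alpha[t] * tag_vecs[t] for t in user_tags)   # f_u
```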
In another exemplary embodiment, a heterogeneous network is constructed in advance for the video recommendation system, so that the feature representations of the candidate tags of the target video, and of the video tags in the user tag set, can be determined directly from it.
As shown in fig. 6, in an exemplary embodiment, the method further includes the steps of:
and step 310, constructing a heterogeneous network by taking heterogeneous information related to the video in the video recommendation system as nodes and taking the incidence relation between the heterogeneous information as edges.
It should be noted that the video recommendation system described in this embodiment generally refers to any video system supporting video playing, and refers to video recommendation in the process of playing video to recommend relevant playing video for a user, for example, a video playing client installed in the terminal 100 in the implementation environment shown in fig. 1.
The video in the video recommendation system not only includes the target video to be subjected to video recommendation described in step 210, but also includes the video watched by the user obtained according to the user historical watching behavior data described in step 220.
In one embodiment, the heterogeneous information related to the video in the video recommendation system includes videos published in the video recommendation system, video tags of the videos published by the video recommendation system, media accounts for publishing the videos, and user groups of the video recommendation system. Therefore, the heterogeneous network constructed in this embodiment will include four different types of nodes, namely, video nodes, tag nodes, media nodes, and user group nodes.
Video nodes are the main nodes of the heterogeneous network; the other nodes are obtained by expanding from them. Tag nodes carry both the concrete content of videos and latent content, representing the user's different potential interests in a video from different aspects and granularities. Because the interaction frequency between individual users is low, user groups rather than individual users are chosen as nodes in this embodiment, to alleviate the sparsity of user-related interactions in the heterogeneous network.
A user group is obtained by clustering individual users of the video recommendation system; for example, in this embodiment, the different user groups are obtained by clustering individual users according to three attributes: gender, age, and location.
Correspondingly, the association relationships between the pieces of heterogeneous information include the associated-playing relationship between videos, the affiliation relationship between videos and video tags, the publishing relationship between videos and media accounts, the effective-viewing relationship between videos and user groups, and the common-affiliation relationship between video tags. These relationships represent the interactions among the heterogeneous information.
The associated-playing relationship between videos means that different videos are related in the playing process; for example, two videos that are played consecutively have an associated-playing relationship, which reflects the user's preference behavior to some extent. In one embodiment, the associated-playing relationship between videos may be determined according to the following steps:
the method comprises the steps of determining effectively watched videos in a video recommendation system according to the completeness of the watched videos, sequencing the effectively watched videos according to the watching time of the videos, and determining that two adjacent sequenced videos have an associated playing relation.
For example, videos watched by more than 30% of the total duration of the videos in the video recommendation system are determined as effectively watched videos, and the videos are sorted according to the watching time of the videos to obtain effectively watched video sequences, so that it can be determined that two videos which appear adjacent to each other in the video sequences have an associated playing relationship, and a relationship edge should exist between corresponding video nodes in a heterogeneous network.
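A minimal sketch of this edge-derivation rule; the 30% threshold comes from the example above, and the watch-record layout is an assumption.
```python
# Sketch: keep effectively watched videos (played beyond min_ratio of their
# duration), sort by watching time, and link each pair of adjacent videos.
def associated_play_edges(watch_records, min_ratio=0.3):
    """watch_records: iterable of (video_id, watch_time, played_sec, total_sec)."""
    effective = [r for r in watch_records if r[2] / r[3] > min_ratio]
    effective.sort(key=lambda r: r[1])            # order by watching time
    return [(a[0], b[0]) for a, b in zip(effective, effective[1:])]
```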
For any video in the video recommendation system, each video tag it contains has an affiliation relationship with it, so the video's node in the heterogeneous network has a relationship edge to each of the corresponding tag nodes.
Similarly, a publishing relationship exists between a media account and every video it publishes in the video recommendation system, so a relationship edge should exist between the corresponding media node and video node in the heterogeneous network. If a video is effectively watched by a user group, a relationship edge should exist between the corresponding video node and user group node. If two video tags belong to the same video, the corresponding tag nodes also have a relationship edge.
In one embodiment, a video is considered effectively watched by a user group if the number of times it is effectively watched by the group within a preset time period exceeds a preset value.
In another embodiment, the heterogeneous information used to construct nodes in the heterogeneous network may further include text information contained in video titles, the user's portrait information, video categories the user is interested in, and so on, yielding a more complex heterogeneous network; this does not limit the specific form of the heterogeneous information.
The heterogeneous network constructed by the embodiment can capture the interactive relations among videos, video tags, users and media in the video recommendation system, and the interactive relations also contain rich user preference information, so that the feature representation of the video tags determined based on the heterogeneous network also carries the user preference information, and further contributes to the accuracy of the personalized video recommendation process.
Step 320: determine the feature representation of each node according to the feature information of the node and its neighbor nodes in the heterogeneous network.
A neighbor node is another node that shares a relationship edge with a given node in the heterogeneous network; any node in the network has at least one neighbor node.
As described above, the interaction relationship between nodes in the heterogeneous network also includes rich user preference information, so that the feature representation of each node determined according to the feature information of each node and its neighboring nodes in the heterogeneous network will be merged with the user preference information.
The feature representations of the candidate tags of the target video determined directly from the heterogeneous network in step 210, and those of the video tags in the user tag set determined in step 224, therefore carry user preference information, so the similarity ranking obtained in step 240 from these representations considers the user's personalized information as much as possible, achieving accurate personalized video recommendation.
FIG. 7 is a flow chart of one embodiment of step 320 in the embodiment of FIG. 6. As shown in fig. 7, in an exemplary embodiment, determining the feature representation of each node according to the feature information of each node and its neighboring nodes in the heterogeneous network may include the following steps:
step 321, projecting each node in the heterogeneous network to the same semantic space, and obtaining feature codes of each node on the semantic space.
The method for projecting the nodes in the heterogeneous network to the same semantic space includes the steps of extracting semantic features of the nodes in the heterogeneous network through the same semantic model, and accordingly obtaining feature codes of the nodes on the same semantic space.
For example, the feature coding of each node in the semantic space may be one-hot (one coding mode) vector representation of each node, or may be vector representation obtained by coding in other coding manners, which is not described herein again.
Step 322, the original feature vector of each node is obtained by serially connecting feature codes of neighbor nodes of each node.
The feature codes of all nodes in the heterogeneous network carry semantic information of all nodes, so that the original feature vectors obtained by connecting the feature codes of the neighbor nodes in series aiming at any node in the heterogeneous network carry the semantic information of all neighbor nodes.
In one embodiment, as shown in fig. 8, step 322 may specifically include the following steps:
step 3221, for each node in the heterogeneous network, the neighboring nodes need to be grouped according to the node type corresponding to the neighboring node of the node.
First, it should be noted that, in this embodiment, a process of acquiring an original feature vector of one node in a heterogeneous network is described, and the process of acquiring original feature vectors of other nodes in the heterogeneous network is the same, which is not described in detail in this embodiment.
The node types corresponding to the nodes corresponding to the neighboring nodes correspond to the node types included in the heterogeneous network, and as described above, in one embodiment, the node types of the neighboring nodes include a video node, a tag node, a media node, and a user group node.
Thus, for node k in the heterogeneous network, n neighbor nodes { h ] of node k can be determined1,h2,……,hnAnd then, according to the node type corresponding to each neighbor node, dividing the neighbor nodes corresponding to the same node type into a group. In a general case, the neighbor nodes may be divided into a video node group, a tag node group, a media node group, and a user group node group.
Step 3222, by concatenating the feature codes of the neighboring nodes in each group, a concatenated code corresponding to each group is obtained.
And (3) connecting the feature codes of the neighbor nodes in each group in series, namely splicing the feature codes of each neighbor node in each group to obtain the serial codes corresponding to each group.
The concatenated codes of the groups can be written as $c_k^{v}$, $c_k^{t}$, $c_k^{m}$, and $c_k^{g}$, denoting the concatenated codes of the video, tag, media, and user group neighbors of node $k$, respectively.
Step 3223: concatenate the concatenated codes of the groups to obtain the original feature vector of the node.
For a node $k$ in the heterogeneous network, concatenating the groups' codes gives the original feature vector
$x_k = c_k^{v} \,\|\, c_k^{t} \,\|\, c_k^{m} \,\|\, c_k^{g}$
where $\|$ denotes the concatenation operation between codes.
In summary, the original feature vector carrying the semantic information of all neighbor nodes is obtained by grouping the neighbors of a node in the heterogeneous network, concatenating the feature codes of the neighbors within each group, and finally concatenating the groups' concatenated codes; a sketch follows.
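A minimal sketch of steps 3221-3223; the fixed group order and the silent skipping of empty groups are assumptions, since the patent does not state how a node with no neighbors of some type is handled.
```python
# Sketch of steps 3221-3223: group node k's neighbors by type, concatenate
# feature codes within each group, then concatenate the group codes into the
# node's original feature vector x_k.
import numpy as np

NODE_TYPES = ("video", "tag", "media", "user_group")  # order assumed fixed

def original_feature_vector(neighbors, feature_codes):
    """neighbors: list of (node_id, node_type) for node k's neighbor nodes;
    feature_codes: node_id -> 1-D feature code from the semantic space."""
    groups = {t: [] for t in NODE_TYPES}
    for node_id, node_type in neighbors:          # step 3221: group by type
        groups[node_type].append(feature_codes[node_id])
    group_codes = [np.concatenate(groups[t])      # step 3222: within-group
                   for t in NODE_TYPES if groups[t]]
    return np.concatenate(group_codes)            # step 3223: across groups
```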
Step 323: aggregate the original feature vectors of each node's neighbor nodes to obtain the node's aggregated feature vector.
For each node in the heterogeneous network, aggregating the original feature vectors of its neighbors yields an aggregated feature vector that contains information about the mutual influence between different neighbors under the different node types.
In one embodiment, as shown in FIG. 9, step 323 may include the steps of:
step 3231, according to a preset first weight matrix, calculating attention of each node in the heterogeneous network with respect to a neighboring node corresponding to each node type.
It should be noted that, this embodiment will also describe an obtaining process of the aggregated feature vector of one node in the heterogeneous network, and the obtaining processes of the aggregated feature vectors of other nodes in the heterogeneous network are similar and are not described herein again.
Still taking node $k$ in the heterogeneous network as an example, the preset first weight matrix contains a weight vector for each node type and can be written as $W_1 = \{w^{v}, w^{t}, w^{m}, w^{g}\}$, where $w^{v}$, $w^{t}$, $w^{m}$, and $w^{g}$ denote the weight vectors corresponding to the video, tag, media, and user group node types, respectively.
For one node type, say the video node type, the weight vector $w^{v}$ is first used to compute the degree of importance $e_{k,i}^{v}$ between node $k$ and its $i$-th video-type neighbor (i.e., video $v_i$):
$e_{k,i}^{v} = (w^{v})^{\top} \,[\, x_k \,\|\, x_{v_i} \,]$
where $\top$ denotes the matrix transpose.
Then, from the obtained importance $e_{k,i}^{v}$, the attention of node $k$ to its $i$-th video-type neighbor is computed as
$\alpha_{k,i}^{v} = \dfrac{\exp(e_{k,i}^{v})}{\sum_{j=1}^{n} \exp(e_{k,j}^{v})}$
according to the calculation process, the attention of each node in the heterogeneous network relative to the neighbor node corresponding to each node type can be calculated.
Step 3232: aggregate the attention values corresponding to the same node type with the original feature vectors of the corresponding neighbor nodes to obtain the node's type-aggregated feature vector under each node type.
Still taking node $k$, which has $n$ neighbor nodes of the video node type, as an example, aggregating the attention of each video-type neighbor with its original feature vector can be expressed as
$z_k^{v} = \sum_{i=1}^{n} \alpha_{k,i}^{v} \cdot x_{v_i}$
That is, the type-aggregated feature vector of node $k$ under the video node type is obtained by multiplying each video-type neighbor's original feature vector by its attention and summing the products over all such neighbors.
In the same way, the type-aggregated feature vectors of each node in the heterogeneous network under the other node types can be computed; a sketch of steps 3231-3232 follows.
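A minimal sketch of steps 3231-3232 for a single node type. The importance score $(w^{v})^{\top}[x_k \| x_{v_i}]$ follows the reconstruction above and is itself an assumption, the original formula being available only as an image; the softmax matches the described attention normalization.
```python
# Sketch of steps 3231-3232: score each same-type neighbor against node k with
# the type's weight vector, softmax the scores into attention values, and form
# the type-aggregated vector z_k as the attention-weighted sum of neighbors.
import numpy as np

def type_aggregate(w_type, x_k, neighbor_vecs):
    """w_type: weight vector for this node type (from the first weight matrix);
    x_k: original feature vector of node k; neighbor_vecs: (n, d) array of the
    original feature vectors of node k's neighbors of this type."""
    # importance e_{k,i} = w^T [x_k || x_i]  (reconstructed, assumed form)
    e = np.array([w_type @ np.concatenate([x_k, x_i]) for x_i in neighbor_vecs])
    att = np.exp(e - e.max())
    att /= att.sum()                  # attention via softmax over neighbors
    return att @ neighbor_vecs        # z_k for this node type
```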
Step 3233: according to a preset second weight matrix, aggregate each node's type-aggregated feature vectors under the different node types to obtain the node's aggregated feature vector.
This aggregation can be expressed as
$x_k^{agg} = \mathrm{ReLU}\big(W_{neigh}\,[\, z_k^{v} \,\|\, z_k^{t} \,\|\, z_k^{m} \,\|\, z_k^{g} \,]\big)$
where $x_k^{agg}$ denotes the aggregated feature vector of node $k$, $\mathrm{ReLU}$ is the ReLU nonlinear activation function, $W_{neigh}$ is the second weight matrix, and $z_k^{t}$, $z_k^{m}$, and $z_k^{g}$ are the type-aggregated feature vectors of node $k$ under the tag, media, and user group node types, respectively.
The aggregated feature vectors of the other nodes in the heterogeneous network are obtained in the same manner, which is not repeated here; a sketch follows.
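A minimal sketch of step 3233; concatenating the four type-aggregated vectors before applying $W_{neigh}$ follows the reconstruction above and is an assumption.
```python
# Sketch of step 3233: combine the per-type aggregated vectors and apply the
# second weight matrix W_neigh with a ReLU to get the aggregated feature vector.
import numpy as np

def aggregate_types(W_neigh, z_video, z_tag, z_media, z_group):
    z = np.concatenate([z_video, z_tag, z_media, z_group])  # assumed combination
    return np.maximum(W_neigh @ z, 0.0)                     # ReLU(W_neigh z)
```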
Step 324: obtain the feature representation of each node by aggregating the node's aggregated feature vector with its original feature vector.
Once the aggregated feature vector of each node in the heterogeneous network is obtained, it is aggregated with the node's original feature vector. The resulting feature representation combines the node's own feature information with that of its neighbor nodes, so each node carries more comprehensive feature information and its feature representation is more accurate.
In one embodiment, as shown in FIG. 10, step 324 may include the steps of:
Step 3241: perform a weighting operation on each node's original feature vector with a preset third weight matrix to obtain the node's weighted feature vector.
Step 3242: obtain the feature representation of each node by aggregating the node's weighted feature vector with its aggregated feature vector.
Still taking node $k$ as an example, with the third weight matrix denoted $W_{self}$ and the original feature vector of node $k$ denoted $x_k$, the weighted feature vector of node $k$ is computed as
$x_k^{self} = W_{self} \, x_k$
and aggregating the weighted feature vector $x_k^{self}$ with the aggregated feature vector $x_k^{agg}$ gives the feature representation
$f_k = \lambda \, x_k^{self} + (1 - \lambda) \, x_k^{agg}$
The above process of obtaining the feature representation of each node in the heterogeneous network is a machine-learning process, and $\lambda$ is a hyper-parameter that is optimized as the feature representations are learned, so that the weighted and aggregated feature vectors of each node are combined under the optimal hyper-parameter to obtain the optimal feature representation; a sketch follows.
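A minimal sketch of steps 3241-3242, assuming the λ-interpolation reconstructed above; how λ enters the combination is not visible in the original formula image.
```python
# Sketch of steps 3241-3242: weight the original feature vector with W_self,
# then combine it with the aggregated vector; the lambda interpolation is an
# assumed reading of the garbled formula.
def node_feature(W_self, x_k, x_agg, lam):
    x_self = W_self @ x_k                       # step 3241: weighted vector
    return lam * x_self + (1.0 - lam) * x_agg   # step 3242: aggregation
```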
In an exemplary embodiment, the loss function $J$ used in learning the feature representation of each node in the heterogeneous network is
$J = -\sum_i \Big( \sum_{j \in N_i} \log \sigma(f_i^{\top} f_j) + \sum_{h \notin N_i} \log \sigma(-f_i^{\top} f_h) \Big)$
where, in the heterogeneous network, node $j$ is a neighbor of node $i$, node $h$ is a non-neighbor of node $i$, $f$ denotes the finally learned feature representation, $N_i$ denotes the set of neighbor nodes of node $i$, and $\sigma(\cdot)$ denotes the sigmoid activation function.
This loss function shows that the machine-learning process makes each node's feature representation similar to those of its neighbor nodes and different from those of its non-neighbor nodes.
Therefore, the feature representation of each node in the heterogeneous network obtained according to this embodiment combines the node's own feature information with that of its neighbors and carries rich user preference information, which is very beneficial to accurate personalized video recommendation; a sketch of the loss follows.
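A minimal sketch of the reconstructed loss; pairing each node with sampled non-neighbors is an assumption, since the patent does not state how the non-neighbor nodes h are chosen.
```python
# Sketch of the loss J: pull neighbor representations together and push
# non-neighbor representations apart through the sigmoid of dot products.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_loss(f, neighbor_pairs, negative_pairs):
    """f: node_id -> learned feature vector; neighbor_pairs: (i, j) with j a
    neighbor of i; negative_pairs: (i, h) with h a (sampled) non-neighbor."""
    pos = sum(np.log(sigmoid(f[i] @ f[j])) for i, j in neighbor_pairs)
    neg = sum(np.log(sigmoid(-(f[i] @ f[h]))) for i, h in negative_pairs)
    return -(pos + neg)
```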
The video recommendation method provided by the present application is described below with a specific implementation scenario.
As shown in fig. 11, in an exemplary video recommendation scenario, video recommendation includes two processes of offline network construction and online tag ranking.
In the offline network construction process, the videos published in the video recommendation system, the video tags of these videos, the media accounts that publish them, and the user groups obtained by clustering are taken as nodes; the associated-playing relationships between videos, the affiliation relationships between videos and video tags, the publishing relationships between videos and media accounts, the effective-viewing relationships between videos and user groups, and the common-affiliation relationships between video tags are taken as edges; and the heterogeneous network is constructed from them. A machine-learning model is then used to extract the feature representations of all nodes in the constructed network, yielding a feature map of the heterogeneous network that contains the feature representation of every node. For the extraction process, see the embodiments shown in figs. 7 to 10, which are not repeated here. Because constructing the heterogeneous network and obtaining the feature map involve a huge amount of computation, both must be performed offline to meet the real-time requirements of personalized video recommendation.
In the online tag ranking process, during online video recommendation, the user feature representation is extracted from the user's historical viewing behavior data and contains information about the user's interests. The similarity between the user feature representation and the feature representation of each candidate tag of the target video is computed, the candidate tags are ranked according to the obtained similarities, and the recommendation tags used for video recommendation of the target video are determined from the ranking result. The tag feature representations required online can be obtained from the feature map of the heterogeneous network.
It should be noted that extracting the feature representations of the nodes in the heterogeneous network with the machine-learning model described in the embodiments of figs. 7 to 10 is the optimal implementation found through experiments.
In the experiments, the machine-learning model of the video recommendation system was run in a Random mode, a DeepWalk mode (a graph neural network algorithm), a GraphSAGE mode (another graph neural network algorithm), and an HGAT+user mode (the approach described in the embodiments of figs. 7 to 10 of this application); the experimental statistics are shown in Tables 1 and 2.
Experimental mode    Click rate    Inversion coefficient
Random               0.420         0.210
DeepWalk             0.593         0.330
GraphSAGE            0.596         0.335
HGAT+user            0.632         0.354

TABLE 1
Table 1 shows the experimental data obtained by applying the above experimental modes to the video recommendation system offline; both the click rate and the inversion coefficient of the HGAT+user mode are higher than those of the other modes.
Table 2 shows that the video-tag click data and the video viewing data in the HGAT+user mode are likewise the best, and therefore the video recommendation method provided by the present application can effectively improve the click rate, playing time, page-turning volume, and other indexes of the overall video recommendation.
[Table 2 appears as an image (BDA0002266082910000181) in the original publication; it reports the video-tag click data and the video viewing data for each experimental mode.]

TABLE 2
Fig. 12 is a block diagram illustrating a video recommendation apparatus suitable for use with the terminal 100 in the implementation environment shown in fig. 1 according to an example embodiment. As shown in fig. 12, the apparatus includes a first page jump module 410, a recommended tag acquisition module 420, a recommended tag presentation module 430, and a second page jump module 440.
The first page jump module 410 is configured to trigger the selected target video according to the home page of the terminal, so that the terminal jumps to a video page associated with the target video.
The recommended label obtaining module 420 is configured to obtain recommended labels for video recommendation of the target video, where the recommended labels are obtained by sorting the candidate labels contained in the target video according to the user's historical viewing behavior data and screening the candidate labels according to the sorting result.
The recommended label presentation module 430 is used for presenting recommended labels on a video page.
The second page jump module 440 is configured to jump to a recommended video page associated with a recommended tag according to the recommended tag triggered and selected on the video page.
In another exemplary embodiment, the recommended label obtaining module 420 specifically includes a candidate label feature determining module, a user feature representation constructing module, a similarity calculating module, and a candidate tag sorting module.
The candidate label feature determining module is used for determining the feature representations of the candidate labels in the target video.
The user feature representation constructing module is used for obtaining the video tags in the videos watched by the user according to the user's historical viewing behavior data, and for constructing the user feature representation from those video tags.
The similarity calculating module is used for calculating the similarity between the user feature representation and the feature representation of each candidate label.
The candidate tag sorting module is used for sorting the candidate tags according to the obtained similarities and screening the candidate tags according to the sorting result to obtain the recommended labels.
In another exemplary embodiment, the user feature representation construction module includes a video tag acquisition unit, a weight acquisition unit, a user tag set acquisition unit, and a feature construction unit.
The video tag acquisition unit is used for acquiring videos watched by the user and video tags in the watched videos from the historical watching behavior data of the user.
The weight obtaining unit is used for determining the weight of each video tag according to the affiliation relationship between the video tag and the videos watched by the user, the completeness with which the user watched those videos, and the watching time.
The user tag set acquisition unit is used for screening the video tags according to the weights to obtain a user tag set.
The feature construction unit is used for constructing the user feature representation according to the feature representation and weight of each video tag in the user tag set, as illustrated by the sketch below.
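For illustration, the sketch below shows how these four units could cooperate; the weighting formula (completeness multiplied by the logarithm of watching time) and the filtering threshold are assumptions made for the example, since the embodiment does not fix a particular formula.

```python
# Hedged sketch of user-representation construction: weight each watched tag by
# viewing completeness and watching time, filter low-weight tags to form the
# user tag set, then combine the kept tag vectors by weighted average.
import numpy as np

def build_user_representation(watch_records, tag_vecs, min_weight=0.3):
    # watch_records: list of (tag, completeness in [0, 1], watch_seconds)
    weights = {}
    for tag, completeness, seconds in watch_records:
        weights[tag] = weights.get(tag, 0.0) + completeness * np.log1p(seconds)
    kept = {t: w for t, w in weights.items() if w >= min_weight}  # user tag set
    total = sum(kept.values())
    return sum((w / total) * tag_vecs[t] for t, w in kept.items())

rng = np.random.default_rng(1)
tag_vecs = {t: rng.normal(size=8) for t in ["cooking", "travel", "gaming"]}
records = [("cooking", 0.9, 300), ("travel", 0.2, 30), ("gaming", 0.8, 120)]
print(build_user_representation(records, tag_vecs))
```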
In another exemplary embodiment, the apparatus further includes a heterogeneous network construction module and a node characteristic determination module.
The heterogeneous network construction module is used for constructing a heterogeneous network by taking the heterogeneous information related to videos in the video recommendation system as nodes and taking the association relationships between the heterogeneous information as edges, where the heterogeneous information contains the candidate tags and the video tags.
The node characteristic determining module is used for determining the characteristic representation of each node according to the characteristic information of each node and the neighbor nodes in the heterogeneous network.
In another exemplary embodiment, the heterogeneous information related to the video in the video recommendation system includes videos published in the video recommendation system, video tags of the published videos, media accounts for publishing the videos, and user groups in the video recommendation system, where the user groups are obtained by clustering users in the video recommendation system.
In another exemplary embodiment, the association relationship between the heterogeneous information includes an association play relationship between videos, an affiliation relationship between a video and a video tag, a publishing relationship between a video and a media account, an effective viewing relationship between a video and a user group, and a common affiliation relationship between video tags, wherein the association play relationship between videos is determined by:
the method comprises the steps of determining effectively watched videos in a video recommendation system according to the completeness of the watched videos, sequencing the effectively watched videos according to the watching time of the videos, and determining that two adjacent sequenced videos have an associated playing relation.
In another exemplary embodiment, the node feature determination module includes a semantic space mapping unit, a feature coding concatenation unit, an original feature vector concatenation unit, and a vector aggregation unit.
The semantic space mapping unit is used for projecting each node in the heterogeneous network into the same semantic space to obtain the feature code of each node in that semantic space (a sketch of this projection is given after this list of units).
The feature coding concatenation unit is used for obtaining the original feature vector of each node by concatenating the feature codes of the node's neighbor nodes.
The original feature vector concatenation unit is used for aggregating the original feature vectors of each node's neighbor nodes to obtain the aggregated feature vector of each node.
The vector aggregation unit is used for aggregating each node's aggregated feature vector with its original feature vector to obtain the feature representation of each node.
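To make the first of these units concrete, the sketch below projects raw features of different node types into one shared semantic space using a per-type linear map; the dimensions and the linear-map instantiation are assumptions made for the example.

```python
# Sketch of the semantic space mapping unit: each node type has raw features of
# a different size, and a per-type projection matrix maps them into one shared
# semantic space. The per-type linear maps are an assumed instantiation.
import numpy as np

rng = np.random.default_rng(5)
raw_dims = {"video": 32, "tag": 16, "account": 8, "user_group": 12}
shared_dim = 8
projections = {t: rng.normal(size=(shared_dim, d)) for t, d in raw_dims.items()}

def semantic_code(node_type, raw_features):
    """Feature code of a node in the shared semantic space."""
    return projections[node_type] @ raw_features

print(semantic_code("video", rng.normal(size=32)).shape)  # (8,)
```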
In another exemplary embodiment, the feature coding concatenation unit includes a node grouping subunit, an intra-group feature concatenation subunit, and an inter-group feature concatenation subunit.
The node grouping subunit is used for grouping, for each node in the heterogeneous network, the node's neighbor nodes according to the node types of those neighbor nodes.
The intra-group feature concatenation subunit is used for concatenating the feature codes of the neighbor nodes within each group to obtain a concatenated code corresponding to each group.
The inter-group feature concatenation subunit is used for concatenating the concatenated codes of the groups to obtain the original feature vector of the node.
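A hedged sketch of these three subunits is given below. Fixing a group order and padding each group to a constant number of neighbors are assumptions introduced so that all original feature vectors share one length; the embodiment itself does not specify padding.

```python
# Sketch of building a node's original feature vector: group the neighbors by
# node type, concatenate feature codes inside each group, then concatenate the
# per-group codes. Group order, group size, and zero padding are assumptions.
import numpy as np

def original_feature_vector(neighbors, codes, dim=4, per_type=2,
                            type_order=("video", "tag", "account", "user_group")):
    # neighbors: list of (node_id, node_type); codes: node_id -> semantic code
    groups = {t: [] for t in type_order}
    for node_id, node_type in neighbors:
        groups[node_type].append(codes[node_id])
    group_codes = []
    for t in type_order:                                   # intra-group concatenation
        vecs = groups[t][:per_type]
        vecs += [np.zeros(dim)] * (per_type - len(vecs))   # pad absent neighbors
        group_codes.append(np.concatenate(vecs))
    return np.concatenate(group_codes)                     # inter-group concatenation

rng = np.random.default_rng(2)
codes = {n: rng.normal(size=4) for n in ["v2", "t1", "a1"]}
neighbors = [("v2", "video"), ("t1", "tag"), ("a1", "account")]
print(original_feature_vector(neighbors, codes).shape)     # (32,)
```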
In another exemplary embodiment, the original feature vector concatenation unit includes an attention calculation subunit, a type aggregation feature vector acquisition subunit, and an aggregation feature vector acquisition subunit.
The attention calculation subunit is used for calculating, according to a preset first weight matrix, the attention of each node in the heterogeneous network with respect to its neighbor nodes of each node type, where the first weight matrix contains a weight vector corresponding to each node type.
The type aggregation feature vector acquisition subunit is used for aggregating the attention corresponding to the same node type with the original feature vectors of the corresponding neighbor nodes, obtaining each node's type aggregation feature vectors under the different node types.
The aggregation feature vector obtaining subunit is configured to aggregate the type aggregation feature vectors of the nodes in different node types according to a preset second weight matrix, and obtain an aggregation feature vector of each node.
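The sketch below illustrates this two-level aggregation: per-type attention over neighbors using the first weight matrix (one weight vector per node type), followed by mixing the per-type aggregates with the second weight matrix. Softmax attention over the concatenation of the node vector and each neighbor vector is an assumed instantiation; the embodiment only states that attention is computed.

```python
# Hedged sketch of the attention-based neighbor aggregation. W1 holds one
# weight vector per node type (first weight matrix); w2 holds per-type mixing
# weights (a minimal stand-in for the second weight matrix).
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_neighbors(node_vec, nbr_vecs_by_type, W1, w2):
    type_aggregates = []
    for node_type, nbr_vecs in nbr_vecs_by_type.items():
        scores = np.array([W1[node_type] @ np.concatenate([node_vec, v])
                           for v in nbr_vecs])
        attn = softmax(scores)                             # attention per neighbor
        type_aggregates.append(w2[node_type] * (attn @ np.stack(nbr_vecs)))
    return np.sum(type_aggregates, axis=0)                 # aggregated feature vector

rng = np.random.default_rng(3)
d = 8
node_vec = rng.normal(size=d)
nbrs = {"video": [rng.normal(size=d) for _ in range(3)],
        "tag":   [rng.normal(size=d) for _ in range(2)]}
W1 = {t: rng.normal(size=2 * d) for t in nbrs}             # weight vector per type
w2 = {"video": 0.6, "tag": 0.4}                            # per-type mixing weights
print(aggregate_neighbors(node_vec, nbrs, W1, w2).shape)   # (8,)
```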
In another exemplary embodiment, the vector aggregation unit includes a weighting operation subunit and a feature vector aggregation subunit.
The weighting operation subunit is configured to perform weighting operation on the original feature vector of each node according to a preset third weight matrix, so as to obtain a weighted feature vector of each node.
The feature vector aggregation subunit is used for aggregating the weighted feature vector and the aggregated feature vector of each node to obtain the feature representation of each node.
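Finally, a sketch of the vector aggregation unit: projecting the original feature vector with the third weight matrix and then combining by element-wise addition with a tanh nonlinearity is an assumed aggregation operator, as the embodiment leaves the exact operator open.

```python
# Hedged sketch of the final combination producing a node's feature representation.
import numpy as np

def node_representation(original_vec, aggregated_vec, W3):
    weighted = W3 @ original_vec               # third-weight-matrix projection
    return np.tanh(weighted + aggregated_vec)  # assumed aggregation: add + tanh

rng = np.random.default_rng(4)
d = 8
W3 = rng.normal(size=(d, d))                   # preset third weight matrix
original_vec = rng.normal(size=d)
aggregated_vec = rng.normal(size=d)
print(node_representation(original_vec, aggregated_vec, W3).shape)  # (8,)
```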
It should be noted that the apparatus provided in the foregoing embodiment and the method provided earlier belong to the same inventive concept; the specific way in which each module and unit performs its operations has been described in detail in the method embodiments and is not repeated here.
Another aspect of the present application also provides an electronic device, including a processor and a memory, where the memory has stored thereon computer readable instructions, which when executed by the processor, implement the video recommendation method as described above.
Referring to fig. 13, fig. 13 is a schematic diagram illustrating a hardware structure of an electronic device according to an exemplary embodiment.
It should be noted that this device is merely one example adapted to the present application and should not be regarded as limiting the application's scope of use in any way. Nor should the device be construed as needing to rely on, or to include, one or more components of the exemplary electronic device illustrated in fig. 13.
The hardware structure of the device may vary considerably with its configuration and performance. As shown in fig. 13, the device includes: a power supply 510, an interface 530, at least one memory 550, and at least one central processing unit (CPU) 570.
The power supply 510 is used to provide operating voltage for each hardware device on the device.
The interface 530 includes at least one wired or wireless network interface 531, at least one serial-to-parallel conversion interface 533, at least one input/output interface 535, and at least one USB interface 537, etc. for communicating with external devices.
The memory 550 serves as the carrier for resource storage and may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored on it include an operating system 551, application programs 553, and data 555, and the storage may be transient or permanent. The operating system 551 manages and controls the hardware devices and the application programs 553 on the device so that the central processing unit 570 can compute and process the mass data 555; it may be Windows Server, Mac OS X, Unix, Linux, etc. The application programs 553 are computer programs that perform at least one specific task on top of the operating system 551, and each may include at least one module, each module in turn containing a series of computer-readable instructions for the device.
The central processing unit 570 may include one or more processors and is configured to communicate with the memory 550 via a bus for computing and processing the mass data 555 in the memory 550.
In a video recommendation device to which the present application applies, the central processing unit 570 reads the series of computer-readable instructions stored in the memory 550 to carry out the video recommendation method described above.
Furthermore, the present application can also be implemented by hardware circuitry or by a combination of hardware circuitry and software instructions, and thus, the implementation of the present application is not limited to any specific hardware circuitry, software, or combination of both.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video recommendation method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
The above description covers only preferred exemplary embodiments of the present application and is not intended to limit it. A person skilled in the art can easily make various changes or modifications within the main concept and spirit of the present application, so the protection scope of the present application shall be subject to the scope claimed in the claims.

Claims (11)

1. A method for video recommendation, comprising:
triggering a selected target video according to a terminal homepage, and skipping to a video page associated with the target video by the terminal;
acquiring a recommended label for video recommendation of the target video, wherein the recommended label is obtained by sorting candidate labels contained in the target video according to historical watching behavior data of a user and screening the candidate labels according to a sorting result;
displaying the recommended label on the video page;
and jumping to a recommended video page associated with the recommended label according to the recommended label selected by triggering on the video page.
2. The method of claim 1, wherein obtaining a recommendation tag for video recommendation for the target video comprises:
determining feature representations of candidate tags in the target video;
acquiring a video label in a video watched by a user according to the historical watching behavior data of the user, and constructing a user characteristic representation according to the video label;
calculating the similarity between the user feature representation and the feature representation of each candidate label;
and sorting the candidate tags according to the similarity, and screening the candidate tags according to a sorting result to obtain the recommended tags.
3. The method according to claim 2, wherein the obtaining video tags in videos watched by users according to the historical watching behavior data of users and constructing user feature representations according to the video tags comprises:
acquiring videos watched by the user and video tags in the watched videos from the historical watching behavior data of the user;
determining the weight of the video label according to the affiliation relationship between the video label and the video watched by the user, the integrity degree of the video watched by the user and the watching time;
screening the video tags according to the weight to obtain a user tag set;
and constructing the user characteristic representation according to the characteristic representation and the weight of each video label in the user label set.
4. The method of claim 2 or 3, wherein prior to determining the feature representation of the candidate tag in the target video, the method further comprises:
taking heterogeneous information related to videos in a video recommendation system as nodes, and taking an association relationship between the heterogeneous information as an edge, to construct a heterogeneous network, wherein the heterogeneous information contains the candidate label and the video label;
and determining the characteristic representation of each node according to the characteristic information of each node and the neighbor nodes thereof in the heterogeneous network.
5. The method of claim 4, wherein the heterogeneous information related to videos in the video recommendation system comprises videos published in the video recommendation system, video tags of the published videos, media accounts for publishing the videos, and user groups in the video recommendation system, wherein the user groups are obtained by clustering users in the video recommendation system.
6. The method according to claim 5, wherein the association relationship between the heterogeneous information includes an association play relationship between videos, an affiliation relationship between videos and video tags, a distribution relationship between videos and media accounts, an effective viewing relationship between videos and a user group, and a common affiliation relationship between video tags, wherein the association play relationship between videos is determined by:
and determining the videos which are effectively watched in the video recommendation system according to the watching integrity of the videos, sequencing the videos which are effectively watched according to the watching time of the videos, and determining that the two adjacent sequenced videos have the associated playing relation.
7. The method of claim 4, wherein the determining the feature representation of each node according to the feature information of each node and its neighboring nodes in the heterogeneous network comprises:
projecting each node in the heterogeneous network to the same semantic space to obtain the feature code of each node on the semantic space;
acquiring an original feature vector of each node by concatenating the feature codes of the neighbor nodes of each node;
aggregating the original feature vectors of the neighbor nodes of each node to obtain the aggregated feature vectors of each node;
and aggregating the aggregated feature vector and the original feature vector of each node to obtain the feature representation of each node.
8. The method according to claim 7, wherein the obtaining of the original feature vector of each node by concatenating feature codes of neighboring nodes of each node comprises:
for each node in the heterogeneous network, grouping the neighbor nodes according to the node types corresponding to the neighbor nodes of the node;
concatenating the feature codes of the neighbor nodes in each group to obtain a concatenated code corresponding to each group;
and concatenating the concatenated codes corresponding to the groups to obtain the original feature vector of the node.
9. The method of claim 8, wherein the aggregating original feature vectors of neighboring nodes of each node to obtain an aggregated feature vector of each node comprises:
according to a preset first weight matrix, calculating the attention of each node in the heterogeneous network relative to a neighbor node corresponding to each node type, wherein the first weight matrix comprises a weight vector corresponding to each node type;
aggregating the attention corresponding to the same node type and the original feature vector of each neighbor node to obtain type aggregation feature vectors of each node under different node types;
and according to a preset second weight matrix, aggregating the type aggregation eigenvectors of the nodes under different node types to obtain the aggregation eigenvectors of the nodes.
10. The method of claim 7, wherein obtaining the feature representation of each node by aggregating the aggregated feature vector and the original feature vector of each node comprises:
performing a weighting operation on the original feature vector of each node according to a preset third weight matrix to obtain a weighted feature vector of each node;
and acquiring the feature representation of each node by performing aggregation processing on the weighted feature vector and the aggregated feature vector of each node.
11. A computer-readable storage medium having computer-readable instructions stored thereon, which, when executed by a processor of a computer, cause the computer to perform the method of any one of claims 1-10.
CN201911088281.XA 2019-11-08 2019-11-08 Video recommendation method and computer-readable storage medium Active CN110941740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911088281.XA CN110941740B (en) 2019-11-08 2019-11-08 Video recommendation method and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN110941740A true CN110941740A (en) 2020-03-31
CN110941740B CN110941740B (en) 2023-07-14

Family

ID=69907463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911088281.XA Active CN110941740B (en) 2019-11-08 2019-11-08 Video recommendation method and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110941740B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016125166A1 (en) * 2015-02-05 2016-08-11 Ankan Consulting Ltd. Systems and methods for analyzing video and making recommendations
WO2017107453A1 (en) * 2015-12-23 2017-06-29 乐视控股(北京)有限公司 Video content recommendation method, device, and system
CN105404698A (en) * 2015-12-31 2016-03-16 海信集团有限公司 Education video recommendation method and device
CN110413837A (en) * 2019-05-30 2019-11-05 腾讯科技(深圳)有限公司 Video recommendation method and device
CN110059271A (en) * 2019-06-19 2019-07-26 达而观信息科技(上海)有限公司 With the searching method and device of label knowledge network
CN110287372A (en) * 2019-06-26 2019-09-27 广州市百果园信息技术有限公司 Label for negative-feedback determines method, video recommendation method and its device

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709246A (en) * 2020-04-27 2020-09-25 北京百度网讯科技有限公司 Node feature generation method and device, electronic equipment and storage medium
CN111722766A (en) * 2020-06-04 2020-09-29 北京达佳互联信息技术有限公司 Multimedia resource display method and device
CN111767439A (en) * 2020-06-28 2020-10-13 百度在线网络技术(北京)有限公司 Recommendation method, device and medium based on page classification label
CN111767439B (en) * 2020-06-28 2023-12-15 百度在线网络技术(北京)有限公司 Recommendation method, device and medium based on page classification labels
CN111783001B (en) * 2020-06-29 2024-01-09 北京达佳互联信息技术有限公司 Page display method, page display device, electronic equipment and storage medium
CN111783001A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium
CN111859160A (en) * 2020-08-07 2020-10-30 成都理工大学 Method and system for recommending session sequence based on graph neural network
CN111859160B (en) * 2020-08-07 2023-06-16 成都理工大学 Session sequence recommendation method and system based on graph neural network
CN114138158B (en) * 2020-09-03 2024-02-20 腾讯科技(深圳)有限公司 Method and device for detecting visibility of view and computing equipment
CN114138158A (en) * 2020-09-03 2022-03-04 腾讯科技(深圳)有限公司 Method and device for detecting visibility of view and computing equipment
CN112087667A (en) * 2020-09-10 2020-12-15 北京字节跳动网络技术有限公司 Information processing method and device and computer storage medium
CN112291609A (en) * 2020-09-15 2021-01-29 北京达佳互联信息技术有限公司 Video display and push method, device, storage medium and system thereof
CN112417207A (en) * 2020-11-24 2021-02-26 未来电视有限公司 Video recommendation method, device, equipment and storage medium
CN112417207B (en) * 2020-11-24 2023-02-21 未来电视有限公司 Video recommendation method, device, equipment and storage medium
CN112653907A (en) * 2020-12-15 2021-04-13 泰康保险集团股份有限公司 Video recommendation method and device
CN112653907B (en) * 2020-12-15 2022-11-15 泰康保险集团股份有限公司 Video recommendation method and device
CN112579913A (en) * 2020-12-30 2021-03-30 上海众源网络有限公司 Video recommendation method, device, equipment and computer-readable storage medium
CN113127778A (en) * 2021-03-17 2021-07-16 北京达佳互联信息技术有限公司 Information display method and device, server and storage medium
CN113127778B (en) * 2021-03-17 2023-10-03 北京达佳互联信息技术有限公司 Information display method, device, server and storage medium
CN113051481A (en) * 2021-04-22 2021-06-29 北京百度网讯科技有限公司 Content recommendation method and device, electronic equipment and medium
CN113157965A (en) * 2021-05-07 2021-07-23 杭州网易云音乐科技有限公司 Audio visual model training and audio visual method, device and equipment
CN112948626A (en) * 2021-05-14 2021-06-11 腾讯科技(深圳)有限公司 Video processing method and device, electronic equipment and computer readable storage medium
CN113190758B (en) * 2021-05-21 2023-01-20 聚好看科技股份有限公司 Server and media asset recommendation method
CN113190758A (en) * 2021-05-21 2021-07-30 聚好看科技股份有限公司 Server and media asset recommendation method
CN113254707B (en) * 2021-06-10 2021-12-07 北京达佳互联信息技术有限公司 Model determination method and device and associated media resource determination method and device
CN113254707A (en) * 2021-06-10 2021-08-13 北京达佳互联信息技术有限公司 Model determination method and device and associated media resource determination method and device
WO2023005575A1 (en) * 2021-07-26 2023-02-02 北京字跳网络技术有限公司 Processing method and apparatus based on interest tag, and device and storage medium
CN114154066B (en) * 2021-12-03 2024-07-12 腾讯科技(深圳)有限公司 Information recommendation method, device and storage medium
CN114154066A (en) * 2021-12-03 2022-03-08 腾讯科技(深圳)有限公司 Information recommendation method and device and storage medium
WO2023142520A1 (en) * 2022-01-26 2023-08-03 北京沃东天骏信息技术有限公司 Information recommendation method and apparatus
CN114449331A (en) * 2022-02-16 2022-05-06 北京字跳网络技术有限公司 Video display method and device, electronic equipment and storage medium
CN114449331B (en) * 2022-02-16 2023-11-21 北京字跳网络技术有限公司 Video display method and device, electronic equipment and storage medium
CN114780867A (en) * 2022-05-10 2022-07-22 杭州网易云音乐科技有限公司 Recommendation method, medium, device and computing equipment
CN114780867B (en) * 2022-05-10 2023-11-03 杭州网易云音乐科技有限公司 Recommendation method, medium, device and computing equipment
CN114637888A (en) * 2022-05-18 2022-06-17 深圳市华曦达科技股份有限公司 Video pushing method and device
CN115238173B (en) * 2022-06-30 2023-06-02 山东省玖玖医养健康产业有限公司 Behavior analysis and medical service pushing method, equipment and medium based on big data
CN115238173A (en) * 2022-06-30 2022-10-25 山东省玖玖医养健康产业有限公司 Behavior analysis and medical service pushing method, equipment and medium based on big data
CN116821475A (en) * 2023-05-19 2023-09-29 广州蜜糖网络科技有限公司 Video recommendation method and device based on client data and computer equipment
CN116821475B (en) * 2023-05-19 2024-02-02 广州蜜糖网络科技有限公司 Video recommendation method and device based on client data and computer equipment
CN117290543A (en) * 2023-10-13 2023-12-26 广东乐阳智能设备有限公司 Short video recommendation system based on AR interaction
CN117459798A (en) * 2023-12-22 2024-01-26 厦门众联世纪股份有限公司 Big data-based information display method, device, equipment and storage medium
CN117459798B (en) * 2023-12-22 2024-03-08 厦门众联世纪股份有限公司 Big data-based information display method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110941740B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN110941740B (en) Video recommendation method and computer-readable storage medium
Karimi et al. News recommender systems–Survey and roads ahead
CN111538912B (en) Content recommendation method, device, equipment and readable storage medium
EP3717984B1 (en) Method and apparatus for providing personalized self-help experience
US8326777B2 (en) Supplementing a trained model using incremental data in making item recommendations
CN111259263B (en) Article recommendation method and device, computer equipment and storage medium
Shi et al. Local representative-based matrix factorization for cold-start recommendation
CN111241311A (en) Media information recommendation method and device, electronic equipment and storage medium
CN111737582B (en) Content recommendation method and device
Chang et al. Using groups of items for preference elicitation in recommender systems
CN111931062A (en) Training method and related device of information recommendation model
US10949000B2 (en) Sticker recommendation method and apparatus
CN109993583B (en) Information pushing method and device, storage medium and electronic device
CN109471978B (en) Electronic resource recommendation method and device
Rokach et al. Initial profile generation in recommender systems using pairwise comparison
CN112052387A (en) Content recommendation method and device and computer readable storage medium
CN112100221A (en) Information recommendation method and device, recommendation server and storage medium
US20150161634A1 (en) Visitor session classification based on clickstreams
Rawat et al. A comprehensive study on recommendation systems their issues and future research direction in e-learning domain
CN111445280A (en) Model generation method, restaurant ranking method, system, device and medium
CN112989174A (en) Information recommendation method and device, medium and equipment
Lu et al. Computational creativity based video recommendation
CN115344774A (en) User account screening method and device and server
Geyik et al. User clustering in online advertising via topic models
CN112035740A (en) Project use duration prediction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022613

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20221110

Address after: 1402, Floor 14, Block A, Haina Baichuan Headquarters Building, No. 6, Baoxing Road, Haibin Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Yayue Technology Co.,Ltd.

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant