CN110598045B - Video recommendation method and device - Google Patents

Video recommendation method and device

Info

Publication number: CN110598045B
Application number: CN201910843794.0A
Authority: CN (China)
Prior art keywords: video, candidate, initial, videos, attribute
Legal status: Active (granted); the legal status is an assumption and is not a legal conclusion
Other languages: Chinese (zh)
Other versions: CN110598045A (en)
Inventor: 袁两胜
Current and original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910843794.0A
Publication of CN110598045A (application); application granted and published as CN110598045B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73: Querying
    • G06F16/735: Filtering based on additional data, e.g. user or group profiles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of videos, and in particular to a video recommendation method and device. The recommendation method comprises the following steps: acquiring first candidate videos matched with each initial video in an initial video set to obtain a first candidate video set; calculating the matching degree of the initial video set and each first candidate video on each attribute; determining, from the first candidate video set and according to the matching degrees of the initial video set and the first candidate videos on each attribute, second candidate videos corresponding to the initial videos on each attribute to obtain a second candidate video set; and determining a set of videos to be recommended for the initial video set based on the second candidate video set. According to the technical scheme of the embodiments of the application, the recommended videos can be diversified, giving the user a greater sense of freshness.

Description

Video recommendation method and device
Technical Field
The application relates to the technical field of videos, in particular to a video recommendation method and device.
Background
In the technical field of video recommendation, for example in a scene in which a video APP recommends videos for a user to watch, videos similar to those the user has watched are generally recommended to the user, with reference to the last several videos the user watched. For example, after a user watches a segment of a TV series in the video APP, the video APP recommends a large number of videos identical or similar to that TV series. However, how to recommend more diversified videos that the user still enjoys remains a technical problem to be solved urgently.
Disclosure of Invention
The embodiments of the application provide a video recommendation method and device, so that the videos recommended to a user can be diversified to a certain extent.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a video recommendation method, including: acquiring first candidate videos matched with each initial video in the initial video set to obtain a first candidate video set; calculating the matching degree of the initial video set and each first candidate video on each attribute; determining second candidate videos corresponding to the initial videos on the attributes from the first candidate video set according to the matching degree of the initial video set and the first candidate videos on the attributes to obtain a second candidate video set; determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos.
According to an aspect of an embodiment of the present application, there is provided a video recommendation apparatus including: the acquisition unit is used for acquiring first candidate videos matched with all the initial videos in the initial video set to obtain a first candidate video set; a calculating unit, configured to calculate matching degrees of the initial video set and each of the first candidate videos on each attribute; a first determining unit, configured to determine, according to matching degrees of the initial video set and each of the first candidate videos on each attribute, a second candidate video corresponding to each of the initial videos on each attribute from the first candidate video set, so as to obtain a second candidate video set; a second determining unit, configured to determine a set of videos to be recommended for the initial set of videos based on the second candidate set of videos.
In some embodiments of the present application, based on the foregoing solution, the calculation unit includes: a first calculating unit, configured to calculate an average value or a median value of attribute scores of the initial video set on each attribute, and an absolute value of a difference between attribute scores of each of the first candidate videos on the corresponding attribute; and a third determining unit, configured to determine, based on the absolute difference value, a matching degree of the initial video set and each of the first candidate videos on each attribute.
In some embodiments of the present application, based on the foregoing solution, the calculation unit includes: a first calculating unit, configured to calculate an average value or a median value of attribute scores of the initial video set on each attribute, and an absolute value of a difference between attribute scores of each of the first candidate videos on the corresponding attribute; a second calculating unit, configured to calculate video similarities between the initial videos and the matched first candidate videos; and a fourth determining unit, configured to determine, based on the absolute difference value and the video similarity, a matching degree of the initial video set and each of the first candidate videos on each attribute.
In some embodiments of the present application, based on the foregoing solution, the second calculating unit is configured to: acquiring a plurality of unit video sets, wherein one unit video set comprises videos clicked by a user within one unit time in history; calculating the video similarity between each initial video and each matched first candidate video by the following formula:
R(Vci, Vhj) = N(ci,hj) / (Nci * Nhj)
wherein Vci represents the ith initial video in the initial video set; Vhj represents the jth first candidate video matched with the initial video Vci; R(Vci, Vhj) represents the video similarity between the initial video Vci and the first candidate video Vhj; N(ci,hj) represents the number of unit video sets that simultaneously contain the initial video Vci and the first candidate video Vhj; Nci represents the number of times the initial video Vci occurs in the plurality of unit video sets; and Nhj represents the number of times the first candidate video Vhj occurs in the plurality of unit video sets.
In some embodiments of the present application, based on the foregoing solution, the second calculating unit is configured to: in a blockchain system composed of a plurality of nodes, the plurality of unit video sets are acquired.
In some embodiments of the present application, based on the foregoing scheme, the fourth determining unit is configured to: calculating the matching degree by the following formula:
Pe(Vci,Vhj)=k*(1-R(Vci,Vhj))+q*Mecihj
wherein Vci represents the ith initial video in the initial video set; Vhj represents the jth first candidate video matched with the initial video Vci; Pe(Vci, Vhj) represents the matching degree of the initial video Vci and the first candidate video Vhj on the e-th attribute; Mecihj represents the absolute value of the difference between the attribute score average or attribute score median of the initial video set on the e-th attribute and the attribute score of the jth first candidate video Vhj on the e-th attribute; R(Vci, Vhj) represents the video similarity between the initial video Vci and the first candidate video Vhj; and k and q are preset parameters of the formula, where k + q = 1.
In some embodiments of the present application, based on the foregoing solution, the video recommendation apparatus further includes: a loop process executing unit, configured to execute a loop process until the number of loops satisfies a predetermined number of times, according to the following steps:
acquiring a third candidate video set matched with each video to be recommended in the video set to be recommended, to obtain a plurality of third candidate video sets; calculating the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute; determining, from each third candidate video set and according to the matching degrees of the video set to be recommended and each third candidate video on each attribute, a fourth candidate video set for each video to be recommended on each attribute; and determining the video set to be recommended based on the fourth candidate video sets.
In some embodiments of the present application, based on the foregoing scheme, the second determining unit is configured to: determining a video set to be recommended for the initial video set by any one of the following methods:
determining a video set to be recommended from the second candidate video set based on a decomposition-based multi-objective evolutionary algorithm;
determining a video set to be recommended from the second candidate video set based on a Pareto-dominance-based multi-objective evolutionary algorithm;
or determining a video set to be recommended from the second candidate video set based on an indicator-based multi-objective evolutionary algorithm.
In some embodiments of the present application, based on the foregoing scheme, the second determining unit is configured to: determining a video set to be recommended for the initial video set from a union of the second candidate video set and the initial video set based on the second candidate video set and the initial video set.
According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, the computer program, when executed by a processor, implementing a video recommendation method as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the video recommendation method as described in the above embodiments.
In the technical solutions provided by some embodiments of the present application, a first candidate video set is obtained by matching an initial video in an initial video set with first candidate videos, then a matching degree of the initial video set and each of the first candidate videos on each attribute is calculated, a second candidate video corresponding to each of the initial videos on each attribute is determined from the first candidate video set according to the calculated matching degree, a second candidate video set is obtained, and finally a video set to be recommended for the initial video set is determined from the second candidate video set. In the technical scheme, the obtained second candidate video set is determined from the first candidate video set according to the matching degree of the initial video set and each first candidate video on each attribute, so that the videos in the second candidate video set are ensured to have diversity, and further, the videos to be recommended determined from the second candidate video set are also enabled to have diversity. Therefore, the technical problem that more diversified videos cannot be recommended to the user in the prior art can be solved by the technical scheme.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
fig. 2 shows a schematic implementation scenario of a technical solution according to an embodiment of the present application;
FIG. 3 shows a schematic diagram of an exemplary blockchain system architecture in accordance with an embodiment of the present application;
FIG. 4 illustrates a schematic diagram of determining a recommended video according to an embodiment of the present application;
FIG. 5 shows a flow diagram of a video recommendation method according to an embodiment of the present application;
FIG. 6 shows a detailed flow diagram for computing a degree of matching of an initial video set to a first candidate video on various attributes according to one embodiment of the present application;
FIG. 7 illustrates a detailed flow diagram for calculating a degree of matching of an initial video set to a first candidate video on various attributes according to one embodiment of the present application;
fig. 8 shows a pareto frontier diagram of a video to be recommended under three optimization objectives according to an embodiment of the present application;
FIG. 9 shows a flowchart of one loop process for determining recommended videos according to one embodiment of the present application;
FIG. 10 shows a block diagram of a video recommendation device according to an embodiment of the present application;
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The technical scheme of the application mainly relates to a video recommendation technology, and particularly provides an intelligent video recommendation technology. In essence, intelligent recommendation techniques are theories, methods, techniques, and application systems that utilize digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. By combining the specific technical scheme of the application, the video recommendation technology can be a technology for mining and utilizing historical record data of the user in the aspect of browsing videos and providing diversified recommended videos which meet the taste of the user for the user.
In one embodiment of the application, the user may request recommendations of more diverse videos from the server 105 using the terminal device. For example, in the implementation scenario diagram of the technical solution according to an embodiment of the present application shown in fig. 2, if the user is tired of the currently recommended videos, a request for recommending more diverse videos may be sent to the server 105 by clicking the words "recommend in other ways" shown in the cell phone page 201. After receiving this request, the server 105 starts to execute the technical solution disclosed in the present application, determines more diverse recommended videos for the user, and returns the determined recommended videos to the user's terminal device for display on the device screen, for example as shown in the cell phone page 202 in fig. 2.
It should be noted that the video recommendation method provided in the embodiment of the present application is generally executed by the server 105, and accordingly, the video recommendation apparatus is generally disposed in the server 105. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the video recommendation scheme provided by the embodiments of the present application.
Fig. 3 shows a schematic diagram of an exemplary blockchain system architecture according to an embodiment of the present application.
As shown in fig. 3, the blockchain system architecture includes several nodes, where a node in the blockchain system may be any terminal device or server, such as the tablet computer 301, desktop computer 302, laptop computer 303 and smartphone 304 shown in fig. 3. In the technical scheme of the application, each node in the blockchain system can store video data and a history of videos watched by a user. In order to ensure that data information can be exchanged within the blockchain system, network connections can exist between the nodes in the system, and data information can be transmitted between the nodes through these network connections. For example, one node may obtain video data, together with the history of videos watched by the user, from another node.
It should be understood that the number of terminal devices, networks and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks and servers as desired for the implementation; for example, the server may be a server cluster composed of a plurality of servers.
In addition, the system related to the embodiments of the present application may also be another distributed system formed by connecting clients and a plurality of nodes (computing devices of any form in an access network, such as servers and user terminals) through network communication. In one embodiment of the present application, a user may request the recommendation of more diverse videos at one node in the blockchain system. For example, in the implementation scenario diagram of the technical solution according to an embodiment of the present application shown in fig. 2, if a user who is using a smartphone is tired of the currently recommended videos, a recommendation of more diverse videos may be requested by clicking the words "recommend in other ways" shown in the cell phone page 201; the smartphone, such as the smartphone 304 in fig. 3, then starts to execute the technical solution disclosed in the present application according to the user's request, determines more diverse recommended videos for the user, and returns the determined recommended videos to the user's terminal device for display on the device screen, for example as shown in the cell phone page 202 in fig. 2.
In an embodiment of the present application, a device (e.g., the server 105 in fig. 2 or the smartphone 304 in fig. 3) executing the technical solution disclosed in the present application may determine the recommended video according to the content of the schematic diagram for determining the recommended video according to an embodiment of the present application as shown in fig. 4. Specifically, for example, the server 105 (the smartphone 304) may first obtain an initial video set 401, then match a first recommended video set 402 for each initial video in the initial video set, then determine a second recommended video set 403 from each first recommended video set, and finally determine a to-be-recommended video set 404 from the second recommended video set. Therefore, the recommended videos can be diversified, the freshness of the user is enhanced, and the technical problem that the diversified videos cannot be recommended to the user in the prior art is solved.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
according to a first aspect of the present disclosure, a video recommendation method is provided.
Referring to fig. 5, a flowchart of a video recommendation method according to an embodiment of the present application is shown, which may be performed by a device having a computing processing function, such as the server 105 shown in fig. 1, or the terminal device shown in fig. 3. As shown in fig. 5, the video recommendation method at least includes steps 510 to 540:
step 510, obtaining a first candidate video matched with each initial video in the initial video set, and obtaining a first candidate video set.
Step 520, calculating the matching degree of the initial video set and each first candidate video on each attribute.
Step 530, according to the matching degree of the initial video set and each of the first candidate videos on each attribute, determining a second candidate video corresponding to each of the initial videos on each attribute from the first candidate video set to obtain a second candidate video set.
Step 540, determining a set of videos to be recommended for the initial video set based on the second candidate video set.
The above steps are explained in detail below:
in step 510, a first candidate video matching each initial video in the initial video set is obtained, so as to obtain a first candidate video set.
In the embodiment of the present application, before acquiring the first candidate video matching each initial video in the initial video set, the initial video set needs to be determined first. For example, a series of videos that the user viewed and/or commented on and/or collected over a past period of time may be obtained as an initial video set. Or a series of videos that other users have watched and/or commented on and/or collected in the past period of time can be obtained as the initial video set. It is also possible to randomly fetch a certain number of initial videos from a video database to determine the initial video set. It should be noted that the determination of the initial set of videos may be arbitrary and is not limited to those listed above.
After the initial video set is determined, first candidate videos matching each initial video in the initial video set are obtained. Specifically, multiple first candidate videos may be matched for each initial video in the initial video set, and further, multiple first candidate videos may be randomly matched for each initial video from the video database, for example, 8 first candidate videos are randomly matched for each initial video, and further, for example, 100 first candidate videos are randomly matched for each initial video. It should be noted that the number of randomly matched videos for each initial video may be arbitrary and is not limited to those listed above.
After a plurality of first candidate videos are matched for each initial video, a first candidate video set corresponding to each initial video in the initial video set is obtained. For example, if 8 first candidate videos are matched for one initial video, the first candidate video set corresponding to that initial video includes the 8 matched first candidate videos.
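As a non-authoritative illustration of this random matching step, the following minimal Python sketch builds a first candidate video set for each initial video; the function name, the list-based video database and the default of 8 candidates per video are assumptions made for the example, not details fixed by the embodiment.

```python
import random

def match_first_candidates(initial_videos, video_database, per_video=8):
    """For each initial video, randomly pick `per_video` first candidate videos
    from the video database (excluding the initial video itself)."""
    first_candidate_sets = {}
    for video in initial_videos:
        pool = [v for v in video_database if v != video]
        first_candidate_sets[video] = random.sample(pool, min(per_video, len(pool)))
    return first_candidate_sets
```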
It should be noted that the first candidate video set described in step 510 may refer to a plurality of first candidate video sets corresponding to the respective initial videos.
In step 520, the matching degree of the initial video set and each of the first candidate videos on each attribute is calculated.
In the present application, the videos (including the initial videos, the first candidate videos, and the like) each have a plurality of attributes, and each video may be defined, classified and evaluated in terms of attributes from different angles. For example, a video may be evaluated on the attributes of comedy and tragedy, on the attributes of love, friendship and familiarity, or on the attributes of suspense, horror, ethics, science fiction, martial arts and crime. It should be noted that the definition and classification of video attributes and the setting of the number of attributes may be arbitrary and are not limited to those listed above. According to actual needs, a user may add more attributes of interest, remove attributes, select attributes of interest independently, or have the video attributes dynamically adjusted according to the history of watched videos.
The evaluation of video attributes may be embodied in the form of attribute scores. For example, the television drama "laughing and ludging the river lake" may be evaluated on the attributes of love, friendship and familiarity, with attribute scores such as "love 9", "familiarity 4" and "friendship 5". It may also be evaluated on the six attributes of suspense, horror, ethics, science fiction, martial arts and crime, with attribute scores such as "suspense 2", "horror 1", "ethics 2", "science fiction 0", "martial arts 9" and "crime 5". Similarly, the movie "spy charlock" may be evaluated on the same six attributes, with attribute scores such as "suspense 8", "horror 6", "ethics 4", "science fiction 1", "martial arts 0" and "crime 9".
In an embodiment of the present application, calculating the matching degree of the initial video set and each of the first candidate videos on each attribute may be implemented by the steps as described in fig. 6.
Referring to fig. 6, a detailed flowchart for calculating matching degrees of the initial video set and the first candidate video on various attributes according to an embodiment of the present application is shown, which may specifically include steps 5201 to 5202:
step 5201, calculating the absolute value of the difference between the average value or the median value of the attribute scores of the initial video set on each attribute and the attribute score of each first candidate video on the corresponding attribute.
In a specific implementation of an embodiment, in order to give those skilled in the art a more intuitive understanding of the attribute score average or attribute score median of the initial video set on each attribute, a specific example is explained below with reference to Table 1.
| Initial video set                         | Love | Friendship | Familiarity |
| Initial video 1                           | 6    | 5          | 9           |
| Initial video 2                           | 4    | 3          | 7           |
| Initial video 3                           | 2    | 1          | 8           |
| Initial video 4                           | 9    | 6          | 6           |
| Initial video 5                           | 2    | 2          | 7           |
| Initial video 6                           | 1    | 7          | 5           |
| Initial video set attribute score average | 4    | 4          | 7           |
| Initial video set attribute score median  | 3    | 4          | 7           |

TABLE 1
As shown in Table 1, the initial video set includes 6 initial videos, and the attributes of each initial video are love, friendship and familiarity. The attribute score of each initial video on each attribute is shown in the table. As for the average or median of the attribute scores of the initial video set on each attribute: for example, the average of the attribute scores of the initial video set on the love attribute is obtained by averaging the attribute scores of initial videos 1 to 6 on the love attribute, and the result is 4; the median of the attribute scores of the initial video set on the love attribute is the median of the attribute scores of initial videos 1 to 6 on the love attribute, and the result is 3.
In addition, in order to give those skilled in the art a more intuitive understanding of how to calculate the absolute value of the difference between the attribute score average or median of the initial video set on each attribute and the attribute score of each first candidate video on the corresponding attribute, a specific example is explained below with reference to Table 2.
| Video attributes                          | Love | Friendship | Familiarity |
| First candidate video 1                   | 9    | 6          | 6           |
| Initial video set attribute score average | 4    | 4          | 7           |
| Absolute value of difference              | 5    | 2          | 1           |
| First candidate video 1                   | 9    | 6          | 6           |
| Initial video set attribute score median  | 3    | 4          | 7           |
| Absolute value of difference              | 6    | 2          | 1           |

TABLE 2
As shown in Table 2, the first candidate video 1 is one video in the first candidate video set, and its attribute scores are "love 9", "friendship 6" and "familiarity 6". The averages of the attribute scores of the initial video set on the attributes are love 4, friendship 4 and familiarity 7, and the medians are love 3, friendship 4 and familiarity 7. Therefore, the absolute value of the difference between the average attribute score of the initial video set on the love attribute and the attribute score of the first candidate video 1 on the love attribute is 5. Similarly, the absolute values of the differences between the average attribute scores of the initial video set on the friendship and familiarity attributes and the attribute scores of the first candidate video 1 on those attributes are 2 and 1, respectively, and the absolute values of the differences between the medians of the attribute scores of the initial video set on the love, friendship and familiarity attributes and the attribute scores of the first candidate video 1 on those attributes are 6, 2 and 1, respectively.
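The calculation illustrated in Tables 1 and 2 can be sketched in Python as follows; the dict-based representation of attribute scores and the variable names are assumptions made only for this example.

```python
from statistics import mean, median

# Attribute scores of the initial video set (Table 1) and of first candidate video 1 (Table 2).
initial_videos = [
    {"love": 6, "friendship": 5, "familiarity": 9},
    {"love": 4, "friendship": 3, "familiarity": 7},
    {"love": 2, "friendship": 1, "familiarity": 8},
    {"love": 9, "friendship": 6, "familiarity": 6},
    {"love": 2, "friendship": 2, "familiarity": 7},
    {"love": 1, "friendship": 7, "familiarity": 5},
]
first_candidate_1 = {"love": 9, "friendship": 6, "familiarity": 6}

attributes = ["love", "friendship", "familiarity"]
set_average = {a: mean(v[a] for v in initial_videos) for a in attributes}   # 4, 4, 7
set_median = {a: median(v[a] for v in initial_videos) for a in attributes}  # 3, 4, 7

# Absolute value of the difference between the set-level score and the candidate's score.
abs_diff_vs_average = {a: abs(set_average[a] - first_candidate_1[a]) for a in attributes}  # 5, 2, 1
abs_diff_vs_median = {a: abs(set_median[a] - first_candidate_1[a]) for a in attributes}    # 6, 2, 1
```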
Step 5202, determining the matching degree of the initial video set and each of the first candidate videos on each attribute based on the absolute difference value.
In a specific implementation of one embodiment, the matching degree of the initial video set and a first candidate video on an attribute is used to characterize how well the initial video set and that first candidate video match on that attribute, for example, in Table 2, how well the initial video set and the first candidate video 1 match on the love attribute.
Specifically, the absolute difference value may be directly used as the matching degree of the initial video set and each first candidate video on each attribute, in which case a smaller absolute difference indicates a higher matching degree. Alternatively, the reciprocal of the absolute difference may be used as the matching degree, in which case a larger reciprocal indicates a higher matching degree. The absolute difference after normalization may also be used as the matching degree of the initial video set and each first candidate video on each attribute.
In an embodiment of the present application, calculating the matching degree of the initial video set and each of the first candidate videos on each attribute may be implemented by the steps as described in fig. 7.
Referring to fig. 7, a detailed flowchart for calculating matching degrees of the initial video set and the first candidate video on various attributes according to an embodiment of the present application is shown, which specifically includes steps 5201, 5203, 5204:
step 5201, calculating the absolute value of the difference between the average value or the median value of the attribute scores of the initial video set on each attribute and the attribute score of each first candidate video on the corresponding attribute.
Step 5203, calculating video similarity between each initial video and each first candidate video matched.
In a specific implementation of an embodiment, calculating the video similarity between each initial video and each matched first candidate video may be implemented as follows:
firstly, acquiring a plurality of unit video sets, wherein one unit video set comprises videos clicked by a user within one unit time in history, and then calculating the video similarity between each initial video and each matched first candidate video through the following formula:
R(Vci, Vhj) = N(ci,hj) / (Nci * Nhj)
wherein Vci represents the ith initial video in the initial video set; Vhj represents the jth first candidate video matched with the initial video Vci; R(Vci, Vhj) represents the video similarity between the initial video Vci and the first candidate video Vhj; N(ci,hj) represents the number of unit video sets that simultaneously contain the initial video Vci and the first candidate video Vhj; Nci represents the number of times the initial video Vci occurs in the plurality of unit video sets; and Nhj represents the number of times the first candidate video Vhj occurs in the plurality of unit video sets.
In order to give those skilled in the art a more intuitive understanding of the above formula for calculating the video similarity between each initial video and each matched first candidate video, a specific example is explained below with reference to Table 3.
| Unit video set 1 | V1, V3, V8, V7, V9 |
| Unit video set 2 | V3, V4, V9, V2, V7 |
| Unit video set 3 | V2, V8, V9, V8, V7 |

TABLE 3
As shown in Table 3, a total of 3 unit video sets are listed, each including 5 videos. For example, if the video similarity between video V8 and video V9 needs to be calculated, it can be seen from Table 3 that video V8 occurs 3 times in the three unit video sets, video V9 occurs 3 times in the three unit video sets, and there are 2 unit video sets in which video V8 and video V9 occur simultaneously, namely unit video set 1 and unit video set 3. Therefore, by the above video similarity calculation method, the video similarity between video V8 and video V9 is R(V8, V9) = 2/9.
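A minimal Python sketch of this co-occurrence similarity, assuming unit video sets are simple lists of video identifiers (the function name is illustrative), reproduces the R(V8, V9) = 2/9 value above:

```python
def video_similarity(video_a, video_b, unit_video_sets):
    """R(Va, Vb) = N_(a,b) / (N_a * N_b), where N_(a,b) counts the unit video sets
    containing both videos and N_a, N_b count the occurrences of each video."""
    n_both = sum(1 for s in unit_video_sets if video_a in s and video_b in s)
    n_a = sum(s.count(video_a) for s in unit_video_sets)
    n_b = sum(s.count(video_b) for s in unit_video_sets)
    return n_both / (n_a * n_b) if n_a and n_b else 0.0

unit_video_sets = [
    ["V1", "V3", "V8", "V7", "V9"],  # unit video set 1
    ["V3", "V4", "V9", "V2", "V7"],  # unit video set 2
    ["V2", "V8", "V9", "V8", "V7"],  # unit video set 3
]
print(video_similarity("V8", "V9", unit_video_sets))  # 2/9, about 0.222
```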
In a specific implementation of an embodiment, calculating the video similarity between each initial video and each matched first candidate video may be implemented as follows:
firstly, acquiring a plurality of unit video sets, wherein one unit video set comprises videos clicked by a user within one unit time in history, and then calculating the video similarity between each initial video and each matched first candidate video through the following formula:
R(Vci, Vhj) = N(ci,hj) / N
wherein Vci represents the ith initial video in the initial video set; Vhj represents the jth first candidate video matched with the initial video Vci; R(Vci, Vhj) represents the video similarity between the initial video Vci and the first candidate video Vhj; N(ci,hj) represents the number of unit video sets that simultaneously contain the initial video Vci and the first candidate video Vhj; and N represents the total number of acquired unit video sets.
Further, in two specific embodiments regarding calculating video similarities between the initial videos and the first candidate videos matched as described above, the unit video sets may be obtained in a blockchain system composed of a plurality of nodes, and in particular, the unit video sets may be obtained from terminal devices in the blockchain system.
In the above-described embodiment, since videos clicked or viewed within a unit time can be regarded as similar to a large extent, the advantage of calculating the similarity between videos by a plurality of unit video sets is that the accuracy and reality of the video similarity calculation result can be ensured.
Step 5204, determining matching degrees of the initial video set and each of the first candidate videos on each attribute based on the absolute difference value and the video similarity.
In a specific implementation of an embodiment, the determining, based on the absolute difference value and the video similarity, a matching degree of each attribute between the initial video set and each of the first candidate videos may be implemented as follows:
calculating the matching degree of the initial video set and each first candidate video on each attribute by the following formula:
Pe(Vci,Vhj)=k*(1-R(Vci,Vhj))+q*Mecihj
wherein Vci represents the ith initial video in the initial video set; Vhj represents the jth first candidate video matched with the initial video Vci; Pe(Vci, Vhj) represents the matching degree of the initial video Vci and the first candidate video Vhj on the e-th attribute; Mecihj represents the absolute value of the difference between the attribute score average or attribute score median of the initial video set on the e-th attribute and the attribute score of the jth first candidate video Vhj on the e-th attribute; R(Vci, Vhj) represents the video similarity between the initial video Vci and the first candidate video Vhj; and k and q are preset parameters of the formula, where k + q = 1.
In order to give those skilled in the art a more intuitive understanding of the above formula for determining the matching degree of the initial video set and each first candidate video on each attribute, a specific example is explained below with reference to Table 4.
| Video similarity R(Vci, Vhj)                      | 2/5 |
| Absolute difference Mecihj on the love attribute  | 5   |
| Preset parameter k                                | 0.4 |
| Preset parameter q                                | 0.6 |

TABLE 4
As shown in Table 4, the video similarity between the initial video Vci and the first candidate video Vhj is 2/5, and the absolute value of the difference between the initial video set and the first candidate video Vhj on the love attribute is 5. If the preset parameters k and q in the formula are 0.4 and 0.6 respectively, the matching degree of the initial video Vci and the first candidate video Vhj on the love attribute is: Pe(Vci, Vhj) = 0.4 * (1 - 2/5) + 0.6 * 5 = 3.24.
The advantage of calculating the matching degree of the initial video set and each of the first candidate videos on each attribute through the formula is that the determined matching degree can be more accurate because two factors of video similarity and absolute value of difference are considered to determine the matching degree.
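As an illustrative sketch only (the function and parameter names are assumptions, with the default values of k and q taken from the example above), the matching-degree formula and the 3.24 result can be written as:

```python
def matching_degree(similarity, abs_diff, k=0.4, q=0.6):
    """Pe(Vci, Vhj) = k * (1 - R(Vci, Vhj)) + q * Me_cihj, with k + q = 1."""
    return k * (1.0 - similarity) + q * abs_diff

# Example from Table 4: similarity 2/5, absolute difference 5 on the love attribute.
print(matching_degree(2 / 5, 5))  # about 3.24 (0.4 * 0.6 + 0.6 * 5)
```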
It should be noted that, in an actual implementation of the technical solution of the present application, either the attribute score average or the attribute score median may be used; that is, the attribute score average may be selected instead of the attribute score median to calculate the absolute difference, or the attribute score median may be selected instead of the attribute score average to calculate the absolute difference.
In another specific implementation of an embodiment, a certain mathematical process may be performed on the attribute scores or the absolute difference values in the above specific embodiments, for example, a normalization process is performed on the attribute scores of the videos, and for example, a normalization process is performed on the absolute difference values and/or the video similarities.
In step 530, according to the matching degree of the initial video set and each of the first candidate videos on each attribute, a second candidate video corresponding to each of the initial videos on each attribute is determined from the first candidate video set, so as to obtain a second candidate video set.
Specifically, according to the matching degree of the initial video set and each of the first candidate videos on each attribute, there may be a plurality of ways to determine the second candidate video corresponding to each of the initial videos on each attribute from the first candidate video set.
In an embodiment of the present application, the first candidate videos are first sorted according to the matching degree of the initial video set and each of the first candidate videos on each attribute, and then one of the first candidate videos with the best matching degree may be determined as the second candidate video.
In order to give those skilled in the art a more intuitive understanding of the above embodiment, a specific example is explained below with reference to Table 5.
TABLE 5 (matching degrees on the love attribute between initial video 1 and each first candidate video in the first candidate video set matched with initial video 1; a smaller value indicates a better match)
As shown in Table 5, suppose a second candidate video of initial video 1 on the love attribute needs to be determined from the first candidate video set matched with initial video 1. According to this embodiment, the first candidate video with the highest matching degree is selected from the first candidate video set, where a smaller value in the table indicates a higher matching degree. The first candidate video 3 in Table 5 may then be taken as the second candidate video of initial video 1 on the love attribute.
In another embodiment of the present application, the first candidate videos are first sorted according to the matching degree of the initial video set and each of the first candidate videos on each attribute, and then a plurality of first candidate videos with the best matching degree may be further determined as second candidate videos.
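A possible sketch of this per-attribute selection is given below, assuming the matching degrees have already been computed and that, as in Table 5, a smaller value means a better match; the data layout and function name are assumptions made for illustration.

```python
def select_second_candidates(match_degrees, top_n=1):
    """match_degrees: {attribute: {candidate_id: matching degree}}.
    A smaller matching-degree value is treated as a better match (as in Table 5).
    Returns, per attribute, the top_n best-matching first candidate videos."""
    second_candidates = {}
    for attribute, per_candidate in match_degrees.items():
        ranked = sorted(per_candidate, key=per_candidate.get)  # ascending: best first
        second_candidates[attribute] = ranked[:top_n]
    return second_candidates
```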
In step 540, a set of videos to be recommended for the initial set of videos is determined based on the second candidate set of videos.
Specifically, the determining of the set of videos to be recommended for the initial set of videos based on the second candidate set of videos may be understood as determining the set of videos to be recommended from the second candidate set of videos.
In an embodiment of the present application, the determining, based on the second candidate video set, a video set to be recommended for the initial video set may be performed by any one of the following manners:
and determining a video set to be recommended from the second candidate video set by a first multi-objective evolutionary algorithm based on decomposition.
And determining a video set to be recommended from the second candidate video set by a pareto dominance-based multi-objective evolutionary algorithm.
And thirdly, determining a video set to be recommended from the second candidate video set by using an index-based multi-objective evolutionary algorithm.
Specifically, multi-objective evolutionary algorithms are generally used for multi-objective optimization problems, because there is usually some conflict between objectives; that is, optimizing one objective may degrade another, so there is almost never a single solution that is optimal for all objectives at once. The goal of multi-objective optimization is therefore to obtain a set of solutions that trade the objectives off against one another and represent the Pareto Frontier (PF), combining convergence (closeness to the Pareto frontier) and diversity (even distribution along the Pareto frontier) in the objective space.
In the present application, the second candidate videos in the second candidate video set may be used as the decision space (i.e. the candidate solutions) in the multi-objective optimization problem, and the videos to be recommended in the video set to be recommended may be used as the set of solutions (i.e. the final solutions) that reconcile the objectives. The attribute scores of the second candidate videos on the different attributes may be used as the target space, i.e. the multiple objectives, in the multi-objective optimization problem; alternatively, the absolute differences of the second candidate videos on the different attributes, or the matching degrees of the second candidate videos on the different attributes, may be used as the target space. The normalized (standardized) values of those attribute scores, absolute differences or matching degrees may also be used as the target space in the multi-objective optimization problem.
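The Pareto-dominance test underlying the multi-objective evolutionary algorithms mentioned above can be illustrated with the following Python sketch; it shows only the non-dominated filtering step under the assumption that all objectives are to be minimized, not the full evolutionary algorithm of the embodiment, and the sample data are invented for the example.

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly better on at
    least one (objectives are minimized here, e.g. per-attribute matching degrees)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: {video_id: objective vector}. Returns the non-dominated videos."""
    return [
        vid for vid, obj in candidates.items()
        if not any(dominates(other, obj) for o_vid, other in candidates.items() if o_vid != vid)
    ]

# Hypothetical second candidate videos with objective vectors (love, friendship, familiarity).
second_candidates = {"A": (1, 4, 2), "B": (2, 1, 3), "C": (3, 5, 4)}
print(pareto_front(second_candidates))  # ['A', 'B'] -- C is dominated by A
```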
In a specific implementation of an embodiment, the video to be recommended to coordinate each target may be determined from the second candidate video set to represent the Pareto Frontier (PF) based on the multi-objective evolutionary algorithm as described above, so that convergence (close to the pareto frontier) and diversity (evenly distributed along the pareto frontier) are compatible in the target space, for example, as shown in fig. 8.
Referring to fig. 8, a Pareto frontier diagram of videos to be recommended under three optimization objectives (love, friendship and familiarity) according to an embodiment of the present application is shown. As shown, the three coordinates (f1, f2, f3) of a video to be recommended 801 in the figure represent its three attributes (i.e. the three optimization objectives), and it can be seen that the videos to be recommended (the black dots) in fig. 8 are closely and approximately uniformly distributed on the Pareto Frontier (PF).
In order to better acquire a good solution in the first candidate video set, in the process of determining the video set to be recommended from the second candidate video set, an archiving strategy can be added, that is, an external archive is maintained based on a parallel grid coordinate system, so as to store the good solution in the evolution process.
In an embodiment of the present application, the determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos may be further performed by:
determining a video set to be recommended for the initial video set from a union of the second candidate video set and the initial video set based on the second candidate video set and the initial video set.
In an embodiment of the present application, after determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos, the method may further include: a loop process of the steps shown in fig. 9 is performed until the number of loops satisfies a predetermined number.
Referring to fig. 9, a flowchart illustrating a loop process for determining a recommended video according to an embodiment of the present application may specifically include steps 910 to 940:
step 910, obtaining a third candidate video set matched with each video to be recommended in the video set to be recommended, and obtaining a plurality of third candidate video sets.
Step 920, calculating the matching degree of the to-be-recommended video set and each third candidate video in each third candidate video set on each attribute.
Step 930, determining, from each third candidate video set and according to the matching degrees of the video set to be recommended and each third candidate video in each third candidate video set on each attribute, a fourth candidate video set for each video to be recommended on each attribute.
And 940, determining a video set to be recommended based on the fourth candidate video set.
After step 940 is finished, it is detected whether the number of loops has reached the predetermined number; if not, step 910 is executed again to continue the loop.
The advantage of obtaining the video to be recommended through the above-mentioned cyclic process is that the diversity and high quality of the video to be recommended can be better ensured because the final video to be recommended is the result obtained through multiple screening and optimization.
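Purely as a structural sketch of the loop in fig. 9, the following Python function shows how the iteration could be organized; the callables are placeholders standing in for steps 910 to 940 and are not APIs defined by the embodiment.

```python
def refine_recommendations(to_recommend, rounds, match_candidates,
                           compute_match_degrees, select_best, pick_final):
    """Repeat the fig. 9 loop `rounds` times, refining the set of videos to recommend."""
    for _ in range(rounds):
        third_sets = match_candidates(to_recommend)                 # step 910
        degrees = compute_match_degrees(to_recommend, third_sets)   # step 920
        fourth_sets = select_best(degrees, third_sets)              # step 930
        to_recommend = pick_final(fourth_sets)                      # step 940
    return to_recommend
```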
In the technical solutions provided by some embodiments of the present application, a first candidate video set is obtained by matching an initial video in an initial video set with first candidate videos, then a matching degree of the initial video set and each of the first candidate videos on each attribute is calculated, a second candidate video corresponding to each of the initial videos on each attribute is determined from the first candidate video set according to the calculated matching degree, a second candidate video set is obtained, and finally a video set to be recommended for the initial video set is determined from the second candidate video set. In the technical scheme, the obtained second candidate video set is determined from the first candidate video set according to the matching degree of the initial video set and each first candidate video on each attribute, so that the videos in the second candidate video set are ensured to have diversity, and further, the videos to be recommended determined from the second candidate video set are also enabled to have diversity. When a user watches videos recommended by the conventional recommendation algorithm and is tired, the user can jump out of the current recommendation video to watch videos of other types. Therefore, the technical problem that more diversified videos cannot be recommended to the user in the prior art can be solved through the technical scheme.
The following describes embodiments of an apparatus of the present application, which may be used to perform the video recommendation method in the above embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the video recommendation method described above in the present application.
FIG. 10 shows a block diagram of a video recommendation device according to an embodiment of the present application.
Referring to fig. 10, a video recommendation apparatus 1000 according to an embodiment of the present application includes: an acquisition unit 1001, a calculation unit 1002, a first determination unit 1003, and a second determination unit 1004.
The acquiring unit 1001 is configured to acquire a first candidate video that matches each initial video in the initial video set, so as to obtain a first candidate video set; a calculating unit 1002, configured to calculate matching degrees of the initial video set and the first candidate videos on the respective attributes; a first determining unit 1003, configured to determine, according to matching degrees of the initial video set and each of the first candidate videos on each attribute, a second candidate video corresponding to each of the initial videos on each attribute from the first candidate video set, so as to obtain a second candidate video set; a second determining unit 1004, configured to determine a set of videos to be recommended for the initial set of videos based on the second candidate set of videos.
In some embodiments of the present application, based on the foregoing solution, the calculating unit 1002 includes: a first calculating unit, configured to calculate an average value or a median value of attribute scores of the initial video set on each attribute, and an absolute value of a difference between attribute scores of each of the first candidate videos on the corresponding attribute; and a third determining unit, configured to determine, based on the absolute difference value, a matching degree of the initial video set and each of the first candidate videos on each attribute.
In some embodiments of the present application, based on the foregoing solution, the calculating unit 1002 includes: a first calculating unit, configured to calculate an average value or a median value of attribute scores of the initial video set on each attribute, and an absolute value of a difference between attribute scores of each of the first candidate videos on the corresponding attribute; a second calculating unit, configured to calculate video similarities between the initial videos and the matched first candidate videos; and a fourth determining unit, configured to determine, based on the absolute difference value and the video similarity, a matching degree of the initial video set and each of the first candidate videos on each attribute.
In some embodiments of the present application, based on the foregoing solution, the second calculating unit is configured to: acquire a plurality of unit video sets, wherein each unit video set comprises the videos historically clicked by a user within one unit of time; and calculate the video similarity between each initial video and each matched first candidate video according to the following formula:
[Formula for the video similarity R(V_ci, V_hj), presented as an image in the original publication]
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; N_(ci,hj) represents the number of unit video sets that simultaneously contain the initial video V_ci and the first candidate video V_hj; N_ci represents the number of times the initial video V_ci occurs in the plurality of unit video sets; and N_hj represents the number of times the first candidate video V_hj occurs in the plurality of unit video sets.
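The formula referenced above is reproduced only as an image in the original publication; the sketch below shows one plausible reading that is consistent with the variable definitions, assuming the common co-occurrence form R(V_ci, V_hj) = N_(ci,hj) / sqrt(N_ci * N_hj). The function name video_similarity and the example data are illustrative only.

import math
from typing import Hashable, Iterable, Set

def video_similarity(v_ci: Hashable, v_hj: Hashable,
                     unit_video_sets: Iterable[Set[Hashable]]) -> float:
    """Co-occurrence similarity between an initial video v_ci and a matched first
    candidate video v_hj over a collection of unit video sets (each set holds the
    videos a user clicked within one unit of time)."""
    sets = list(unit_video_sets)
    n_ci = sum(1 for s in sets if v_ci in s)                    # N_ci
    n_hj = sum(1 for s in sets if v_hj in s)                    # N_hj
    n_both = sum(1 for s in sets if v_ci in s and v_hj in s)    # N_(ci,hj)
    if n_ci == 0 or n_hj == 0:
        return 0.0
    # Assumed form of the image-only formula: N_(ci,hj) / sqrt(N_ci * N_hj)
    return n_both / math.sqrt(n_ci * n_hj)

# Toy example: three unit video sets (historical click sessions)
sessions = [{"a", "b", "c"}, {"a", "b"}, {"b", "d"}]
print(video_similarity("a", "b", sessions))  # 2 / sqrt(2 * 3) ≈ 0.816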
In some embodiments of the present application, based on the foregoing solution, the second calculating unit is configured to acquire the plurality of unit video sets in a blockchain system composed of a plurality of nodes.
In some embodiments of the present application, based on the foregoing scheme, the fourth determining unit is configured to calculate the matching degree according to the following formula:
P_e(V_ci, V_hj) = k * (1 - R(V_ci, V_hj)) + q * M_(e,ci,hj)
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; P_e(V_ci, V_hj) represents the matching degree of the initial video V_ci and the first candidate video V_hj on the e-th attribute; M_(e,ci,hj) represents the absolute value of the difference between the attribute score mean or attribute score median of the initial video set on the e-th attribute and the attribute score of the j-th first candidate video V_hj on the e-th attribute; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; and k and q are preset parameters of the formula, where k + q = 1.
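A minimal sketch of the matching-degree formula above, with the similarity R(V_ci, V_hj) supplied as an input; the function name matching_degree is illustrative, and the choice between mean and median follows whichever statistic the embodiment uses.

from statistics import mean  # the embodiment may equally use statistics.median

def matching_degree(initial_scores_e, candidate_score_e, similarity, k=0.5, q=0.5):
    """P_e(V_ci, V_hj) = k * (1 - R(V_ci, V_hj)) + q * M_(e,ci,hj), with k + q = 1.

    initial_scores_e  -- attribute scores of the videos in the initial video set on attribute e
    candidate_score_e -- attribute score of the first candidate video V_hj on attribute e
    similarity        -- video similarity R(V_ci, V_hj)
    """
    assert abs(k + q - 1.0) < 1e-9, "k and q are preset parameters with k + q = 1"
    m_e = abs(mean(initial_scores_e) - candidate_score_e)  # M_(e,ci,hj)
    return k * (1.0 - similarity) + q * m_e

# Toy example: initial-set scores 0.2, 0.4, 0.6 on attribute e; candidate scores 0.9
print(matching_degree([0.2, 0.4, 0.6], 0.9, similarity=0.3))  # = 0.5*0.7 + 0.5*0.5 ≈ 0.6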
In some embodiments of the present application, based on the foregoing solution, the video recommendation apparatus further includes: a loop process executing unit, configured to execute the following loop process until the number of loop iterations reaches a predetermined number:
acquiring a third candidate video set matched with each video to be recommended in the video set to be recommended, so as to obtain a plurality of third candidate video sets; calculating the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute; determining a fourth candidate video set of each video to be recommended on each attribute from the third candidate video sets according to the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute; and determining a video set to be recommended based on the fourth candidate video set.
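The loop process above can be summarised by the sketch below, where the per-round matching and selection steps are passed in as callables; all names (iterative_expansion, get_candidates, pick_next_set) are illustrative placeholders.

def iterative_expansion(initial_recommended, rounds, get_candidates, pick_next_set):
    """Repeat the candidate-expansion process for a predetermined number of rounds.

    get_candidates(video)         -- third candidate videos matched with one video to be recommended
    pick_next_set(current, cands) -- next video set to be recommended, chosen by matching degree
    """
    recommended = set(initial_recommended)
    for _ in range(rounds):
        third_candidates = set()
        for video in recommended:
            third_candidates.update(get_candidates(video))
        recommended = set(pick_next_set(recommended, third_candidates))
    return recommended

# Toy usage with placeholder callables
neighbours = {"a": {"b"}, "b": {"c"}, "c": {"a"}}
print(iterative_expansion({"a"}, rounds=2,
                          get_candidates=lambda v: neighbours.get(v, set()),
                          pick_next_set=lambda cur, cands: cands or cur))  # {'c'}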
In some embodiments of the present application, based on the foregoing scheme, the second determining unit 1004 is configured to determine a video set to be recommended for the initial video set by any one of the following methods:
determining a video set to be recommended from the second candidate video set based on a decomposition-based multi-objective evolutionary algorithm;
determining a video set to be recommended from the second candidate video set based on a Pareto-dominance-based multi-objective evolutionary algorithm;
and determining a video set to be recommended from the second candidate video set by using an indicator-based multi-objective evolutionary algorithm (an illustrative sketch of the second option is given below).
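As an illustration of the Pareto-dominance-based option, the sketch below reduces the second candidate set to its non-dominated front over the per-attribute matching degrees; it is a simplified stand-in for a full multi-objective evolutionary algorithm (e.g. NSGA-II), and all identifiers are illustrative.

from typing import Dict, Hashable, List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if a is no worse on every objective and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates: Dict[Hashable, Sequence[float]]) -> List[Hashable]:
    """Keep only candidates whose objective vectors are not dominated by any other candidate."""
    return [vid for vid, obj in candidates.items()
            if not any(dominates(other, obj)
                       for oid, other in candidates.items() if oid != vid)]

# Toy example: per-attribute matching degrees of three second candidate videos
objectives = {"v1": (0.9, 0.2), "v2": (0.4, 0.8), "v3": (0.3, 0.1)}
print(pareto_front(objectives))  # ['v1', 'v2']; v3 is dominated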
In some embodiments of the present application, based on the foregoing scheme, the second determining unit 1004 is configured to determine, based on the second candidate video set and the initial video set, a video set to be recommended for the initial video set from a union of the second candidate video set and the initial video set.
FIG. 11 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1100 of the electronic device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 11, a computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for system operation are also stored. The CPU 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An Input/Output (I/O) interface 1105 is also connected to bus 1104.
The following components are connected to the I/O interface 1105: an input section 1106 including a keyboard, a mouse, and the like; an output section 1107 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1110 as necessary, so that a computer program read from it can be installed into the storage section 1108 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. When the computer program is executed by the Central Processing Unit (CPU) 1101, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (20)

1. A video recommendation method, characterized in that the recommendation method comprises:
acquiring first candidate videos matched with each initial video in the initial video set to obtain a first candidate video set;
calculating, by taking the initial video set as a whole, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute;
determining second candidate videos corresponding to the initial videos on the attributes from the first candidate video set according to the matching degree of the initial video set and each first candidate video in the first candidate video set on the attributes to obtain a second candidate video set;
determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos.
2. The method of claim 1, wherein the calculating the degree of matching between the initial video set and each of the first candidate videos in the first candidate video set on each attribute comprises:
calculating the absolute value of the difference between the average value or the median value of the attribute scores of the initial videos in the initial video set on each attribute and the attribute scores of each first candidate video in the first candidate video set on the corresponding attribute;
and determining, based on the absolute difference value, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute.
3. The method of claim 1, wherein the calculating the degree of matching between the initial video set and each of the first candidate videos in the first candidate video set on each attribute comprises:
calculating the absolute value of the difference between the average value or the median value of the attribute scores of the initial videos in the initial video set on each attribute and the attribute scores of each first candidate video in the first candidate video set on the corresponding attribute;
calculating video similarity between each initial video and each matched first candidate video;
and determining, based on the absolute difference value and the video similarity, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute.
4. The method according to claim 3, wherein the calculating the video similarity between the initial videos and the matched first candidate videos comprises:
acquiring a plurality of unit video sets, wherein each unit video set comprises the videos historically clicked by a user within one unit of time;
calculating the video similarity between each initial video and each matched first candidate video by the following formula:
[Formula for the video similarity R(V_ci, V_hj), presented as an image in the original publication]
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; N_(ci,hj) represents the number of unit video sets that simultaneously contain the initial video V_ci and the first candidate video V_hj; N_ci represents the number of times the initial video V_ci occurs in the plurality of unit video sets; and N_hj represents the number of times the first candidate video V_hj occurs in the plurality of unit video sets.
5. The method of claim 4, wherein the acquiring a plurality of unit video sets comprises:
acquiring the plurality of unit video sets in a blockchain system composed of a plurality of nodes.
6. The method of claim 3, wherein, based on the absolute difference value and the video similarity, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute is calculated by the following formula:
P_e(V_ci, V_hj) = k * (1 - R(V_ci, V_hj)) + q * M_(e,ci,hj)
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; P_e(V_ci, V_hj) represents the matching degree of the initial video V_ci and the first candidate video V_hj on the e-th attribute; M_(e,ci,hj) represents the absolute value of the difference between the mean value or median value of the attribute scores of the initial videos in the initial video set on the e-th attribute and the attribute score of the j-th first candidate video V_hj on the e-th attribute; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; and k and q are preset parameters of the formula, where k + q = 1.
7. The method according to any one of claims 1 to 6, wherein after determining the set of videos to be recommended for the initial set of videos based on the second candidate set of videos, the method further comprises:
executing the following loop process until the number of loop iterations reaches a predetermined number:
acquiring a third candidate video set matched with each video to be recommended in the video set to be recommended to obtain a plurality of third candidate video sets;
calculating the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute by taking the video set to be recommended as a whole;
determining a fourth candidate video set of each video to be recommended on each attribute from the third candidate video sets according to the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute;
and determining a video set to be recommended based on the fourth candidate video set.
8. The method according to any one of claims 1 to 6, wherein the determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos comprises any one of:
determining a video set to be recommended from the second candidate video set based on a decomposition-based multi-objective evolutionary algorithm;
determining a video set to be recommended from the second candidate video set based on a Pareto-dominance-based multi-objective evolutionary algorithm;
and determining a video set to be recommended from the second candidate video set by using an indicator-based multi-objective evolutionary algorithm.
9. The method according to any one of claims 1 to 6, wherein the determining a set of videos to be recommended for the initial set of videos based on the second candidate set of videos comprises:
determining a video set to be recommended for the initial video set from a union of the second candidate video set and the initial video set based on the second candidate video set and the initial video set.
10. A video recommendation apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring first candidate videos matched with all the initial videos in the initial video set to obtain a first candidate video set;
a calculating unit, configured to calculate, with the initial video set as a whole, matching degrees of the initial video set and each of the first candidate videos in the first candidate video set on each attribute;
a first determining unit, configured to determine, according to matching degrees of the initial video set and each first candidate video in the first candidate video set on each attribute, a second candidate video corresponding to each initial video on each attribute from the first candidate video set, so as to obtain a second candidate video set;
a second determining unit, configured to determine a set of videos to be recommended for the initial set of videos based on the second candidate set of videos.
11. The apparatus of claim 10, wherein the computing unit comprises:
a first calculating unit, configured to calculate an absolute value of a difference between an average value or a median value of attribute scores of the initial videos in the initial video set on each attribute and an attribute score of each first candidate video in the first candidate video set on a corresponding attribute;
a third determining unit, configured to determine, based on the absolute difference value, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute.
12. The apparatus of claim 10, wherein the computing unit comprises:
a first calculating unit, configured to calculate an absolute value of a difference between an average value or a median value of attribute scores of the initial videos in the initial video set on each attribute and an attribute score of each first candidate video in the first candidate video set on a corresponding attribute;
a second calculating unit, configured to calculate video similarities between the initial videos and the matched first candidate videos;
a fourth determining unit, configured to determine, based on the absolute difference value and the video similarity, the matching degree of the initial video set and each first candidate video in the first candidate video set on each attribute.
13. The apparatus of claim 12, wherein the second calculating unit is configured to: acquire a plurality of unit video sets, wherein each unit video set comprises the videos historically clicked by a user within one unit of time;
and calculate the video similarity between each initial video and each matched first candidate video according to the following formula:
[Formula for the video similarity R(V_ci, V_hj), presented as an image in the original publication]
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; N_(ci,hj) represents the number of unit video sets that simultaneously contain the initial video V_ci and the first candidate video V_hj; N_ci represents the number of times the initial video V_ci occurs in the plurality of unit video sets; and N_hj represents the number of times the first candidate video V_hj occurs in the plurality of unit video sets.
14. The apparatus of claim 13, wherein the second calculating unit is configured to acquire the plurality of unit video sets in a blockchain system composed of a plurality of nodes.
15. The apparatus of claim 12, wherein the fourth determining unit is configured to calculate the matching degree according to the following formula:
P_e(V_ci, V_hj) = k * (1 - R(V_ci, V_hj)) + q * M_(e,ci,hj)
wherein V_ci represents the i-th initial video in the initial video set; V_hj represents the j-th first candidate video matching the initial video V_ci; P_e(V_ci, V_hj) represents the matching degree of the initial video V_ci and the first candidate video V_hj on the e-th attribute; M_(e,ci,hj) represents the absolute value of the difference between the mean value or median value of the attribute scores of the initial videos in the initial video set on the e-th attribute and the attribute score of the j-th first candidate video V_hj on the e-th attribute; R(V_ci, V_hj) represents the video similarity between the initial video V_ci and the first candidate video V_hj; and k and q are preset parameters of the formula, where k + q = 1.
16. The apparatus according to any one of claims 10 to 15, wherein the video recommendation apparatus further comprises:
a loop process executing unit, configured to execute the following loop process until the number of loop iterations reaches a predetermined number:
acquiring a third candidate video set matched with each video to be recommended in the video set to be recommended to obtain a plurality of third candidate video sets;
calculating the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute by taking the video set to be recommended as a whole;
determining a fourth candidate video set of each video to be recommended on each attribute from the third candidate video sets according to the matching degree of the video set to be recommended and each third candidate video in each third candidate video set on each attribute;
and determining a video set to be recommended based on the fourth candidate video set.
17. The apparatus according to any one of claims 10 to 15, wherein the second determining unit is configured to determine a video set to be recommended for the initial video set by any one of the following methods:
determining a video set to be recommended from the second candidate video set based on a decomposition-based multi-objective evolutionary algorithm;
determining a video set to be recommended from the second candidate video set based on a Pareto-dominance-based multi-objective evolutionary algorithm;
and determining a video set to be recommended from the second candidate video set by using an indicator-based multi-objective evolutionary algorithm.
18. The apparatus according to any one of claims 10 to 15, wherein the second determining unit is configured to determine, based on the second candidate video set and the initial video set, a video set to be recommended for the initial video set from a union of the second candidate video set and the initial video set.
19. A computer-readable storage medium, on which a computer program is stored, the computer program comprising executable instructions that, when executed by a processor, carry out the method of any one of claims 1 to 9.
20. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is arranged to execute the executable instructions to implement the method of any one of claims 1 to 9.
CN201910843794.0A 2019-09-06 2019-09-06 Video recommendation method and device Active CN110598045B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910843794.0A CN110598045B (en) 2019-09-06 2019-09-06 Video recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910843794.0A CN110598045B (en) 2019-09-06 2019-09-06 Video recommendation method and device

Publications (2)

Publication Number Publication Date
CN110598045A CN110598045A (en) 2019-12-20
CN110598045B true CN110598045B (en) 2021-03-19

Family

ID=68858235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910843794.0A Active CN110598045B (en) 2019-09-06 2019-09-06 Video recommendation method and device

Country Status (1)

Country Link
CN (1) CN110598045B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8719261B2 (en) * 2011-12-02 2014-05-06 Verizon Patent And Licensing Inc. Dynamic catalog ranking
CN104199896A (en) * 2014-08-26 2014-12-10 海信集团有限公司 Video similarity determining method and video recommendation method based on feature classification
CN108462900A (en) * 2017-02-22 2018-08-28 合网络技术(北京)有限公司 Video recommendation method and device
CN108733842A (en) * 2018-05-29 2018-11-02 北京奇艺世纪科技有限公司 Video recommendation method and device
CN109388739A (en) * 2017-08-03 2019-02-26 合信息技术(北京)有限公司 The recommended method and device of multimedia resource

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160037253A (en) * 2014-09-23 2016-04-06 주식회사 케이티 Method, apparatus and system for providing recommend theme
CN105260458A (en) * 2015-10-15 2016-01-20 海信集团有限公司 Video recommendation method for display apparatus and display apparatus
CN105260481B (en) * 2015-11-13 2019-09-17 优酷网络技术(北京)有限公司 A kind of multifarious evaluating method of push list and system
CN105847985A (en) * 2016-03-30 2016-08-10 乐视控股(北京)有限公司 Video recommendation method and device
CN105930423A (en) * 2016-04-18 2016-09-07 乐视控股(北京)有限公司 Multimedia similarity determination method and apparatus as well as multimedia recommendation method
CN106686460B (en) * 2016-12-22 2020-03-13 优地网络有限公司 Video program recommendation method and video program recommendation device
CN107426610B (en) * 2017-03-29 2020-04-28 聚好看科技股份有限公司 Video information synchronization method and device
CN110019943B (en) * 2017-09-11 2021-09-14 中国移动通信集团浙江有限公司 Video recommendation method and device, electronic equipment and storage medium
CN109871490B (en) * 2019-03-08 2021-03-09 腾讯科技(深圳)有限公司 Media resource matching method and device, storage medium and computer equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8719261B2 (en) * 2011-12-02 2014-05-06 Verizon Patent And Licensing Inc. Dynamic catalog ranking
CN104199896A (en) * 2014-08-26 2014-12-10 海信集团有限公司 Video similarity determining method and video recommendation method based on feature classification
CN108462900A (en) * 2017-02-22 2018-08-28 合网络技术(北京)有限公司 Video recommendation method and device
CN109388739A (en) * 2017-08-03 2019-02-26 合信息技术(北京)有限公司 The recommended method and device of multimedia resource
CN108733842A (en) * 2018-05-29 2018-11-02 北京奇艺世纪科技有限公司 Video recommendation method and device

Also Published As

Publication number Publication date
CN110598045A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CA3007853C (en) End-to-end deep collaborative filtering
US10140342B2 (en) Similarity calculation system, method of calculating similarity, and program
CN104053023B (en) A kind of method and device of determining video similarity
US11216518B2 (en) Systems and methods of providing recommendations of content items
CN104182449A (en) System and method for personalized video recommendation based on user interests modeling
CN111552883B (en) Content recommendation method and computer-readable storage medium
CN106127506B (en) recommendation method for solving cold start problem of commodity based on active learning
WO2019118236A1 (en) Deep learning on image frames to generate a summary
WO2023087914A1 (en) Method and apparatus for selecting recommended content, and device, storage medium and program product
CN113301017B (en) Attack detection and defense method and device based on federal learning and storage medium
CN117238451B (en) Training scheme determining method, device, electronic equipment and storage medium
US20140289334A1 (en) System and method for recommending multimedia information
CN111291217A (en) Content recommendation method and device, electronic equipment and computer readable medium
JP2014215685A (en) Recommendation server and recommendation content determination method
US20160132771A1 (en) Application Complexity Computation
CN112801053B (en) Video data processing method and device
CN105260458A (en) Video recommendation method for display apparatus and display apparatus
US20160048595A1 (en) Filtering Content Suggestions for Multiple Users
CN110085292A (en) Drug recommended method, device and computer readable storage medium
CN113254882A (en) Method, device and equipment for determining experimental result and storage medium
CN110598045B (en) Video recommendation method and device
CN108848152B (en) Object recommendation method and server
CN106919946B (en) A kind of method and device of audience selection
CN110309361B (en) Video scoring determination method, recommendation method and device and electronic equipment
KR102715895B1 (en) System and method for recommending contents based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant