CN105095508B - Multimedia content recommendation method and multimedia content recommendation apparatus - Google Patents
Multimedia content recommendation method and multimedia content recommendation apparatus
- Publication number
- CN105095508B CN105095508B CN201510549931.1A CN201510549931A CN105095508B CN 105095508 B CN105095508 B CN 105095508B CN 201510549931 A CN201510549931 A CN 201510549931A CN 105095508 B CN105095508 B CN 105095508B
- Authority
- CN
- China
- Prior art keywords
- emotion
- weighted value
- user
- descriptor
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/955—Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
- G06F16/9562—Bookmark management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Embodiments of the invention provide a multimedia content recommendation method and apparatus. The method includes: acquiring evaluation content input by a user; establishing an emotion model of the user according to the evaluation content; and recommending multimedia content to the user according to the established emotion model of the user.
Description
Technical Field
The present invention relates to the field of communication technology application, and in particular, to a multimedia content recommendation method and a multimedia content recommendation apparatus.
Background
With the continuous development of communication technology and the popularization of mobile terminals, more and more users obtain multimedia contents on the mobile terminals.
At present, users mostly acquire multimedia content either by directly searching for the content they wish to view or by selecting it from a recommendation page provided by a multimedia content provider.
Obviously, this approach cannot accommodate the diversity of user preferences.
Therefore, for a multimedia content provider, real-time and accurate multimedia content push (including video recommendation, advertisement placement, or other resource push) can bring a better viewing experience to the user, and can also increase the user's viewing frequency, loyalty, and so on.
Existing multimedia content recommendation systems usually build a model offline and then take the user's online viewing behavior as input parameters, obtaining a recommendation list through calculation by the model. There are also some improved real-time recommendation algorithms, most of which dynamically adjust the model according to statistics such as viewing and playback records. However, these methods cannot sense the user's current emotion and interest, nor can they sense changes in the user's interest; therefore, the user's interest cannot be analyzed more accurately as it changes, and multimedia content cannot be recommended on the basis of such analysis.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention are expected to provide a multimedia content recommendation method and apparatus.
The embodiment of the invention provides a multimedia content recommendation method, which comprises the following steps:
acquiring evaluation content input by a user;
establishing an emotion model of the user according to the evaluation content;
and recommending the multimedia content to the user according to the established emotion model of the user.
In the above scheme, the evaluation content includes: comments and/or barrage (bullet-screen) messages posted by the user.
In the above scheme, the establishing an emotion model of the user according to the evaluation content includes: determining emotion tag lists of the user in different emotion dimensions according to the evaluation content; wherein the emotion dimensions include at least: a like dimension; the emotion tag list includes: emotion tags and the weighted values corresponding to the emotion tags.
In the above scheme, the determining the emotion tag list of the user in different emotion dimensions according to the evaluation content includes:
extracting emotion descriptors in user evaluation content and content described by the emotion descriptors;
the emotion descriptor can be an emotion adjective or an emotion adjective and an emotion adverb;
the content described by the emotion descriptor corresponds to the emotion label in the emotion label list, and the weighted value of the emotion descriptor corresponds to the weighted value of the emotion label.
In the above scheme, the weighted value of the emotion descriptor is determined by the following method:
when the emotion descriptor only comprises an emotion adjective, determining a first weighted value, and determining the determined first weighted value as the weighted value corresponding to the emotion label; when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a first weighted value and a second weighted value, and taking the product of the first weighted value and the second weighted value as the weighted value corresponding to the emotion label; the first weighted value refers to a weighted value corresponding to the emotional adjective, and the second weighted value refers to a weighted value corresponding to the emotional adverb.
In the above scheme, when the product of the first weighted value and the second weighted value is greater than the maximum threshold of the weighted values corresponding to the emotion tags, the weighted value corresponding to the emotion tags is taken as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the emotion tags, taking the weighted value corresponding to the emotion tags as the minimum threshold.
In the above solution, before determining the weighted value of the emotion descriptor, the method further includes:
presetting a weighted value corresponding to the emotion adjectives in the emotion descriptor, or presetting a weighted value corresponding to the emotion adjectives and the emotion adverbs in the emotion descriptor.
In the above scheme, the recommending multimedia content to the user according to the established emotion model of the user includes:
determining an emotion label list of a user;
determining a tag list of multimedia content to be recommended;
determining the similarity between a tag list of multimedia content to be recommended and an emotion tag list of the user;
recommending one or more multimedia contents with the highest similarity with the emotion label list of the user in the multimedia contents to be recommended to the user.
The embodiment of the invention provides a multimedia content recommendation device, which comprises:
the system comprises an evaluation content acquisition module, an emotion model establishment module and a multimedia content recommendation module; wherein
The evaluation content acquisition module is used for acquiring the evaluation content input by the user;
the emotion model establishing module is used for establishing an emotion model of the user according to the evaluation content;
and the multimedia content recommending module is used for recommending the multimedia content to the user according to the established emotion model of the user.
In the above scheme, the evaluation content includes: comments and/or barrage (bullet-screen) messages posted by the user.
In the above scheme, the emotion model establishing module is configured to determine, according to the evaluation content, emotion tag lists of the user in different emotion dimensions; wherein the emotion dimensions include at least: a like dimension; the emotion tag list includes: emotion tags and the weighted values corresponding to the emotion tags.
In the scheme, the emotion model establishing module comprises an extraction submodule and a setting submodule; wherein,
the extraction submodule is used for extracting the emotion descriptors in the user evaluation content and the content described by the emotion descriptors; the emotion descriptor can be an emotion adjective or an emotion adjective and an emotion adverb;
the setting submodule is used for setting the content described by the emotion descriptor as an emotion label in an emotion label list and setting the weighted value of the emotion descriptor as the weighted value of the emotion label.
In the above scheme, the emotion model establishing module further includes a first determining sub-module, configured to determine a first weighted value when the emotion descriptor only includes an emotion adjective, and determine the first weighted value as a weighted value corresponding to the emotion tag; when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a first weighted value and a second weighted value, and determining the product of the first weighted value and the second weighted value as a weighted value corresponding to the emotion tag; the first weighted value refers to a weighted value corresponding to the emotional adjective, and the second weighted value refers to a weighted value corresponding to the emotional adverb.
In the above scheme, the emotion model establishing module is configured to, when a product of the first weighted value and the second weighted value is greater than a maximum threshold of the weighted values corresponding to the emotion tags, take the weighted value corresponding to the emotion tag as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the emotion tags, taking the weighted value corresponding to the emotion tags as the minimum threshold.
In the above scheme, the apparatus further comprises: and the weighted value setting module is used for presetting the weighted value corresponding to the emotion adjective in the emotion descriptor or presetting the weighted value corresponding to the emotion adjective and the emotion adverb in the emotion descriptor before the emotion model establishing module determines the weighted value of the emotion descriptor.
In the foregoing solution, the multimedia content push module includes: a second determining submodule and a recommending submodule; wherein,
the second determining submodule is used for determining an emotion label list of the user; determining a tag list of multimedia content to be recommended; determining the similarity between a tag list of multimedia content to be recommended and an emotional tag list preferred by the user;
and the recommending submodule is used for recommending one or more multimedia contents with the highest similarity with the emotion tag list of the user in the multimedia contents to be recommended to the user.
The embodiment of the invention at least has the following advantages:
The multimedia content recommendation method and apparatus provided by the embodiments of the invention acquire evaluation content input by a user, establish an emotion model of the user according to the evaluation content, and recommend multimedia content to the user according to the established emotion model. Compared with prior-art approaches that recommend multimedia content according to the user's viewing records, the method provided by the embodiments of the invention builds the emotion model from the user's own evaluation content; it can therefore reflect the user's real preferences, track those preferences as they change over time, continuously adjust the multimedia content recommended to the user, and increase the user's interest and satisfaction while watching multimedia content.
Drawings
FIG. 1 is a flowchart illustrating the steps of a multimedia content recommendation method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the steps of a multimedia content recommendation method according to a second embodiment of the present invention;
FIG. 3 is a block diagram showing the basic structure of a multimedia content recommendation apparatus of the present invention;
FIG. 4 is an exemplary flowchart of a multimedia content recommendation method according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Method embodiment one
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a multimedia content recommendation method according to the present invention is shown, which may specifically include:
step 101, obtaining evaluation content input by a user;
specifically, in this step, the multimedia provider, especially the video providing system, collects the evaluation content input by the user, where the user evaluation may be a comment made by the user for the video or a bullet screen made during the playing of the video.
Step 102, establishing an emotion model of the user according to the evaluation content;
specifically, the emotion model of the user is composed of an emotion tag list of the user in different emotion dimensions, where the different emotion dimensions at least include: like dimension;
further, the different emotion dimensions may also include an annoying dimension, or further include an imperceptible dimension, etc.
The emotion label list is composed of one or more emotion labels and weighted values corresponding to the emotion labels.
As the name suggests, the emotion tag list in the like dimension represents the emotion tags in which the user is interested and the weighted values of those emotion tags; the weighted value of each emotion tag represents the proportion of that tag within the range of content the user likes, i.e., the degree to which the user likes the content identified by the emotion tag.
In practical applications, in the field of video playing, the emotion tag may include the following elements: names of persons (names of famous persons such as actors, singers, etc.), names of movies, names of characters in dramas, types of movies, etc.; only some common examples of emotion tag elements are given here, and are not intended to limit the scope of protection of the present invention, and in practical applications, various emotion tag elements may be set according to actual needs.
It should be noted that, for the same user, the emotion tag list may cover any of the above elements, i.e., at least one of person names (names of famous people such as actors and singers), names of movies and television shows, and names of characters in the movies and television shows, and each element may contribute one or more emotion tags.
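For illustration only, the following is a minimal sketch of how such an emotion model — emotion tag lists keyed by emotion dimension — might be represented; the dictionary layout, tag names, and weighted values are assumptions, not taken from the embodiments.

```python
# Illustrative sketch only: the emotion model as a mapping from emotion
# dimension to an emotion tag list ({tag: weighted value}). All names and
# numbers here are assumed examples.
emotion_model = {
    "like": {          # like dimension: tags the user is interested in
        "star A": 0.96,
        "romance": 0.80,
    },
    "dislike": {       # dislike dimension: tags the user finds annoying
        "star B": 0.80,
    },
}

def top_tags(model: dict, dimension: str, n: int = 3):
    """Return the n emotion tags with the largest weighted values in a dimension."""
    tags = model.get(dimension, {})
    return sorted(tags.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_tags(emotion_model, "like"))  # [('star A', 0.96), ('romance', 0.8)]
```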
Step 103, recommending multimedia content to the user according to the established emotion model of the user.
Specifically, multimedia content is recommended to the user according to the established emotion model; for example, content can be recommended according to the user's emotion tag list in the like dimension, so that the multimedia content recommended in this way better matches the user's current preferences. Moreover, by tracking and analyzing the evaluation content input by the user, changes in the user's preferences can be sensed in real time and the multimedia content pushed to the user can be adjusted accordingly, thereby providing a good viewing experience.
In summary, according to the multimedia content recommendation method provided by the embodiment of the invention, the emotion model of the user can be established according to the user evaluation content, and the multimedia content can be recommended to the user according to the emotion model established for the user. Compared with the method for recommending the multimedia content for the user according to the watching record of the user in the prior art, the method for recommending the multimedia content provided by the embodiment of the invention can reflect the real preference of the user, track the preference of the user changing at any time, continuously adjust the multimedia content recommended to the user and increase the interest and satisfaction degree of the user in the process of watching the multimedia content.
Method embodiment two
Referring to fig. 2, a flowchart illustrating steps of an embodiment of a multimedia content recommendation method according to the present invention is shown, which may specifically include:
step 201, obtaining evaluation content input by a user;
specifically, the multimedia provider, especially the video providing system, collects the evaluation content input by the user, where the user evaluation may be a comment made by the user for the video and/or a barrage made during the playing of the video.
Step 202, determining emotion tag lists of the user in different emotion dimensions according to the evaluation content of the user; wherein the emotion dimensions include at least: a like dimension; the emotion tag list includes: emotion tags and the weighted values corresponding to the emotion tags;
the emotion label list is composed of one or more emotion labels and weighted values corresponding to the emotion labels.
Further, the different emotion dimensions may also include a dislike (annoying) dimension, and may further include an indifferent dimension, etc.
As the name implies, the emotion tag list in the like dimension represents the emotion tags in which the user is interested and the weighted values of those tags; the weighted value of each emotion tag represents the proportion of that tag within the range of content the user likes, i.e., the degree to which the user likes the content identified by the tag. Correspondingly, the emotion tag list in the dislike dimension represents the emotion tags the user finds annoying and the weighted values of those tags; the weighted value of each emotion tag represents the proportion of that tag within the range of content the user dislikes, i.e., the degree to which the user dislikes the content identified by the tag.
In practical applications, in the field of video playing, the emotion tag may include the following elements: names of persons (names of famous persons such as actors, singers, etc.), names of movies, names of characters in dramas, types of movies, etc.; only some common examples of emotion tag elements are given here, and are not intended to limit the scope of protection of the present invention, and in practical applications, various emotion tag elements may be set according to actual needs.
It should be noted that, for the list of emotion tags for the same user, the emotion tags in the list may include all the above elements, such as at least one of names (names of famous persons such as actors and singers), names of movies and television shows, names of characters in the movies and television shows, and one or more emotion tags may be included for each element.
Specifically, the establishing of the emotion tag list for the user according to the evaluation content of the user includes:
extracting emotion descriptors in user evaluation content and content described by the emotion descriptors;
the emotion descriptor can be an emotion adjective or an emotion adjective and an emotion adverb;
the content described by the emotion descriptor corresponds to an emotion label in an emotion label list, and the weighted value of the emotion descriptor corresponds to the weighted value of the emotion label; that is, the content described by the emotion descriptor is set as an emotion tag in an emotion tag list, and the weight of the emotion descriptor is set as the weight of the emotion tag.
In this scheme, common keywords for emotion adjectives and emotion adverbs can be preset, so that user comments are retrieved according to these keywords and the descriptions matching the keywords are extracted from the comments and analyzed. For example, keywords of common emotion adjectives may include: like, love, interesting, nice-looking, fun, boring, annoying, disgusting, laughable, unsightly, disliked, etc.; keywords of common emotion adverbs may include: very, really, especially, a little, etc. The emotion adjectives are divided into emotion adjectives under different emotion dimensions according to the emotional coloring they describe; for example, like, love, interesting, nice-looking, etc. in the above example are emotion adjectives in the like dimension, while boring, annoying, disgusting, unsightly, etc. are emotion adjectives in the dislike dimension.
It should be noted that the emotion adjectives listed above are not all adjectives in the language sense, but only specific words defined for the accuracy of emotion recognition of the user, including, but not limited to, adjectives and verbs in the language sense.
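As a rough illustration of the keyword-based retrieval described above, the sketch below extracts emotion descriptors and the content they describe from a comment; the keyword sets, the whitespace tokenization, and the assumed "adverb + adjective + described content" comment shape are simplifications, not the patent's actual extraction logic.

```python
# Simplified sketch of keyword-based extraction of emotion descriptors.
# The keyword sets and the comment structure assumed here are illustrative only.
LIKE_ADJECTIVES = {"like", "love", "interesting", "nice-looking"}
DISLIKE_ADJECTIVES = {"boring", "annoying", "disgusting", "unsightly", "dislike"}
ADVERBS = {"very", "really", "especially", "quite"}

def extract_descriptors(comment: str):
    """Return (adverb, adjective, dimension, described content) tuples found in a comment."""
    words = comment.lower().split()
    found = []
    for i, word in enumerate(words):
        if word in LIKE_ADJECTIVES:
            dimension = "like"
        elif word in DISLIKE_ADJECTIVES:
            dimension = "dislike"
        else:
            continue
        adverb = words[i - 1] if i > 0 and words[i - 1] in ADVERBS else None
        described = " ".join(words[i + 1:])   # the content the descriptor describes
        found.append((adverb, word, dimension, described))
    return found

print(extract_descriptors("really like star A"))
# [('really', 'like', 'like', 'star a')]
```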
Specifically, the weighted value of the emotion descriptor is determined by the following method:
when the emotion descriptor only comprises emotion adjectives, determining a weighted value (hereinafter referred to as a first weighted value) corresponding to the emotion adjectives, and determining the determined first weighted value as a weighted value corresponding to the label;
when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a weighted value (hereinafter referred to as a first weighted value) corresponding to the emotion adjective in the emotion descriptor and a weighted value (hereinafter referred to as a second weighted value) corresponding to the emotion adverb in the emotion descriptor, and taking the product of the first weighted value and the second weighted value as the weighted value corresponding to the label.
Specifically, when the product of the first weighted value and the second weighted value is greater than the maximum threshold of the weighted values corresponding to the tags, the weighted value corresponding to the tag is taken as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the tags, taking the weighted value corresponding to the tags as the minimum threshold.
Specifically, before determining the weighted value of the emotion descriptor, the method further includes:
presetting a weighted value corresponding to the emotion adjectives in the emotion descriptor, or presetting a weighted value corresponding to the emotion adjectives and the emotion adverbs in the emotion descriptor.
An exemplary weighted value setting scheme for emotion adjectives is shown in Table 1, wherein the weighted value corresponding to an emotion adjective expressing liking is a positive number, the weighted value corresponding to an emotion adjective expressing dislike is a negative number, and the weighted values corresponding to different emotion adjectives differ according to the degree of liking or dislike expressed; the greater the degree of liking or dislike, the greater the absolute value of the weighted value.
Emotional adjective | Weighted value |
---|---|
Like | 0.8 |
Interesting | 0.6 |
Annoying | -0.6 |
Disgusting | -0.9 |
TABLE 1
An exemplary weighted value setting scheme for emotion adverbs is shown in Table 2. According to Table 2, when an emotion adverb appears together with an emotion adjective, the more strongly the adverb intensifies the adjective, the larger the weighted value corresponding to the adverb; the minimum weighted value is 1.0.
Emotional adverb | Weighted value |
---|---|
Very much | 1.3 |
Quite | 1.2 |
A little | 1.1 |
TABLE 2
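The weighting rule above can be sketched as follows, using the example values of Tables 1 and 2; the clamp thresholds of ±1.0 are an assumption, since the embodiment only states that a maximum and a minimum threshold exist for the emotion tag's weighted value.

```python
# Sketch of the emotion tag weighting rule: first weighted value (adjective),
# optionally multiplied by the second weighted value (adverb), then clamped.
# Adjective/adverb values follow Tables 1 and 2; the thresholds are assumed.
from typing import Optional

ADJECTIVE_WEIGHTS = {"like": 0.8, "interesting": 0.6, "annoying": -0.6, "disgusting": -0.9}
ADVERB_WEIGHTS = {"very much": 1.3, "quite": 1.2, "a little": 1.1}

MAX_THRESHOLD = 1.0   # assumed maximum threshold of the tag's weighted value
MIN_THRESHOLD = -1.0  # assumed minimum threshold of the tag's weighted value

def tag_weight(adjective: str, adverb: Optional[str] = None) -> float:
    weight = ADJECTIVE_WEIGHTS[adjective]          # first weighted value
    if adverb is not None:
        weight *= ADVERB_WEIGHTS[adverb]           # second weighted value
    return max(MIN_THRESHOLD, min(MAX_THRESHOLD, weight))

print(tag_weight("like"))                      # 0.8
print(tag_weight("like", "quite"))             # 0.8 * 1.2 = 0.96
print(tag_weight("disgusting", "very much"))   # -0.9 * 1.3 = -1.17 -> clamped to -1.0
```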
Step 203, recommending the multimedia content to the user according to the emotion tag list of the user.
Specifically, recommending multimedia content to a user according to an emotion tag list of the user includes:
determining an emotion label list of a user;
determining a tag list of multimedia content to be recommended;
determining the similarity between a tag list of multimedia content to be recommended and an emotion tag list of the user;
recommending one or more multimedia contents with the highest similarity with the emotion label list of the user in the multimedia contents to be recommended to the user.
In this way, the multimedia content recommended to the user better matches the user's preferences at the current moment; moreover, by tracking and analyzing the evaluation content input by the user, changes in the user's preferences can be sensed in real time and the multimedia content pushed to the user can be adjusted accordingly, thereby providing a good viewing experience.
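A minimal sketch of this matching step follows, assuming tag lists are held as {tag: weighted value} dictionaries with missing tags counted as 0 and cosine similarity as the measure (the embodiment equally allows other measures such as Euclidean distance); the user and movie values are illustrative.

```python
# Sketch of step 203: rank candidate multimedia content by the similarity of
# its tag list to the user's emotion tag list, then recommend the top items.
# Representation and similarity measure are assumptions for illustration.
import math

def cosine_similarity(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user_tags: dict, candidates: dict, n: int = 1) -> list:
    """Return the names of the n candidates most similar to the user's emotion tag list."""
    ranked = sorted(candidates,
                    key=lambda name: cosine_similarity(user_tags, candidates[name]),
                    reverse=True)
    return ranked[:n]

user_tags = {"star A": 0.96, "star B": -0.8, "romance": 0.8}
candidates = {"M1": {"star A": 1.0, "star B": 0.2, "romance": 1.0},
              "M2": {"star A": 1.0, "star B": 0.8, "romance": 0.0}}
print(recommend(user_tags, candidates))   # ['M1']
```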
Device embodiment
Referring to fig. 3, a block diagram of a multimedia content recommendation apparatus according to an embodiment of the present invention is shown, the apparatus including: an evaluation content acquisition module 31, an emotion model establishment module 32 and a multimedia content recommendation module 33; wherein
The evaluation content obtaining module 31 is configured to obtain evaluation content input by a user;
the emotion model establishing module 32 is configured to establish an emotion model of the user according to the evaluation content;
the multimedia content recommending module 33 is configured to recommend multimedia content to the user according to the established emotion model of the user.
Specifically, the evaluation content may be a comment made by a user for a video and/or a barrage made during the playing of the video.
Specifically, the emotion model establishing module 32 is configured to determine, according to the evaluation content of the user, emotion tag lists of the user in different emotion dimensions; wherein the emotion dimensions include at least: a like dimension; the emotion tag list includes: emotion tags and the weighted values corresponding to the emotion tags.
Specifically, the emotion model establishing module 32 includes an extraction submodule and a setting submodule; wherein,
the extraction submodule is used for extracting the emotion descriptors in the user evaluation content and the content described by the emotion descriptors; the emotion descriptor can be an emotion adjective or an emotion adjective and an emotion adverb;
the setting submodule is used for setting the content described by the emotion descriptor as an emotion label in an emotion label list and setting the weighted value of the emotion descriptor as the weighted value of the emotion label.
More specifically, the emotion model establishing module 32 further includes a first determining sub-module, configured to determine a first weighted value when the emotion descriptor includes only an emotion adjective, and determine the first weighted value as the weighted value corresponding to the emotion tag;
when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a first weighted value and a second weighted value, and taking the product of the first weighted value and the second weighted value as the weighted value corresponding to the tag;
in the above scheme, the first weighted value is a weighted value corresponding to the emotion adjective, and the second weighted value is a weighted value corresponding to the emotion adverb.
Specifically, the emotion model establishing module 32 is configured to, when a product of the first weighted value and the second weighted value is greater than a maximum threshold of the weighted values corresponding to the emotion tags, take the weighted value corresponding to the emotion tag as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the emotion tags, taking the weighted value corresponding to the emotion tags as the minimum threshold.
In another embodiment of the present invention, the apparatus further comprises: a weighted value setting module 34, configured to preset a weighted value corresponding to an emotion adjective in the emotion descriptor, or preset a weighted value corresponding to an emotion adjective and an emotion adverb in the emotion descriptor before the emotion model establishing module 32 determines the weighted value of the emotion descriptor.
Specifically, the multimedia content pushing module 33 includes: a second determining submodule and a recommending submodule; wherein,
the second determining submodule is used for determining an emotion label list of the user; determining a tag list of multimedia content to be recommended; determining the similarity between a tag list of multimedia content to be recommended and an emotion tag list of the user;
and the recommending submodule is used for recommending one or more multimedia contents with the highest similarity with the emotion tag list of the user in the multimedia contents to be recommended to the user.
In a specific implementation process, the evaluation content obtaining module 31, the emotion model establishing module 32, the multimedia content recommending module 33, the weighted value setting module 34, the extraction sub-module, the setting sub-module, the first determining sub-module, the second determining sub-module, and the recommending sub-module may be implemented by at least one of a Central Processing Unit (CPU), a Micro Processing Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) in the multimedia content recommendation apparatus.
Application example
Referring to fig. 4, a flowchart of a multimedia content recommendation method according to the present invention is shown, which specifically includes:
step 401: managing a list of multimedia content tags to be recommended;
specifically, a tag list is added to multimedia content (video, advertisement, and goods) that may be recommended, and the tag list may be generated by manual editing or may be automatically generated according to the attribute of the multimedia content to be recommended. For example, the tag list may be automatically generated according to actors, subject matters, year, region, background, etc. of the multimedia content to be recommended, or may be generated by extracting keywords from related comments of the user on the corresponding multimedia content. Meanwhile, a weight value is set for each label in the label list to indicate the degree of correlation between the corresponding label and the corresponding multimedia content, so that each piece has a label set. For example, if there is a movie M1, a love show with star A, B, C, where stars a and B are heroes, and a is very heavy, and C is only a match, then the tag list tag (M1) for this movie may be:
tag (M1) { (star a: 1), (star B: 0.8), (star C: 0.3), (love plate: 1) };
in the above tag list, the content enclosed in each bracket is a tag and the weighted value of the tag, and the corresponding tag and the weighted value are separated by a colon.
It is noted that there may be a very large number of tags in a tag list of multimedia content, where only 4 tags are listed for ease of illustration. In the label list, the weighted value of the label is between [0, 1], and when the value of one label is closer to 1, the correlation degree between the label and the corresponding multimedia content is higher; accordingly, as the value of a tag approaches 0, the tag is associated with the corresponding multimedia content to a lower degree.
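As a toy illustration of such a tag set, the sketch below stores each content item's tags as a {tag: weighted value} dictionary and checks that every weighted value lies in [0, 1]; the representation and the validation rule are assumptions for illustration.

```python
# Sketch of step 401: each piece of multimedia content to be recommended keeps
# a tag set {tag: weighted value}, with every weighted value expected in [0, 1].
def make_tag_list(pairs: dict) -> dict:
    """Validate that every tag weighted value lies in [0, 1] and return the tag set."""
    for tag, weight in pairs.items():
        if not 0.0 <= weight <= 1.0:
            raise ValueError(f"weighted value for tag {tag!r} must be in [0, 1], got {weight}")
    return dict(pairs)

# The Tag(M1) example from the text above.
tag_m1 = make_tag_list({"star A": 1.0, "star B": 0.8, "star C": 0.3, "romance": 1.0})
print(tag_m1)   # {'star A': 1.0, 'star B': 0.8, 'star C': 0.3, 'romance': 1.0}
```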
Step 402: acquiring user evaluation content and establishing an emotion model of a user;
in this step, it is assumed that an exemplary emotion adjective weighting value list is shown in table 3, and an exemplary emotion adverb weighting value list is shown in table 4:
Emotional adjective | Weighted value |
---|---|
Like | 0.8 |
Interesting | 0.6 |
Annoying | -0.6 |
Disgusting | -0.9 |
Dislike | -0.8 |
TABLE 3
Emotional adverb | Weighted value |
---|---|
Very much | 1.4 |
Especially | 1.3 |
Quite | 1.2 |
A little | 1.1 |
TABLE 4
At this time, the evaluation content of user A over a certain period of time, for example one day, is acquired. Suppose it is found that user A has mentioned "quite like star A", "dislike star B", and "like romance"; then, from the above information and Tables 3 and 4 (for example, for star A: 0.8 × 1.2 = 0.96), the emotion tag list of the user, FeelingTag(A), can be determined as:
FeelingTag(A) = { (star A: 0.96), (star B: -0.8), (romance: 0.8) }.
Step 403: and matching the multimedia content.
Because the emotion label list of the user and the label list of the multimedia content to be recommended are vector sets, the similarity of the vector sets can be calculated in various ways, such as Euclidean distance, cosine similarity and the like. The following is illustrated by way of example:
Taking the emotion tag list of user A as an example, assume that there are two movies, M1 and M2, whose tag lists are:
Tag(M1) = { (star A: 1), (star B: 0.2), (romance: 1) }
Tag(M2) = { (star A: 1), (star B: 0.8), (romance: 0) }
The similarity between the tag list of each movie and the emotion tag list of user A is calculated here as the Euclidean distance. Denoting the distance between movie M1 and user A's emotion tag list as Similarity(M, M1), and the distance between movie M2 and user A's emotion tag list as Similarity(M, M2), the calculation results are:
Similarity(M,M1)=1.021;
Similarity(M,M2)=1.789;
It can be seen that movie M1 has the smaller Euclidean distance to user A's emotion tag list, i.e., it is the more similar of the two; therefore, movie M1 can be recommended to user A.
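The distance figures above can be reproduced with the short sketch below, treating each tag list as a {tag: weighted value} dictionary with missing tags counted as 0; since Euclidean distance is a dissimilarity, the smaller value indicates the more similar content.

```python
# Reproducing Similarity(M, M1) and Similarity(M, M2) as Euclidean distances.
import math

def euclidean_distance(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    return math.sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

feeling_tag_a = {"star A": 0.96, "star B": -0.8, "romance": 0.8}
tag_m1 = {"star A": 1.0, "star B": 0.2, "romance": 1.0}
tag_m2 = {"star A": 1.0, "star B": 0.8, "romance": 0.0}

print(round(euclidean_distance(feeling_tag_a, tag_m1), 3))   # 1.021
print(round(euclidean_distance(feeling_tag_a, tag_m2), 3))   # 1.789
```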
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for recommending multimedia content provided by the invention are described in detail, and the principle and the implementation mode of the invention are explained by applying a specific example, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (16)
1. A method for multimedia content recommendation, the method comprising:
acquiring evaluation content input by a user;
establishing an emotion model of the user according to the evaluation content; the emotion model is established by the following method: retrieving the evaluation content according to preset keywords to extract emotion descriptors in the evaluation content and the content described by the emotion descriptors; the keywords are set according to common emotion descriptors; the emotion descriptor at least comprises an emotion adjective;
dividing emotion adjectives in the emotion descriptor into emotion adjectives on different emotion dimensions according to the emotion colors described by the emotion descriptor;
setting the content described by the emotion descriptor as an emotion label according to the emotion adjectives in the emotion descriptor in different emotion dimensions and the content described by the emotion descriptor, and setting the weighted value of the emotion descriptor as the weighted value of the emotion label to obtain an emotion label list of the user in different emotion dimensions; the emotion tag list includes: the emotion label and a weighted value corresponding to the emotion label;
and recommending the multimedia content to the user according to the established emotion model of the user.
2. The method of claim 1, wherein the evaluation content comprises: comments and/or barrage (bullet-screen) messages posted by the user.
3. The method of claim 2, wherein the emotion dimensions comprise at least: a like dimension.
4. The method of claim 3, wherein the emotion descriptors further comprise emotion adverbs.
5. The method of claim 4, wherein the weight of the emotion descriptor is determined by:
when the emotion descriptor only comprises an emotion adjective, determining a first weighted value, and determining the determined first weighted value as the weighted value corresponding to the emotion label; when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a first weighted value and a second weighted value, and taking the product of the first weighted value and the second weighted value as the weighted value corresponding to the emotion label; the first weighted value refers to a weighted value corresponding to the emotional adjective, and the second weighted value refers to a weighted value corresponding to the emotional adverb.
6. The method of claim 5, wherein when the product of the first weighting value and the second weighting value is greater than a maximum threshold of the weighting values corresponding to the emotion tags, the weighting value corresponding to the emotion tag is taken as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the emotion tags, taking the weighted value corresponding to the emotion tags as the minimum threshold.
7. The method of claim 5, wherein prior to determining the weighting values for the emotion descriptors, the method further comprises:
presetting a weighted value corresponding to the emotion adjectives in the emotion descriptor, or presetting a weighted value corresponding to the emotion adjectives and the emotion adverbs in the emotion descriptor.
8. The method according to any one of claims 3 to 7, wherein the recommending multimedia content to the user according to the established emotion model of the user comprises:
determining an emotion label list of a user;
determining a tag list of multimedia content to be recommended;
determining the similarity between a tag list of multimedia content to be recommended and an emotion tag list of the user;
recommending one or more multimedia contents with the highest similarity with the emotion label list of the user in the multimedia contents to be recommended to the user.
9. An apparatus for recommending multimedia contents, said apparatus comprising: the system comprises an evaluation content acquisition module, an emotion model establishment module and a multimedia content recommendation module; wherein
The evaluation content acquisition module is used for acquiring the evaluation content input by the user;
the emotion model establishing module is used for establishing an emotion model of the user according to the evaluation content; the emotion model is established by the following method: retrieving the evaluation content according to preset keywords to extract emotion descriptors in the evaluation content and the content described by the emotion descriptors; the keywords are set according to common emotion descriptors; the emotion descriptor at least comprises an emotion adjective;
dividing emotion adjectives in the emotion descriptor into emotion adjectives on different emotion dimensions according to the emotion colors described by the emotion descriptor;
setting the content described by the emotion descriptor as an emotion label according to the emotion adjectives in the emotion descriptor in different emotion dimensions and the content described by the emotion descriptor, and setting the weighted value of the emotion descriptor as the weighted value of the emotion label to obtain an emotion label list of the user in different emotion dimensions; the emotion tag list includes: the emotion label and a weighted value corresponding to the emotion label;
and the multimedia content recommending module is used for recommending the multimedia content to the user according to the established emotion model of the user.
10. The apparatus of claim 9, wherein the evaluation content comprises: comments and/or barrage (bullet-screen) messages posted by the user.
11. The apparatus of claim 10, wherein the emotion dimensions comprise at least: a like dimension.
12. The apparatus of claim 11, wherein the emotion descriptors further comprise emotion adverbs.
13. The apparatus of claim 12, wherein the emotion model creation module further comprises a first determination sub-module for determining a first weighted value when the emotion descriptor includes only an emotion adjective, and determining the first weighted value as the weighted value corresponding to the emotion tag; when the emotion descriptor comprises an emotion adjective and an emotion adverb, determining a first weighted value and a second weighted value, and determining the product of the first weighted value and the second weighted value as the weighted value corresponding to the emotion tag; the first weighted value refers to a weighted value corresponding to the emotional adjective, and the second weighted value refers to a weighted value corresponding to the emotional adverb.
14. The apparatus of claim 13, wherein the emotion model establishing module is configured to, when a product of the first weighting value and the second weighting value is greater than a maximum threshold of the weighting values corresponding to the emotion tags, take the weighting value corresponding to the emotion tag as the maximum threshold; and when the product of the first weighted value and the second weighted value is smaller than the minimum threshold of the weighted values corresponding to the emotion tags, taking the weighted value corresponding to the emotion tags as the minimum threshold.
15. The apparatus of claim 13, further comprising: and the weighted value setting module is used for presetting the weighted value corresponding to the emotion adjective in the emotion descriptor or presetting the weighted value corresponding to the emotion adjective and the emotion adverb in the emotion descriptor before the emotion model establishing module determines the weighted value of the emotion descriptor.
16. The apparatus according to any one of claims 11 to 15, wherein the multimedia content push module comprises: a second determining submodule and a recommending submodule; wherein,
the second determining submodule is used for determining an emotion label list of the user; determining a tag list of multimedia content to be recommended; determining the similarity between a tag list of multimedia content to be recommended and an emotional tag list preferred by the user;
and the recommending submodule is used for recommending one or more multimedia contents with the highest similarity with the emotion tag list of the user in the multimedia contents to be recommended to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510549931.1A CN105095508B (en) | 2015-08-31 | 2015-08-31 | A kind of multimedia content recommended method and multimedia content recommendation apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510549931.1A CN105095508B (en) | 2015-08-31 | 2015-08-31 | A kind of multimedia content recommended method and multimedia content recommendation apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105095508A CN105095508A (en) | 2015-11-25 |
CN105095508B true CN105095508B (en) | 2019-11-08 |
Family
ID=54575943
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510549931.1A Active CN105095508B (en) | 2015-08-31 | 2015-08-31 | A kind of multimedia content recommended method and multimedia content recommendation apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105095508B (en) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105205699A (en) * | 2015-09-17 | 2015-12-30 | 北京众荟信息技术有限公司 | User label and hotel label matching method and device based on hotel comments |
CN105893436A (en) * | 2015-12-14 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | Single-account multi-hobby recommendation method and device of video website |
CN105574132A (en) * | 2015-12-15 | 2016-05-11 | 海信集团有限公司 | Multimedia file recommendation method and terminal |
US10580064B2 (en) * | 2015-12-31 | 2020-03-03 | Ebay Inc. | User interface for identifying top attributes |
CN105824923A (en) * | 2016-03-17 | 2016-08-03 | 海信集团有限公司 | Movie and video resource recommendation method and device |
CN105843922A (en) * | 2016-03-25 | 2016-08-10 | 乐视控股(北京)有限公司 | Multimedia classification recommendation method, apparatus and system |
CN105979338B (en) * | 2016-05-16 | 2019-07-09 | 武汉斗鱼网络科技有限公司 | A kind of system and method according to barrage content mood matching color |
CN106231428A (en) * | 2016-07-29 | 2016-12-14 | 乐视控股(北京)有限公司 | A kind of video recommendation method and device |
CN106303675B (en) * | 2016-08-24 | 2019-11-15 | 北京奇艺世纪科技有限公司 | A kind of video clip extracting method and device |
CN107547922B (en) * | 2016-10-28 | 2019-12-17 | 腾讯科技(深圳)有限公司 | Information processing method, device, system and computer readable storage medium |
CN106572399A (en) * | 2016-11-18 | 2017-04-19 | 广州爱九游信息技术有限公司 | Information recommendation method and device, server and user terminal |
CN106792208A (en) * | 2016-11-24 | 2017-05-31 | 武汉斗鱼网络科技有限公司 | Video preference information processing method, apparatus and system |
CN106792014B (en) * | 2016-11-25 | 2019-02-26 | 广州酷狗计算机科技有限公司 | A kind of method, apparatus and system of recommendation of audio |
CN107197368B (en) * | 2017-05-05 | 2019-10-18 | 中广热点云科技有限公司 | Determine user to the method and system of multimedia content degree of concern |
CN107509116A (en) * | 2017-09-08 | 2017-12-22 | 咪咕互动娱乐有限公司 | A kind of information-pushing method, device and storage medium |
CN108513175B (en) * | 2018-03-29 | 2020-05-22 | 网宿科技股份有限公司 | Bullet screen information processing method and system |
CN108737859A (en) * | 2018-05-07 | 2018-11-02 | 华东师范大学 | Video recommendation method based on barrage and device |
CN108681919A (en) * | 2018-05-10 | 2018-10-19 | 苏州跃盟信息科技有限公司 | A kind of content delivery method and device |
CN110209921B (en) * | 2018-05-10 | 2023-04-11 | 深圳市雅阅科技有限公司 | Method and device for pushing media resource, storage medium and electronic device |
CN109271423A (en) * | 2018-09-29 | 2019-01-25 | 深圳市轱辘汽车维修技术有限公司 | A kind of object recommendation method, apparatus, terminal and computer readable storage medium |
CN109284403A (en) * | 2018-11-12 | 2019-01-29 | 珠海格力电器股份有限公司 | Method and device for processing media information, storage medium and electronic device |
CN109862397B (en) * | 2019-02-02 | 2021-11-09 | 广州虎牙信息科技有限公司 | Video analysis method, device, equipment and storage medium |
CN110267111A (en) * | 2019-05-24 | 2019-09-20 | 平安科技(深圳)有限公司 | Video barrage analysis method, device and storage medium, computer equipment |
CN113766281B (en) * | 2021-09-10 | 2022-11-22 | 北京快来文化传播集团有限公司 | Short video recommendation method, electronic device and computer-readable storage medium |
CN113704630B (en) * | 2021-10-27 | 2022-04-22 | 武汉卓尔数字传媒科技有限公司 | Information pushing method and device, readable storage medium and electronic equipment |
CN117312658B (en) * | 2023-09-08 | 2024-04-09 | 广州风腾网络科技有限公司 | Popularization method and system based on big data analysis |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103714071A (en) * | 2012-09-29 | 2014-04-09 | 株式会社日立制作所 | Label emotional tendency quantifying method and label emotional tendency quantifying system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120179751A1 (en) * | 2011-01-06 | 2012-07-12 | International Business Machines Corporation | Computer system and method for sentiment-based recommendations of discussion topics in social media |
CN102611785B (en) * | 2011-01-20 | 2014-04-02 | 北京邮电大学 | Personalized active news recommending service system and method for mobile phone user |
CN102387207A (en) * | 2011-10-21 | 2012-03-21 | 华为技术有限公司 | Push method and system based on user feedback information |
CN103678329B (en) * | 2012-09-04 | 2018-05-04 | 中兴通讯股份有限公司 | Recommend method and device |
CN104331451B (en) * | 2014-10-30 | 2017-12-26 | 南京大学 | A kind of recommendation degree methods of marking of network user's comment based on theme |
-
2015
- 2015-08-31 CN CN201510549931.1A patent/CN105095508B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103714071A (en) * | 2012-09-29 | 2014-04-09 | 株式会社日立制作所 | Label emotional tendency quantifying method and label emotional tendency quantifying system |
Also Published As
Publication number | Publication date |
---|---|
CN105095508A (en) | 2015-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105095508B (en) | A kind of multimedia content recommended method and multimedia content recommendation apparatus | |
CN108009228B (en) | Method and device for setting content label and storage medium | |
CN106331778B (en) | Video recommendation method and device | |
AU2016247184B2 (en) | Attribute weighting for media content-based recommendation | |
US9471936B2 (en) | Web identity to social media identity correlation | |
CN108694223B (en) | User portrait database construction method and device | |
CN105378720B (en) | Media content discovery and role organization techniques | |
CN111178970B (en) | Advertisement putting method and device, electronic equipment and computer readable storage medium | |
CN106326391B (en) | Multimedia resource recommendation method and device | |
JP2021103543A (en) | Use of machine learning for recommending live-stream content | |
WO2017096877A1 (en) | Recommendation method and device | |
CN103984741A (en) | Method and system for extracting user attribute information | |
CN107704560B (en) | Information recommendation method, device and equipment | |
CN104469430A (en) | Video recommending method and system based on context and group combination | |
Chiny et al. | Netflix recommendation system based on TF-IDF and cosine similarity algorithms | |
US20140074828A1 (en) | Systems and methods for cataloging consumer preferences in creative content | |
CN105718545A (en) | Recommendation method and device of multimedia resources | |
CN111597446B (en) | Content pushing method and device based on artificial intelligence, server and storage medium | |
US20090144226A1 (en) | Information processing device and method, and program | |
CN105045859A (en) | User feature analysis method and apparatus for intelligent device | |
CN112507163A (en) | Duration prediction model training method, recommendation method, device, equipment and medium | |
KR20170107868A (en) | Method and system to recommend music contents by database composed of user's context, recommended music and use pattern | |
CN116821475A (en) | Video recommendation method and device based on client data and computer equipment | |
US20240220537A1 (en) | Metadata tag identification | |
CN108460131B (en) | Classification label processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |