CN111565322A - User emotional tendency information obtaining method and device and electronic equipment - Google Patents

User emotional tendency information obtaining method and device and electronic equipment

Info

Publication number
CN111565322A
CN111565322A (application CN202010406415.4A; granted publication CN111565322B)
Authority
CN
China
Prior art keywords
video
comment
comments
target
target user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010406415.4A
Other languages
Chinese (zh)
Other versions
CN111565322B (en)
Inventor
宁宇光
李晨
阳任科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202010406415.4A
Publication of CN111565322A
Application granted
Publication of CN111565322B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355 Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4355 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream involving reformatting operations of additional data, e.g. HTML pages on a television screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667 Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles

Abstract

The embodiments of the present application provide a method and an apparatus for obtaining user emotional tendency information, and an electronic device, relating to the technical field of information processing. The method includes: estimating, from the comments a target user has posted on a target video, initial information representing the target user's emotional tendency toward the target video; determining a first mixing degree between positive comments and negative comments among the comments on the target video; estimating, from the comments the target user has posted on commented videos and a second mixing degree between positive and negative comments among the comments on each commented video, the degree to which the target user's commenting on videos differs from other users' commenting; and correcting the initial information with the first mixing degree and the difference degree to obtain final information representing the target user's emotional tendency toward the target video. Applying the scheme provided by the embodiments of the application improves the accuracy of the obtained information representing a user's emotional tendency toward a video.

Description

User emotional tendency information obtaining method and device and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and an apparatus for obtaining user emotional tendency information, and an electronic device.
Background
In order to recommend videos of interest to a user, a video platform typically evaluates the user's emotional tendency toward individual videos. For example, if user U1's emotional tendency toward video X is "like" and user U2's is "dislike", then when recommending videos, videos similar to video X can be recommended to user U1 and withheld from user U2.
The user's emotional tendency toward each video can be represented by quantized information. In the prior art, information representing a user's emotional tendency toward a watched video is generally obtained directly from the bullet-screen (danmaku) comments the user posts while watching, or from the list-style comments posted in the comment area. Although information representing the user's emotional tendency can be obtained this way, the comments a user posts often address only a local portion of the watched video and are therefore limited, so the accuracy of the information obtained by this method is low.
Disclosure of Invention
The embodiment of the application aims to provide a method and a device for obtaining user emotional tendency information and electronic equipment, so that the accuracy of the obtained information representing the user emotional tendency to the video is improved. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for obtaining user emotional tendency information, where the method includes:
estimating initial information representing the emotional tendency of a target user to a target video according to comments made by the target user to the target video;
determining a first degree of mixing between positive and negative comments among the comments for the target video;
estimating the degree to which the target user's commenting on videos differs from other users' commenting, according to the comments posted by the target user on commented videos and a second mixing degree between positive and negative comments among the comments on each commented video, wherein a commented video is a video on which the target user has posted comments;
and correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target user's emotional tendency toward the target video.
In an embodiment of the present application, the predicting initial information representing an emotional tendency of a target user to a target video according to a comment issued by the target user to the target video includes:
obtaining a first total number of positive comments and a second total number of negative comments among the comments posted by the target user on the target video;
and estimating initial information representing the target user's emotional tendency to the target video according to the first total quantity and the second total quantity.
In an embodiment of the application, the predicting initial information representing the emotional tendency of the target user to the target video according to the first total number and the second total number includes:
calculating a difference between the first total number and the second total number;
estimating initial information representing the target user's emotional tendency to the target video according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein V_tag represents the initial information and diff represents the difference.
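A minimal sketch of this estimation step, assuming each comment has already been classified as positive or negative upstream; the squashing function g(diff) = tanh(diff) is an assumption, since the patent's exact expression appears only as an image:

```python
import math

def estimate_initial_info(comment_polarities):
    """Estimate V_tag, the initial emotional-tendency information, from one
    user's comments on one video."""
    polarities = list(comment_polarities)  # +1 = positive comment, -1 = negative
    first_total = sum(1 for p in polarities if p > 0)   # positive comments
    second_total = sum(1 for p in polarities if p < 0)  # negative comments
    diff = first_total - second_total
    # Assumed squashing g(diff) = tanh(diff), mapping the raw count
    # difference into (-1, 1); the patent's actual g() is not reproduced.
    return math.tanh(diff)
```

Under this assumed squashing, three positive comments and one negative comment give diff = 2 and V_tag = tanh(2) ≈ 0.96.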
In one embodiment of the application, the determining a first degree of mixing between positive comments and negative comments in the comments for the target video includes:
counting the total number of positive comments among the comments on the target video as a third total number;
counting the total number of negative comments among the comments on the target video as a fourth total number;
calculating information entropy between the third total number and the fourth total number;
and determining a first mixing degree between the positive comments and the negative comments in the comments of the target video according to the third total number, the fourth total number and the information entropy.
In an embodiment of the application, the determining, according to the third total number, the fourth total number, and the information entropy, a first mixing degree between positive comments and negative comments in the comments for the target video includes:
determining a first degree of mixing between positive and negative comments in the comments for the target video according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein v_+ represents the third total number, v_- represents the fourth total number, H(X) represents the information entropy, and ε represents the first mixing degree.
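The information entropy H(X) of the third and fourth total numbers is the binary entropy of the positive-comment proportion, sketched below. Using H(X) itself as the first mixing degree is an assumption: the patent combines v_+, v_-, and H(X) in an expression that appears only as an image.

```python
import math

def binary_entropy(v_pos, v_neg):
    """Information entropy H(X) of the positive/negative comment split."""
    total = v_pos + v_neg
    if total == 0 or v_pos == 0 or v_neg == 0:
        return 0.0  # an empty or one-sided comment pool has zero entropy
    p = v_pos / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def mixing_degree(v_pos, v_neg):
    """First mixing degree between positive and negative comments.

    H(X) is used directly as a stand-in for the patent's image-only
    expression: it is already 0 for a one-sided pool and 1 for a
    perfectly mixed 50/50 split.
    """
    return binary_entropy(v_pos, v_neg)
```

For example, `mixing_degree(500, 500)` is 1.0 (evenly mixed comments) while `mixing_degree(1000, 0)` is 0.0 (uniformly positive comments).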
In an embodiment of the application, estimating the degree to which the target user's commenting on videos differs from other users' commenting, according to the comments posted by the target user on commented videos and the second mixing degree between positive and negative comments among the comments on each commented video, includes:
obtaining, for each reference comment, reference information reflecting the target user's emotional tendency toward the commented video the reference comment addresses, wherein the reference comments are: the comments posted by the target user on each commented video;
estimating the degree to which the target user's commenting on videos differs from other users' commenting, according to the difference between the reference information corresponding to each reference comment and the second mixing degree corresponding to the commented video the reference comment addresses, wherein the second mixing degree corresponding to a commented video is: the degree of mixing between positive and negative comments among the comments on that commented video.
In an embodiment of the application, estimating the difference degree according to those differences includes:
estimating the degree to which the target user's commenting on videos differs from other users' commenting according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein c represents the difference degree, f() represents a preset normalization function, n represents the number of commented videos, i represents the serial number of a commented video, j represents the serial number of a comment posted by the target user on the i-th commented video, m_i represents the number of comments posted by the target user on the i-th commented video, (V_tag)_ij represents the reference information corresponding to the j-th comment posted by the target user on the i-th commented video, and ε_i represents the second mixing degree corresponding to the i-th commented video.
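A hedged sketch of the difference-degree step: both the aggregation chosen here (mean absolute deviation of each (V_tag)_ij from ε_i) and the normalization function f (a clip to [0, 1]) are assumptions, since the patent's expression appears only as an image.

```python
def difference_degree(ref_info, second_mixing,
                      f=lambda x: min(max(x, 0.0), 1.0)):
    """Difference degree c of the target user's commenting habit.

    ref_info[i][j]  : (V_tag)_ij, reference information for the j-th comment
                      the target user posted on the i-th commented video.
    second_mixing[i]: epsilon_i, second mixing degree of the i-th video.
    f               : preset normalization function (assumed clip to [0, 1]).
    """
    deviations = [
        abs(v - second_mixing[i])           # |(V_tag)_ij - epsilon_i|
        for i, comments in enumerate(ref_info)
        for v in comments
    ]
    if not deviations:
        return 0.0  # no commented videos: no measurable habit difference
    return f(sum(deviations) / len(deviations))
```

A user whose per-comment reference information sits close to each video's crowd mixing degree gets a small c; a user who consistently deviates from the crowd gets a c near 1.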
In an embodiment of the application, the modifying the initial information by using the first mixing degree and the difference degree to obtain final information indicating the emotional tendency of the target user to the target video includes:
and acquiring final information representing the emotional tendency of the target user to the target video according to the following expression:
V_src = c·V_tag + (1 − c)·ε
wherein V_src represents the final information, c represents the difference degree, V_tag represents the initial information, and ε represents the first mixing degree.
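The correction step is fully recoverable from the text: the final information is a convex combination of the initial information and the first mixing degree, weighted by the difference degree c. A direct transcription (with epsilon standing for the first mixing degree):

```python
def final_info(c, v_tag, epsilon):
    """V_src = c * V_tag + (1 - c) * epsilon.

    c      : difference degree of the target user's commenting habit
    v_tag  : initial information (the user's own comments on the target video)
    epsilon: first mixing degree (the comment environment of the target video)
    """
    return c * v_tag + (1 - c) * epsilon
```

At c = 1 the result trusts the user's own comments entirely (`final_info(1.0, 0.8, 0.3)` is 0.8); at c = 0 it falls back entirely on the crowd's mixing degree (`final_info(0.0, 0.8, 0.3)` is 0.3).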
In a second aspect, an embodiment of the present application provides an apparatus for obtaining user emotional tendency information, where the apparatus includes:
the initial information estimation module is used for estimating initial information representing the emotional tendency of a target user to a target video according to comments made by the target user to the target video;
the mixing degree determining module is used for determining a first mixing degree between the positive comment and the negative comment in the comments of the target video;
the difference degree estimation module is used for estimating the degree to which the target user's commenting on videos differs from other users' commenting, according to the comments posted by the target user on commented videos and the second mixing degree between positive and negative comments among the comments on each commented video, wherein a commented video is a video on which the target user has posted comments;
and the final information obtaining module is used for correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target user's emotional tendency toward the target video.
In an embodiment of the application, the initial information estimation module includes:
the total number obtaining unit is used for obtaining a first total number of positive comments and a second total number of negative comments among the comments posted by the target user on the target video;
and the initial information estimation unit is used for estimating initial information representing the target user's emotional tendency to the target video according to the first total quantity and the second total quantity.
In an embodiment of the application, the initial information estimation unit is specifically configured to:
calculating a difference between the first total number and the second total number;
estimating initial information representing the target user's emotional tendency to the target video according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein V_tag represents the initial information and diff represents the difference.
In an embodiment of the application, the mixing degree determining module includes:
the number counting unit is used for counting the total number of positive comments among the comments on the target video as a third total number, and counting the total number of negative comments among the comments on the target video as a fourth total number;
the information entropy calculation unit is used for calculating the information entropy between the third total number and the fourth total number;
and the mixing degree determining unit is used for determining a first mixing degree between the positive comments and the negative comments in the comments of the target video according to the third total number, the fourth total number and the information entropy.
In an embodiment of the application, the mixing degree determining unit is specifically configured to determine a first mixing degree between a positive comment and a negative comment in the comment of the target video according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein v_+ represents the third total number, v_- represents the fourth total number, H(X) represents the information entropy, and ε represents the first mixing degree.
In an embodiment of the application, the disparity estimation module includes:
a reference information obtaining unit, configured to obtain, for each reference comment, reference information reflecting the target user's emotional tendency toward the commented video the reference comment addresses, wherein the reference comments are: the comments posted by the target user on each commented video;
a difference degree estimation unit, configured to estimate the degree to which the target user's commenting on videos differs from other users' commenting, according to the difference between the reference information corresponding to each reference comment and the second mixing degree corresponding to the commented video the reference comment addresses, wherein the second mixing degree corresponding to a commented video is: the degree of mixing between positive and negative comments among the comments on that commented video.
In an embodiment of the application, the difference degree estimation unit is specifically configured to estimate the difference degree according to the following expression:
(Expression shown only as an image in the source; not reproduced.)
wherein c represents the difference degree, f() represents a preset normalization function, n represents the number of commented videos, i represents the serial number of a commented video, j represents the serial number of a comment posted by the target user on the i-th commented video, m_i represents the number of comments posted by the target user on the i-th commented video, (V_tag)_ij represents the reference information corresponding to the j-th comment posted by the target user on the i-th commented video, and ε_i represents the second mixing degree corresponding to the i-th commented video.
In an embodiment of the application, the final information obtaining module is specifically configured to obtain final information indicating an emotional tendency of the target user to the target video according to the following expression:
V_src = c·V_tag + (1 − c)·ε
wherein V_src represents the final information, c represents the difference degree, V_tag represents the initial information, and ε represents the first mixing degree.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor configured to implement the method steps of the first aspect when executing the program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method steps of the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the method steps of the first aspect described above.
As can be seen from the above, when the scheme provided by the embodiments of the application is applied to obtain user emotional tendency information, the result depends not only on the initial information estimated from the comments the target user has posted on the target video, but also on the first mixing degree. Because the first mixing degree characterizes how positive and negative comments are mixed among the comments on the target video, the scheme also takes into account the distribution of positive and negative comments posted by users at large, that is, the comment environment of the target video. In addition, the scheme considers the degree to which the target user's commenting differs from other users' commenting, that is, the target user's commenting habits relative to other users. Compared with the prior art, which obtains information representing a user's emotional tendency toward a watched video directly from the comments posted while watching, the reference information here is richer and more comprehensive, so the scheme provided by the embodiments of the application can improve the accuracy of the obtained information representing the user's emotional tendency toward the video.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic flowchart of a first method for obtaining user emotional tendency information according to an embodiment of the present disclosure;
Figs. 2a to 2f are exemplary diagrams of information correspondence relationships provided in an embodiment of the present application;
FIG. 2g is a schematic view of a line type according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a second method for obtaining user emotional tendency information according to an embodiment of the present application;
FIG. 4a is a flowchart illustrating a third method for obtaining emotional tendency information of a user according to an embodiment of the present application;
FIG. 4b is a graph illustrating entropy change according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a fourth method for obtaining emotional tendency information of a user according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a first apparatus for obtaining emotional tendency information of a user according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a second apparatus for obtaining emotional tendency information of a user according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a third apparatus for obtaining emotional tendency information of a user according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a fourth apparatus for obtaining emotional tendency information of users according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In order to solve the technical problem that the accuracy is low when information representing the emotional tendency of the user to the video is determined in the prior art, embodiments of the present application provide a method and an apparatus for obtaining the emotional tendency information of the user, and an electronic device.
In one embodiment of the present application, a method for obtaining user emotional tendency information is provided, the method comprising:
estimating initial information representing the target user's emotional tendency to the target video according to the comments made by the target user to the target video;
determining a first mixing degree between a positive comment and a negative comment in the comments for the target video;
estimating the degree to which the target user's commenting on videos differs from other users' commenting, according to the comments posted by the target user on commented videos and the second mixing degree between positive and negative comments among the comments on each commented video, wherein a commented video is a video on which the target user has posted comments;
and correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target user's emotional tendency toward the target video.
As can be seen from the above, when the scheme provided by this embodiment is applied to obtain user emotional tendency information, the result depends not only on the initial information estimated from the comments the target user has posted on the target video, but also on the first mixing degree. Because the first mixing degree characterizes how positive and negative comments are mixed among the comments on the target video, the distribution of positive and negative comments posted by users at large is also considered, that is, the comment environment of the target video. In addition, the degree to which the target user's commenting differs from other users' commenting is considered, that is, the target user's commenting habits relative to other users. In summary, compared with the prior art, which obtains information representing a user's emotional tendency toward a watched video directly from the comments posted while watching, the reference information is richer and more comprehensive, so applying this embodiment's scheme can improve the accuracy of the obtained information representing the user's emotional tendency toward the video.
The method, the apparatus and the electronic device for obtaining user emotional tendency information provided by the embodiments of the present application are respectively described below by specific embodiments.
Referring to fig. 1, a flowchart of a first method for obtaining emotional tendency information of a user is provided, which includes the following steps S101-S104.
S101: according to the comments of the target users to the target video, estimating initial information representing the target users' emotional tendency to the target video.
The videos mentioned in the embodiments of the present application may be various types of videos such as episode videos of a television show, variety shows, movies, and the like.
While watching a video, the user may post bullet-screen (barrage) comments on the viewed content, or post regular comments in a dedicated comment area during or after viewing. Whichever type of comment the user posts, it reflects the user's emotional tendency toward the watched video, for example, liking or disliking it.
In order to represent the emotional tendency of the user to the video intuitively and quantitatively, it can be expressed as different forms of information. For example, the possible emotional tendencies of the user to the video can be preset, and each possible tendency represented by a different value. For example, possible emotional tendencies include: very much like, like, neutral, dislike and very much dislike, which can be denoted by 5, 4, 3, 2 and 1, respectively. Alternatively, the user's fondness for the video may be represented as a degree value, such as 90 or 80.
In an embodiment of the application, when estimating the initial information representing the emotional tendency of the target user to the target video according to the comments the target user posted on it, the comments can be analyzed for preset words and/or emoticons, such as "will keep watching", "the acting is terrible", heart-shaped emoticons, or rose emoticons. Such words and emoticons express the emotion, that is, the emotional tendency, of the target user when watching the target video, and the initial information is then estimated from the analysis result. For example, when the analysis result contains "will keep watching", the target user may be considered to like the target video very much; if that tendency is represented by the value 5, the value of the initial information is 5. When the analysis result contains "the acting is terrible", the target user may be considered to dislike the target video very much; representing that tendency by the value 1, the value of the initial information is 1.
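As a concrete illustration of this keyword-matching idea, a minimal Python sketch follows. The word lists and the 1-5 score mapping are invented for illustration and are not part of the patent; a real system would also match emoticons and use richer rules.

```python
# Illustrative sketch (not the patent's implementation): estimate the initial
# information V_tag for one comment by substring-matching preset positive and
# negative phrases. Phrase lists and the 1..5 mapping are assumptions.
POSITIVE_WORDS = {"will keep watching", "awesome", "great"}
NEGATIVE_WORDS = {"the acting is terrible", "bad", "boring"}

def estimate_initial_info(comment: str) -> int:
    """Map a comment to a 1..5 emotional-tendency value (3 = neutral)."""
    pos = sum(w in comment for w in POSITIVE_WORDS)
    neg = sum(w in comment for w in NEGATIVE_WORDS)
    if pos > neg:
        return 5 if pos - neg >= 2 else 4
    if neg > pos:
        return 1 if neg - pos >= 2 else 2
    return 3
```

A comment matching several positive phrases maps to the strongest positive value, and a comment with no matched phrase stays neutral.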
Other ways may also be adopted to estimate the initial information; for the detailed process, refer to the scheme provided by the embodiment shown in fig. 3, which is not described in detail here.
S102: a first degree of mixing between positive and negative comments among the comments for the target video is determined.
In addition to the target user, other users may also comment on the target video, so there may be multiple comments on it. Among these comments there may be positive comments, indicating that the commenter likes the target video and holds a positive attitude toward its content, as well as negative comments, indicating dislike of the target video and a negative attitude toward its content. Together, these comments form the overall comment environment of the target video.
Positive and negative comments are mixed among all the comments on the target video. The more positive comments there are relative to negative ones, the more the emotional tendency of most users to the target video, and hence its overall comment environment, tends toward the positive direction; conversely, the more negative comments there are relative to positive ones, the more both tend toward the negative direction. It can therefore be considered that the distribution of positive and negative comments reflects the overall comment environment of the target video. In the embodiment of the application, this distribution is expressed by the mixing degree between positive and negative comments. When positive comments outweigh negative ones, for example when there are more positive comments than negative comments, a lower mixing degree means a purer overall comment environment for the target video, one that tends toward a positive comment environment.
When negative comments outweigh positive ones, for example when there are more negative comments than positive comments, a lower mixing degree likewise means a purer overall comment environment, but one that tends toward a negative comment environment. When the numbers of positive and negative comments are close to equal, the mixing degree approaches its maximum and the purity of the overall comment environment its minimum, so the overall comment environment tends toward a neutral one.
Specifically, the positive and negative comments used to determine the first mixing degree may be determined based on all comments on the target video, or on a subset of them. For example, the subset may be the comments posted on the target video within a preset time period, such as the last month or the last two months.
In addition, the first mixing degree may be determined from the numbers of positive and negative comments among the comments on the target video. For example, when the number of positive comments is greater than the number of negative comments, the ratio of the number of positive comments to the first comment total is used as the first mixing degree; when it is not, the negative of the ratio of the number of negative comments to the first comment total is used. The first comment total equals the sum of the numbers of positive and negative comments on the target video.
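The count-based determination just described can be sketched as follows; the zero-comment fallback is an added assumption, not stated in the text.

```python
# Sketch of the count-based first mixing degree: the signed ratio of the
# dominant side's comment count to the first comment total
# (positive count + negative count).
def first_mixing_degree(num_pos: int, num_neg: int) -> float:
    total = num_pos + num_neg  # the "first comment total"
    if total == 0:
        return 0.0  # assumption: no polar comments -> neutral environment
    if num_pos > num_neg:
        return num_pos / total
    return -(num_neg / total)
```

The sign thus indicates whether the overall comment environment leans positive or negative, and the magnitude how strongly the dominant side outweighs the other.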
Of course, the first mixing degree can also be determined in other ways, and the detailed process can be referred to the embodiment shown in the following fig. 4a, which will not be described in detail here.
When determining the first mixing degree, the total number of users who posted positive comments on the target video can be determined from the positive comments among the comments on the target video, as a fifth total number; and the total number of users who posted negative comments can be determined from the negative comments, as a sixth total number. The first mixing degree is then determined from the fifth and sixth total numbers.
The manner of determining the first mixing degree from the fifth and sixth total numbers may be similar to the manner of determining it from the numbers of positive and negative comments on the target video, with the fifth total number playing the role of the number of positive comments and the sixth total number that of the number of negative comments; the only difference is the specific meaning the numbers carry.
S103: and estimating the difference degree of the comment of the target user on the video relative to the comment of other users on the video according to the second mixing degree of the comment of the target user on the commented video and the positive comment and the negative comment in the comment of each commented video.
The commented videos are videos on which the target user has posted comments. Specifically, the commented videos may or may not include the target video; the application is not limited in this respect.
In addition to the target video, the target user may also have commented on other videos. Moreover, when commenting on videos, each user tends to show personalized characteristics shaped by factors such as personality and cognition. For example, some users have distinctive personalities, and the comments they post on videos may deviate from those of most users.
In view of the above, in the embodiment of the present application, when obtaining the emotional tendency information of the target user on the target video, the difference degree of the target user's comments relative to other users' is also considered. The comments the target user posted on a commented video reflect the target user's emotional tendency toward that video, such as a positive, negative or neutral tendency. On this basis, the emotional tendencies of the target user toward all commented videos can be combined to obtain the target user's habits, that is, personalized characteristics, in evaluating videos. In addition, similar to the first mixing degree, the second mixing degree of each commented video represents the distribution between positive and negative comments among the comments on that video, that is, its overall comment environment, such as a positive, negative or neutral comment environment. Therefore, the difference degree between the target user's evaluations of videos and other users' comments on videos can be estimated from the comments the target user posted on the commented videos and each second mixing degree.
Specifically, when the degree of difference is estimated, information representing the emotional tendency of the target user to each commented video may be estimated according to the comment issued by the target user to each commented video, and then the degree of difference may be estimated according to the estimated information and the second mixing degree.
The information representing the target user's emotional tendency to each commented video may be estimated in the same manner as the initial information is estimated in S101.
In addition, the second mixing degree can be obtained by referring to the manner of obtaining the first mixing degree, and the details are not repeated herein.
S104: and correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target video emotional tendency of the target user.
The initial information, estimated from the comments posted by the target user on the target video, considers only those comments; yet the actual emotional tendency of the target user to the target video can be influenced by factors such as the target user's habits in commenting on videos and the overall comment environment of the target video. The difference degree reflects how the target user's evaluations of videos differ from other users', that is, the target user's commenting habits, and the first mixing degree reflects the overall comment environment of the target video. In view of this, in this step, the initial information is corrected by using the difference degree and the first mixing degree to obtain final information representing the emotional tendency of the target user to the target video.
In an embodiment of the application, when the difference degree is large, the target user's habits in commenting on videos differ from those of most users, and the initial information can better represent the target user's real emotional tendency to the target video. In this case, when correcting the initial information based on the difference degree, only a small correction is made, for example an adjustment by a preset first adjustment amplitude, which may be set small.
Conversely, when the difference degree is small, the target user's habits in commenting on videos match those of most users, and the initial information alone cannot well represent the target user's real emotional tendency to the target video. In this case, the initial information can be corrected more strongly in the direction indicated by the overall comment environment of the target video, for example by a preset second adjustment amplitude, which may be set larger.
The initial information corrected based on the difference degree may then be corrected again by adding the first mixing degree to it.
The above is described only by way of example and does not limit the present application; other implementations of obtaining the final information may refer to the following embodiments.
As can be seen from the above, when user emotional tendency information is obtained by the scheme provided by the above embodiment, not only is the initial information considered, which represents the emotional tendency of the target user to the target video and is estimated from the comments the target user posted on it, but the first mixing degree is considered as well. Because the first mixing degree represents the degree of mixing between positive and negative comments among the comments on the target video, the distribution of positive and negative comments posted by users at large, that is, the comment environment of the target video, is also taken into account. In addition, the difference degree of the target user's comments relative to other users', that is, the target user's habits in commenting on videos, is considered. In summary, compared with the prior art, in which information representing a user's emotional tendency to a watched video is obtained directly from the comments posted while watching, the reference information here is richer and more comprehensive, so applying the scheme provided by this embodiment can improve the accuracy of the obtained information representing the user's emotional tendency to the video.
In an embodiment of the present application, the final information representing the emotional tendency of the target user to the target video may be obtained according to the following expression:
V_src = c·V_tag + (1 - c)·ε
wherein V_src represents the final information, c represents the difference degree, V_tag represents the initial information, and ε represents the first mixing degree.
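Writing E for the first mixing degree and c for the difference degree, the correction can be sketched as a one-line function; this is an illustration of the expression, not the patent's implementation.

```python
# Sketch of the correction expression: V_src = c * V_tag + (1 - c) * E,
# where V_tag is the initial information, c the difference degree (in [0, 1])
# and E the first mixing degree.
def correct(v_tag: float, c: float, e: float) -> float:
    return c * v_tag + (1.0 - c) * e
```

At c = 1 the result equals the initial information, and at c = 0 it equals the first mixing degree; for intermediate c, V_src always lies between V_tag and E.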
The relationships among V_src, V_tag and the first mixing degree ε when they lie in different value ranges are described below in conjunction with FIGS. 2a-2f.
In the first case, when 0 < ε < V_tag, then V_tag > V_src > ε > 0.
Taking V_tag = 9, V_src = 7 and ε = 3 as an example, the correspondence between these quantities and their values is shown in fig. 2a. In this case, the overall comment environment of the target video is a positive comment environment; under its influence, the information V_src representing the target user's emotional tendency to the target video is weakened relative to V_tag, that is, the actual emotional tendency of the target user to the target video is weaker than the tendency indicated by the initial information.
In the second case, when 0 < V_tag < ε, then ε > V_src > V_tag > 0.
Taking V_tag = 3, V_src = 7 and ε = 9 as an example, the correspondence is shown in fig. 2b. In this case, the overall comment environment of the target video is a positive comment environment; under its influence, V_src is enhanced relative to V_tag, that is, the actual emotional tendency of the target user to the target video is stronger than the tendency indicated by the initial information.
In the third case, when V_tag < ε < 0, then V_tag < V_src < ε < 0.
Taking V_tag = -9, V_src = -7 and ε = -3 as an example, the correspondence is shown in fig. 2c. In this case, the overall comment environment of the target video is a negative comment environment; under its influence, V_src is weakened relative to V_tag, that is, the actual emotional tendency of the target user to the target video is weaker than the tendency indicated by the initial information.
In the fourth case, when ε < V_tag < 0, then ε < V_src < V_tag < 0.
Taking V_tag = -3, V_src = -7 and ε = -9 as an example, the correspondence is shown in fig. 2d. In this case, the overall comment environment of the target video is a negative comment environment; under its influence, V_src is enhanced relative to V_tag, that is, the actual emotional tendency of the target user to the target video is stronger than the tendency indicated by the initial information.
In the fifth case, when ε < 0 < V_tag, then ε < V_src < V_tag.
Taking V_tag = 9, V_src = 7 or -2, and ε = -3 as an example, the correspondence is shown in fig. 2e. In this case, the overall comment environment of the target video is a negative comment environment; under its influence, V_src changes relative to V_tag, that is, the actual emotional tendency of the target user to the target video becomes either a negative tendency or a positive tendency weaker than the tendency indicated by the initial information.
In the sixth case, when V_tag < 0 < ε, then V_tag < V_src < ε.
Taking V_tag = -5, V_src = 3 or -3, and ε = 9 as an example, the correspondence is shown in fig. 2f. In this case, the overall comment environment of the target video is a positive comment environment; under its influence, V_src changes relative to V_tag, that is, the actual emotional tendency of the target user to the target video becomes either a positive tendency or a negative tendency weaker than the tendency indicated by the initial information.
In addition, assume that:
O_press = ε / (ε - V_tag)
by transforming the above expression for obtaining the final information, the following conclusions can be drawn.
Conclusion 1: when V istag<0<,0<c<OpressWhen, Vsrc>0。
Conclusion 2: when V istag<0<,c>OpressWhen, Vsrc<0。
Conclusion 3: when < 0 < Vtag,0<c<OpressWhen, Vsrc<0。
Conclusion 4: when < 0 < Vtag,c>OpressWhen, Vsrc>0。
Further, fig. 2g shows a schematic line graph of the final information obtained by correcting the initial information with the above expression, together with the initial information before correction, that is, line graphs of V_src and V_tag.
In fig. 2g, the solid line is the line graph of V_src and the dotted line is the line graph of V_tag; the horizontal axis represents the information indicating the target user's emotional tendency to the target video, and the vertical axis represents the number of comments posted on the target video.
As can be seen from fig. 2g, the line graph of V_tag shows no obvious regularity, while the line graph of V_src shows an approximately normal distribution. Therefore, the scheme provided by the embodiment of the application can obtain information that objectively represents the user's emotional tendency to the video, thereby improving the accuracy of the obtained information representing the user's emotional tendency to the video.
Referring to fig. 3, a flowchart of a second method for obtaining user emotional tendency information is provided. Compared with the embodiment shown in fig. 1, in this embodiment the above step S101 of estimating, according to the comments posted by the target user on the target video, initial information representing the target user's emotional tendency to the target video includes the following steps S101A and S101B.
S101A: and obtaining a first total number of positive comments and a second total number of negative comments in the comments, which are published by the target user, of the target video.
In one embodiment of the application, a positive comment may be a comment containing a positive word, where a positive word is a word indicating that the commented video is liked and regarded with a positive attitude, e.g., words along the lines of "great show", "like" or "awesome".
Correspondingly, a negative comment may be a comment containing a negative word, where a negative word is a word indicating that the commented video is disliked and regarded with a negative attitude, e.g., words along the lines of "bad" or "terrible show".
When a comment posted by the target user on the target video contains both positive and negative words, the numbers of positive and negative words can be counted: if there are more positive words than negative words, the comment can be regarded as positive; if there are fewer, as negative; and if the numbers are equal, as neutral.
In an embodiment of the application, word segmentation may be performed on a comment posted by the target user on the target video to obtain the words it contains, and each obtained word is then judged to be positive or negative. For example, a positive vocabulary and a negative vocabulary may be preset; after the words contained in the comment are obtained, each word is matched against the preset positive and negative vocabularies to determine whether it is a positive word or a negative word.
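A minimal sketch of this classification step follows; whitespace splitting stands in for a real word-segmentation tool, and the vocabularies are invented examples rather than the patent's word lists.

```python
# Sketch of S101A's per-comment classification: segment the comment into
# words (whitespace split here; a real system would use a Chinese word
# segmenter) and compare counts of preset positive/negative words.
POSITIVE_VOCAB = {"great", "like", "awesome"}
NEGATIVE_VOCAB = {"bad", "terrible", "boring"}

def classify(comment: str) -> str:
    """Return 'positive', 'negative' or 'neutral' per the tie rule above."""
    words = comment.lower().split()
    pos = sum(w in POSITIVE_VOCAB for w in words)
    neg = sum(w in NEGATIVE_VOCAB for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

The first and second total numbers are then just the counts of comments classified "positive" and "negative" among the target user's comments on the target video.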
Specifically, since the target user may have posted more than one comment on the target video, the first and second total numbers may be obtained by counting all comments the target user posted on the target video, or only some of them, for example the comments posted in the last year, half year or three months.
S101B: and estimating initial information representing the target user's emotional tendency to the target video according to the first total quantity and the second total quantity.
When a user comments on a video, the stronger the positive emotional tendency, for example liking the video or supporting the content it expresses, the more positive comments the user posts; conversely, the stronger the negative emotional tendency, for example disliking the video or resisting its content, the more negative comments the user posts. In view of this, the first and second total numbers reflect, to a certain extent, the emotional tendency of the target user to the target video.
In an embodiment of the present application, a difference between the first total number and the second total number may be calculated, and then initial information representing an emotional tendency of the target user to the target video may be estimated according to the following expression:
[expression not reproduced: an image in the original defines V_tag as a function of the difference diff]
where V_tag represents the initial information and diff represents the difference.
In another embodiment of the present application, the ratio between the difference and the second comment total may be used as the initial information, where the second comment total equals the sum of the first total number and the second total number.
In still another embodiment of the present application, when the first total number is greater than or equal to the second total number, the ratio of the first total number to the second comment total may be used as the initial information; when the first total number is less than the second total number, the negative of the ratio of the second total number to the second comment total is used as the initial information.
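The two ratio-based embodiments can be sketched as follows; the zero-total fallback is an added assumption for the case where the target user posted no polar comments.

```python
# Sketches of the ratio-based embodiments for the initial information V_tag:
# (1) the difference over the second comment total, and (2) the signed
# dominant-side ratio (positive when first_total >= second_total).
def initial_info_ratio(first_total: int, second_total: int) -> float:
    total = first_total + second_total  # the "second comment total"
    if total == 0:
        return 0.0  # assumption: no polar comments -> neutral
    return (first_total - second_total) / total

def initial_info_dominant(first_total: int, second_total: int) -> float:
    total = first_total + second_total
    if total == 0:
        return 0.0
    if first_total >= second_total:
        return first_total / total
    return -(second_total / total)
```

Both variants map more positive comments to larger values and more negative comments to smaller (or negative) values, differing only in scale.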
It should be noted that the present application is described only by way of example, and the specific implementation of the estimated initial information is not limited.
As can be seen from the above, in the scheme provided by this embodiment, the information representing the target user's emotional tendency to the target video is estimated from the total numbers of positive and negative comments the target user posted on it. The more positive comments the target user posts when commenting on the target video, the more positive the target user's attitude toward the content it expresses and the stronger the positive emotional tendency; the more negative comments, the more negative the attitude and the stronger the negative tendency. The initial information representing the target user's emotional tendency to the target video can therefore be estimated accurately by applying the scheme provided by this embodiment.
Referring to fig. 4a, a flowchart of a third method for obtaining user emotional tendency information is provided. Compared with the embodiment shown in fig. 1, in this embodiment the above step S102 of determining a first mixing degree between positive and negative comments among the comments on the target video includes the following steps S102A-S102D.
S102A: and counting the total number of the forward comments in the comments aiming at the target video as a third total number.
S102B: and counting the total number of negative comments in the comments aiming at the target video as a fourth total number.
Because both the target user and other users may comment on the target video, and different users have different interests and preferences, both positive and negative comments on the target video may exist among its comments.
The following describes the above-mentioned steps S102A and S102B together.
Specifically, for a comment on the target video, the numbers of positive and negative words in the comment can be counted separately: when there are more positive words than negative words, the comment is regarded as positive; when there are more negative words than positive words, it is regarded as negative.
Of course, whether a comment is positive or negative can also be judged by detecting preset vocabulary: if a comment contains a preset positive word, it is regarded as positive; if it contains a preset negative word, it is regarded as negative.
It should be noted that the present embodiment does not limit the execution sequence of S102A and S102B, and S102A may be executed before S102B, after S102B, or in synchronization with S102B.
S102C: and calculating the information entropy between the third total number and the fourth total number.
Specifically, the above information entropy can be calculated according to the following expression.
H(X) = -p·log2(p) - (1 - p)·log2(1 - p), with p = v+ / (v+ + v-)
where v+ denotes the third total number, v- denotes the fourth total number, and H(X) denotes the information entropy.
Referring to fig. 4b, a graph shows how the information entropy calculated by the above expression varies with p, where p = v+ / (v+ + v-).
In fig. 4b, the horizontal axis represents the value of p and the vertical axis the value of the information entropy. As can be seen from the figure, the information entropy increases as p approaches 0.5 and reaches its maximum at p = 0.5; as p moves away from 0.5, the information entropy gradually decreases. From the expression for p, values of p between 0 and 0.5 indicate that negative comments outnumber positive ones among the comments on the target video: the closer p is to 0, the more negative and the fewer positive comments there are, and the lower the mixing degree between them. When p reaches 0.5, the numbers of positive and negative comments are equal and the mixing degree is at its highest. Values of p between 0.5 and 1 indicate that positive comments outnumber negative ones: the closer p is to 1, the more positive and the fewer negative comments there are, and the lower the mixing degree.
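Assuming the standard binary information entropy, the behavior described for fig. 4b can be verified numerically:

```python
import math

# Binary information entropy of the positive-comment share p; reproduces the
# behavior described for fig. 4b: maximum at p = 0.5, decreasing toward 0 and 1.
def entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0  # limit convention: 0 * log2(0) = 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

For example, entropy(0.5) is exactly 1.0, while entropy(0.9) is roughly 0.47, matching the fall-off away from the balanced point.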
S102D: determining a first mixing degree between the positive comments and the negative comments in the comments for the target video according to the third total number, the fourth total number, and the information entropy.
As can be seen from fig. 4b above, the information entropy varies non-linearly with p. When the number of positive comments is smaller than the number of negative comments, the negative comments dominate the overall comment environment of the target video; when the number of positive comments is larger than the number of negative comments, the positive comments dominate.
In view of the above, in one embodiment of the present application, a first degree of mixing between positive and negative comments in the comments for the target video may be determined according to the following expression:
(expression rendered as an image in the original publication; it determines the first mixing degree ε from the third total number v+, the fourth total number v-, and the information entropy H(X))
wherein ε represents the first mixing degree.
It can be seen that when the number of positive comments exceeds the number of negative comments, that is, when the third total number is greater than the fourth total number, the first mixing degree is calculated with the third total number as the reference. In this case, the larger the third total number and the smaller the fourth total number, the smaller the information entropy and hence the smaller the first mixing degree; the purer the overall comment environment of the target video, the more dominant the positive comments are, and the greater their influence on the overall comment environment.
When the number of negative comments exceeds the number of positive comments, that is, when the fourth total number is greater than the third total number, the first mixing degree is calculated with the fourth total number as the reference. In this case, the larger the fourth total number and the smaller the third total number, the smaller the information entropy and hence the smaller the first mixing degree; the purer the overall comment environment of the target video, the more dominant the negative comments are, and the greater their influence on the overall comment environment.
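The monotone behavior described above, that a more one-sided comment environment yields a smaller entropy, can be illustrated with counts directly. The sketch below is an assumption-laden illustration (the function name `entropy_from_counts` is hypothetical, and it shows only the entropy component, since the exact expression for the first mixing degree is not reproduced in this text):

```python
import math

def entropy_from_counts(v_pos, v_neg):
    """Information entropy computed from the third total number (v_pos,
    positive comments) and the fourth total number (v_neg, negative
    comments) of the comments for a video."""
    total = v_pos + v_neg
    if total == 0 or v_pos == 0 or v_neg == 0:
        # A completely one-sided (or empty) comment set carries no mixing.
        return 0.0
    p = v_pos / total  # proportion of positive comments
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# As the counts become more imbalanced, the entropy shrinks, and hence
# (per S102D) the first mixing degree shrinks with it.
```

For example, 50/50 positive-to-negative comments give the maximum entropy, while 99/1 gives an entropy close to zero, reflecting a nearly pure comment environment.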
As can be seen from the above, in the scheme provided by this embodiment, the mixing degree between positive and negative comments is determined from the total number of positive comments and the total number of negative comments among the comments for the target video. Together, these two totals reflect the distribution of positive and negative comments in the comments of the target video and characterize its overall comment environment, so the scheme provided by this embodiment can determine the mixing degree between positive and negative comments accurately.
Referring to fig. 5, a schematic flow chart of a fourth method for obtaining user emotional tendency information is provided. Compared with the foregoing embodiment, in this embodiment, the step S103 of estimating the difference degree of the target user's comments on videos relative to other users' comments, according to the comments made by the target user on commented videos and the second mixing degree between positive and negative comments in the comments for each commented video, includes the following steps S103A and S103B.
S103A: reference information that is reflected by each reference comment and represents the commented video emotional tendency targeted by the target user for the reference comment is obtained.
In order to distinguish from the initial information, in the embodiment of the present application, information that is reflected by each reference comment and that represents a commented video emotional tendency targeted by a target user for the reference comment is referred to as reference information. In addition, the reference comments are: the comments posted by the target user for each of the commented videos.
S103B: estimating the difference degree of the target user's comments on videos relative to other users' comments, according to the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the commented video targeted by that reference information.
Here, the second mixing degree corresponding to a commented video is the degree of mixing between positive and negative comments in the comments for that commented video.
In an embodiment of the application, the difference degree of the comment of the target user on the video relative to the comment of other users on the video can be estimated according to the following expression:
(expression rendered as an image in the original publication; it computes the difference degree c from the reference information and the second mixing degrees)
wherein c represents the above-mentioned difference degree, f() represents a preset normalization function, n represents the number of commented videos, i represents the serial number of a commented video, j represents the serial number of a comment issued by the target user on a commented video, m_i represents the number of comments issued by the target user on the ith commented video, (V_tag)_ij represents the reference information corresponding to the jth comment issued by the target user on the ith commented video, and ε_i represents the second mixing degree corresponding to the ith commented video.
For example, the above f () can be a max-min normalization function.
After normalization by the function f(), the value range of c can be mapped onto a preset interval, for example the interval [0, 1] or the interval [0, 10].
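A max-min normalization of the kind mentioned above can be sketched as follows. This is a generic illustration (the function name and parameter names are not taken from the publication): it maps a value from a known source range onto a preset target interval.

```python
def max_min_normalize(value, lo, hi, target_lo=0.0, target_hi=1.0):
    """Max-min normalization: map value from [lo, hi] onto
    [target_lo, target_hi]."""
    if hi == lo:
        # Degenerate source range: everything maps to the interval start.
        return target_lo
    scaled = (value - lo) / (hi - lo)          # scale into [0, 1]
    return target_lo + scaled * (target_hi - target_lo)
```

For example, `max_min_normalize(5, 0, 10)` maps 5 from [0, 10] to 0.5 in [0, 1], and passing `target_hi=10` would map it onto the interval [0, 10] instead.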
In addition, a statistic of the differences, such as their variance, maximum, or minimum, may be calculated and used as the difference degree.
Because the reference comments are comments made by the target user on the commented videos, the reference information corresponding to each reference comment reflects the target user's emotional tendency when commenting on that commented video. Since the commented videos comprise a plurality of videos, the reference information collectively reflects the target user's habits when commenting on videos, while each second mixing degree reflects the overall comment environment of the corresponding commented video, that is, how most users comment on it. Therefore, from the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the commented video it targets, the difference degree of the target user's comments on videos relative to other users' comments can be accurately estimated.
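Since the exact expression for c appears only as an image in the original publication, the sketch below is merely an illustrative stand-in, under the stated assumptions that the per-comment difference is the absolute gap between (V_tag)_ij and ε_i and that the aggregating statistic is the mean (the text above notes that variance, maximum, or minimum could be used instead). All names are hypothetical.

```python
def estimate_difference_degree(reference_info, second_mixing, normalize):
    """Illustrative estimate of the difference degree c.

    reference_info: list over commented videos; reference_info[i][j] is
        the reference information (V_tag)_ij for the jth comment the
        target user posted on the ith commented video.
    second_mixing: second_mixing[i] is the second mixing degree ε_i of
        the ith commented video.
    normalize: the preset normalization function f().
    """
    diffs = [
        abs(v_tag - second_mixing[i])
        for i, comments in enumerate(reference_info)
        for v_tag in comments
    ]
    if not diffs:
        return 0.0
    mean_diff = sum(diffs) / len(diffs)  # aggregate statistic (assumed: mean)
    return normalize(mean_diff)
```

Intuitively, a user whose comments closely track each video's overall comment environment yields a small difference degree, while a user who consistently comments against the prevailing environment yields a large one.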
Corresponding to the user emotional tendency information obtaining method, the embodiment of the application also provides a user emotional tendency information obtaining device.
Referring to fig. 6, there is provided a schematic structural diagram of a first user emotional tendency information obtaining apparatus, where the apparatus includes:
the initial information estimation module 601 is configured to estimate initial information representing an emotional tendency of a target user to a target video according to a comment issued by the target user to the target video;
a blending degree determining module 602, configured to determine a first blending degree between a positive comment and a negative comment in the comment for the target video;
the difference degree estimation module 603 is configured to estimate, according to the comments made by the target user on commented videos and the second mixing degree between positive and negative comments in the comments for each commented video, the difference degree of the target user's comments on videos relative to other users' comments, where a commented video is a video on which the target user has posted comments;
a final information obtaining module 604, configured to modify the initial information by using the first mixing degree and the difference degree, so as to obtain final information indicating an emotional tendency of the target user to the target video.
In an embodiment of the present application, the final information obtaining module 604 is specifically configured to obtain the final information indicating the target user's emotional tendency to the target video according to the following expression:
V_src = c·V_tag + (1 - c)·ε
wherein V_src represents the final information, c represents the difference degree, V_tag represents the initial information, and ε represents the first mixing degree.
As can be seen from the above, when obtaining user emotional tendency information, the scheme provided by the above embodiment considers not only the initial information representing the target user's emotional tendency toward the target video, estimated from the comments the target user posted on it, but also the first mixing degree. Because the first mixing degree represents the degree of mixing between positive and negative comments among the comments for the target video, the scheme also takes into account the distribution of positive and negative comments posted by most users, that is, the comment environment of the target video. In addition, the scheme considers the difference degree of the target user's comments on videos relative to other users' comments, that is, the target user's commenting habits relative to other users. In summary, compared with the prior art, which obtains information representing a user's emotional tendency toward a watched video directly from the comments the user posts while watching it, the reference information here is richer and more comprehensive, so applying the scheme provided by this embodiment improves the accuracy of the obtained information representing the user's emotional tendency toward the video.
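The correction step performed by the final information obtaining module can be sketched directly from the expression above, assuming (as the surrounding text indicates) that the symbol omitted from the rendered formula is the first mixing degree ε. The function name is hypothetical.

```python
def correct_initial_info(v_tag, mixing_degree, difference_degree):
    """Correct the initial information with the first mixing degree and
    the difference degree: V_src = c * V_tag + (1 - c) * epsilon."""
    c = difference_degree
    return c * v_tag + (1 - c) * mixing_degree

# c close to 1: the user's own tendency dominates the final information.
# c close to 0: the overall comment environment dominates instead.
```

This makes the role of c concrete: it interpolates between trusting the target user's own commenting signal (V_tag) and falling back on the video's overall comment environment (ε).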
Referring to fig. 7, a schematic structural diagram of a second user emotional tendency information obtaining apparatus is provided, and compared with the foregoing embodiment shown in fig. 6, in this embodiment, the initial information estimating module 601 includes:
the total number obtaining unit 601A is configured to obtain a first total number of positive comments and a second total number of negative comments in the comments, which are posted by the target user, of the target video;
an initial information estimating unit 601B, configured to estimate initial information indicating an emotional tendency of the target user to the target video according to the first total number and the second total number.
In an embodiment of the application, the initial information estimating unit 601B is specifically configured to:
calculating a difference between the first total number and the second total number;
estimating initial information representing the target user's emotional tendency to the target video according to the following expression:
(expression rendered as an image in the original publication; it computes the initial information V_tag from the difference diff)
wherein V_tag indicates the initial information and diff indicates the difference.
As can be seen from the above, the scheme provided by the above embodiment estimates the information representing the target user's emotional tendency toward the target video from the total number of positive comments and the total number of negative comments the target user posted on it. The more positive comments the target user posts when commenting on the target video, the more positive the user's attitude toward the content it expresses and the stronger the positive emotional tendency; the more negative comments posted, the more negative the attitude and the stronger the negative emotional tendency. The scheme provided by this embodiment can therefore accurately estimate the initial information indicating the target user's emotional tendency toward the target video.
Referring to fig. 8, a schematic structural diagram of a third apparatus for obtaining emotional tendency information of a user is provided, and compared with the foregoing embodiment shown in fig. 6, in this embodiment, the blending degree determining module 602 includes:
a number counting unit 602A, configured to count, as a third total number, a total number of forward comments in the comments for the target video; counting the total number of negative comments in the comments of the target video as a fourth total number;
an information entropy calculating unit 602B, configured to calculate information entropy between the third total number and a fourth total number;
a mixing degree determining unit 602C, configured to determine, according to the third total number, the fourth total number, and the information entropy, a first mixing degree between a positive comment and a negative comment in the comments for the target video.
In an embodiment of the present application, the mixing degree determining unit 602C is specifically configured to determine a first mixing degree between a positive comment and a negative comment in the comment for the target video according to the following expression:
(expression rendered as an image in the original publication; it determines the first mixing degree ε from v+, v-, and H(X))
wherein v+ represents the third total number, v- represents the fourth total number, H(X) represents the information entropy, and ε represents the first mixing degree.
As can be seen from the above, in the scheme provided by the above embodiment, the mixing degree between positive and negative comments is determined from the total number of positive comments and the total number of negative comments among the comments for the target video. Together, these two totals reflect the distribution of positive and negative comments in the comments of the target video and characterize its overall comment environment, so the scheme provided by this embodiment can determine the mixing degree between positive and negative comments accurately.
Referring to fig. 9, a schematic structural diagram of a fourth user emotional tendency information obtaining apparatus is provided, and compared with the foregoing embodiment shown in fig. 6, in this embodiment, the difference degree estimation module 603 includes:
a reference information obtaining unit 603A, configured to obtain reference information that is reflected by each reference comment and represents the target user's emotional tendency toward the commented video targeted by that reference comment, where the reference comments are: the comments made by the target user on each commented video;
the difference degree estimation unit 603B is configured to estimate, according to the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the commented video targeted by that reference information, the difference degree of the target user's comments on videos relative to other users' comments, where the second mixing degree corresponding to a commented video is: the degree of mixing between positive and negative comments in the comments for that commented video.
In an embodiment of the application, the difference degree estimation unit 603B is specifically configured to estimate the difference degree of the comment made by the target user on the video relative to the comment made by other users on the video according to the following expression:
(expression rendered as an image in the original publication; it computes the difference degree c from the reference information and the second mixing degrees)
wherein c represents the difference degree, f() represents a preset normalization function, n represents the number of commented videos, i represents the serial number of a commented video, j represents the serial number of a comment issued by the target user on a commented video, m_i represents the number of comments issued by the target user on the ith commented video, (V_tag)_ij represents the reference information corresponding to the jth comment issued by the target user on the ith commented video, and ε_i represents the second mixing degree corresponding to the ith commented video.
Because the reference comments are comments made by the target user on the commented videos, the reference information corresponding to each reference comment reflects the target user's emotional tendency when commenting on that commented video. Since the commented videos comprise a plurality of videos, the reference information collectively reflects the target user's habits when commenting on videos, while each second mixing degree reflects the overall comment environment of the corresponding commented video, that is, how most users comment on it. Therefore, from the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the commented video it targets, the difference degree of the target user's comments on videos relative to other users' comments can be accurately estimated.
Corresponding to the user emotional tendency information obtaining method, the embodiment of the application also provides electronic equipment.
Referring to fig. 10, an embodiment of the present application provides a schematic structural diagram of an electronic device, which includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with one another through the communication bus 1004,
a memory 1003 for storing a computer program;
the processor 1001 is configured to implement the method for obtaining user emotional tendency information according to the embodiment of the present application when executing the program stored in the memory 1003.
It should be noted that the specific implementation example of the user emotional tendency information obtaining method implemented by the processor 1001 executing the program stored in the memory 1003 is the same as the embodiment mentioned in the previous method embodiment, and is not described again here.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component.
Corresponding to the method for obtaining the user emotional tendency information, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method for obtaining the user emotional tendency information provided by the embodiment of the present application is implemented.
Corresponding to the method for obtaining the user emotional tendency information, the embodiment of the application also provides a computer program product containing instructions, and when the computer program product runs on a computer, the computer is enabled to execute the method for obtaining the user emotional tendency information provided by the embodiment of the application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and in relation to them, reference may be made to the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (11)

1. A method for obtaining emotional tendency information of a user is characterized by comprising the following steps:
estimating initial information representing the emotional tendency of a target user to a target video according to comments made by the target user to the target video;
determining a first degree of mixing between positive and negative comments among the comments for the target video;
estimating the difference degree of the comment of the target user on the video relative to the comment of other users on the video according to the comment of the target user on the commented video and the second mixing degree between the positive comment and the negative comment in the comment of each commented video, wherein the commented video is the video commented by the target user;
and correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target video emotional tendency of the target user.
2. The method of claim 1, wherein the pre-estimating initial information representing the emotional tendency of the target user to the target video according to the comment made by the target user to the target video comprises:
obtaining a first total number of positive comments and a second total number of negative comments in the comments, which are published by the target user, of the target video;
and estimating initial information representing the target user's emotional tendency to the target video according to the first total quantity and the second total quantity.
3. The method of claim 2, wherein the pre-estimating initial information representing the emotional tendency of the target user to the target video according to the first total number and the second total number comprises:
calculating a difference between the first total number and the second total number;
estimating initial information representing the target user's emotional tendency to the target video according to the following expression:
(expression rendered as an image in the original publication; it computes the initial information V_tag from the difference diff)
wherein V_tag indicates the initial information and diff indicates the difference.
4. The method of claim 1, wherein the determining a first degree of mixing between positive and negative comments among the comments for the target video comprises:
counting the total number of the forward comments in the comments aiming at the target video as a third total number;
counting the total number of negative comments in the comments of the target video as a fourth total number;
calculating information entropy between the third total number and the fourth total number;
and determining a first mixing degree between the positive comments and the negative comments in the comments of the target video according to the third total number, the fourth total number and the information entropy.
5. The method of claim 4, wherein determining the first degree of mixing between positive and negative comments in the comments for the target video according to the third and fourth total numbers and the entropy comprises:
determining a first degree of mixing between positive and negative comments in the comments for the target video according to the following expression:
(expression rendered as an image in the original publication; it determines the first mixing degree ε from v+, v-, and H(X))
wherein v+ represents the third total number, v- represents the fourth total number, H(X) represents the information entropy, and ε represents the first mixing degree.
6. The method of claim 1, wherein estimating the difference degree of the target user's comments on videos relative to other users' comments, according to the comments made by the target user on commented videos and the second mixing degree between positive and negative comments in the comments for each commented video, comprises:
obtaining reference information which is reflected by each reference comment and represents the commented video emotional tendency of the target user to the reference comment, wherein the reference comment is as follows: the comments made by the target user to each commented video;
estimating the difference degree of the comment made by the target user to the video relative to the comment made by other users to the video according to the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the commented video to which the reference information is directed, wherein the second mixing degree corresponding to the commented video is as follows: the degree of mixing between positive and negative comments in the comments for the commented video.
7. The method of claim 6, wherein estimating the degree of difference of the comment made to the video by the target user relative to the comment made to the video by other users according to the difference of the reference information corresponding to each reference comment relative to the second mixing degree corresponding to the comment made to the video by the reference information comprises:
estimating the difference degree of the video commented by the target user relative to the video commented by other users according to the following expression:
(expression rendered as an image in the original publication; it computes the difference degree c from the reference information and the second mixing degrees)
wherein c represents the difference degree, f() represents a preset normalization function, n represents the number of commented videos, i represents the serial number of a commented video, j represents the serial number of a comment issued by the target user on a commented video, m_i represents the number of comments issued by the target user on the ith commented video, (V_tag)_ij represents the reference information corresponding to the jth comment issued by the target user on the ith commented video, and ε_i represents the second mixing degree corresponding to the ith commented video.
8. The method according to any one of claims 1-7, wherein said modifying the initial information with the first blending degree and the difference degree to obtain final information representing the emotional tendency of the target user to the target video comprises:
and acquiring final information representing the emotional tendency of the target user to the target video according to the following expression:
V_src = c·V_tag + (1 - c)·ε
wherein V_src represents the final information, c represents the difference degree, V_tag represents the initial information, and ε represents the first mixing degree.
9. An apparatus for obtaining emotional tendency information of a user, the apparatus comprising:
the initial information estimation module is used for estimating initial information representing the emotional tendency of a target user to a target video according to comments made by the target user to the target video;
the mixing degree determining module is used for determining a first mixing degree between the positive comment and the negative comment in the comments of the target video;
the difference degree estimation module is used for estimating, according to the comments made by the target user on commented videos and the second mixing degree between positive and negative comments in the comments for each commented video, the difference degree of the target user's comments on videos relative to other users' comments, wherein a commented video is a video on which the target user has posted comments;
and the final information obtaining module is used for correcting the initial information by adopting the first mixing degree and the difference degree to obtain final information representing the target video emotional tendency of the target user.
10. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 8 when executing a program stored in the memory.
11. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-8.
CN202010406415.4A 2020-05-14 2020-05-14 User emotional tendency information obtaining method and device and electronic equipment Active CN111565322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010406415.4A CN111565322B (en) 2020-05-14 2020-05-14 User emotional tendency information obtaining method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111565322A true CN111565322A (en) 2020-08-21
CN111565322B CN111565322B (en) 2022-03-04

Family

ID=72071002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010406415.4A Active CN111565322B (en) 2020-05-14 2020-05-14 User emotional tendency information obtaining method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111565322B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130013685A1 (en) * 2011-04-04 2013-01-10 Bagooba, Inc. Social Networking Environment with Representation of a Composite Emotional Condition for a User and/or Group of Users
CN104008091A (en) * 2014-05-26 2014-08-27 上海大学 Sentiment value based web text sentiment analysis method
CN106295702A (en) * 2016-08-15 2017-01-04 西北工业大学 A kind of social platform user classification method analyzed based on individual affective behavior
CN106599063A (en) * 2016-11-15 2017-04-26 武汉璞华大数据技术有限公司 Fine-grained viewpoint mining method based on theme emotion semantic extraction
CN107491531A (en) * 2017-08-18 2017-12-19 华南师范大学 Chinese network comment sensibility classification method based on integrated study framework
CN109146625A (en) * 2018-08-14 2019-01-04 中山大学 A kind of multi version App more the new evaluating method and system based on content
CN109271512A (en) * 2018-08-29 2019-01-25 中国平安保险(集团)股份有限公司 The sentiment analysis method, apparatus and storage medium of public sentiment comment information
CN110232181A (en) * 2018-03-06 2019-09-13 优酷网络技术(北京)有限公司 Comment and analysis method and device
CN110516249A (en) * 2019-08-29 2019-11-29 新华三信息安全技术有限公司 A kind of Sentiment orientation information obtaining method and device
CN110569495A (en) * 2018-06-05 2019-12-13 北京四维图新科技股份有限公司 Emotional tendency classification method and device based on user comments and storage medium
CN110825876A (en) * 2019-11-07 2020-02-21 上海德拓信息技术股份有限公司 Movie comment viewpoint emotion tendency analysis method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Yihao et al.: "A Hybrid Recommendation Method Based on Deep Sentiment Analysis of User Comments and Multi-view Collaborative Fusion", Chinese Journal of Computers *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114490952A (en) * 2022-04-15 2022-05-13 广汽埃安新能源汽车有限公司 Text emotion analysis method and device, electronic equipment and storage medium
CN114490952B (en) * 2022-04-15 2022-07-15 广汽埃安新能源汽车有限公司 Text emotion analysis method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111565322B (en) 2022-03-04

Similar Documents

Publication Publication Date Title
US11416536B2 (en) Content recommendation system
CN108322829B (en) Personalized anchor recommendation method and device and electronic equipment
US9967628B2 (en) Rating videos based on parental feedback
CN108875022B (en) Video recommendation method and device
Gunawardana et al. Evaluating recommender systems
CN110929052A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN108989889B (en) Video playing amount prediction method and device and electronic equipment
US10592074B2 (en) Systems and methods for analyzing visual content items
US20110145040A1 (en) Content recommendation
US20160379123A1 (en) Entertainment Prediction Favorites
CN113656681B (en) Object evaluation method, device, equipment and storage medium
CN111107416B (en) Bullet screen shielding method and device and electronic equipment
Laiche et al. When machine learning algorithms meet user engagement parameters to predict video QoE
CN112579913A (en) Video recommendation method, device, equipment and computer-readable storage medium
CN111062527A (en) Video collection flow prediction method and device
CN111225246B (en) Video recommendation method and device and electronic equipment
CN111565322B (en) User emotional tendency information obtaining method and device and electronic equipment
CN111597380B (en) Recommended video determining method and device, electronic equipment and storage medium
CN110087103A (en) A kind of video recommendation system, method, apparatus and computer
CN113515696A (en) Recommendation method and device, electronic equipment and storage medium
CN112733014A (en) Recommendation method, device, equipment and storage medium
CN109168044B (en) Method and device for determining video characteristics
CN110020129B (en) Click rate correction method, prediction method, device, computing equipment and storage medium
TW202335511A (en) System, method and computer-readable medium for recommending streaming data
CN110309361B (en) Video scoring determination method, recommendation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant