CN105959737A - Video evaluation method and device based on user emotion recognition - Google Patents
- Publication number
- CN105959737A CN105959737A CN201610509864.5A CN201610509864A CN105959737A CN 105959737 A CN105959737 A CN 105959737A CN 201610509864 A CN201610509864 A CN 201610509864A CN 105959737 A CN105959737 A CN 105959737A
- Authority
- CN
- China
- Prior art keywords
- emotion
- video
- sympathetic response
- mark
- response value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/251—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/252—Processing of multiple end-users' preferences to derive collaborative data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/437—Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/654—Transmission by server directed to the client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6581—Reference data, e.g. a movie identifier for ordering a movie or a product identifier in a home shopping application
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a video evaluation method and device based on user emotion recognition. The method comprises the following steps: receiving emotion data, sent by clients, of different users watching a video, the emotion data comprising a plurality of emotion identifiers and corresponding times; according to the emotion data, obtaining the emotion identifier that produces resonance at each preset moment of the video and calculating a corresponding first resonance value; and sending the resonant emotion identifier, the first resonance value and the corresponding moment to the client. With the disclosed method and device, users' indirect evaluations of video content can be obtained actively; with a simple setup, the breadth of evaluation-data collection is markedly improved, and audience feedback at each moment of the video is counted accurately, providing other users with more accurate reference data. The data can also serve as an important basis for video content recommendation, while giving video producers more detailed user-emotion feedback information.
Description
Technical field
The present invention relates to the field of computer application technology, and in particular to a video evaluation method and device based on user emotion recognition.
Background technology
With the development of the Internet, the number of videos online has grown explosively, and users often decide whether to watch a video only after screening by personal preference and by other users' ratings and reviews. At present, ratings of online videos are all very coarse-grained and rely entirely on users' active operation. The precision of ratings and the simplicity of the rating process are critical for improving user experience and for discovering potential commercial value.
As a rule, a user's expression changes with the video content while watching: the expression, and the emotion it reveals, may differ at every moment, including happiness, sadness, surprise, fear, anger or disgust. These emotions also express the user's attitude toward the video content and reveal how much the user likes the video.
In the prior art, the user must log in and then score the video manually on the viewing interface, and the score is a single, general overall assessment. Existing video evaluation methods rely on active user operation, and the range over which evaluation information can be collected is limited in practice; video content providers therefore need to obtain users' evaluations actively.
The information disclosed in this Background section is only intended to increase general understanding of the background of the present invention, and should not be taken as an acknowledgement or any form of suggestion that this information constitutes prior art well known to a person skilled in the art.
Summary of the invention
It is an object of the present invention to provide a video evaluation method and device based on user emotion recognition, thereby overcoming the shortcoming of the prior art that feedback information about a video can only be obtained by collecting users' active evaluations of the video.
A video evaluation method based on user emotion recognition provided by the present invention comprises:
receiving emotion data, sent by clients, of different users watching a video, the emotion data comprising a plurality of emotion identifiers and corresponding times;
according to the emotion data, obtaining the emotion identifier that produces resonance at each preset moment of the video and calculating a corresponding first resonance value; and
sending the resonant emotion identifier, the first resonance value and the corresponding moment to the client.
Preferably, the above technical solution further comprises:
calculating a second resonance value over a preset period of the video according to the number of preset moments and the first resonance values; and
generating user feedback information for the video according to a preset feedback-information generation method and the second resonance value.
Preferably, the above technical solution further comprises:
obtaining the emotion identifier that produces resonance most often within the preset period of the video, and classifying the video according to that emotion identifier.
Preferably, in the above technical solution, obtaining the resonant emotion identifier at each preset moment of the video according to the emotion data and calculating the corresponding first resonance value comprises:
calculating, at each preset moment, the proportion of each emotion identifier among all emotion identifiers at the current moment; and
taking the emotion identifier with the largest proportion at a given moment as the resonant emotion identifier of that moment, and generating the first resonance value of each preset moment according to the largest proportion and a preset correspondence.
Preferably, in the above technical solution, the emotion identifiers include happiness, sadness, surprise, fear, anger and disgust.
Another video evaluation method based on user emotion recognition provided by the present invention comprises:
collecting in real time the facial expressions of different users watching a video, recognizing a plurality of emotion identifiers from the expressions by expression recognition technology, and sending the emotion identifiers and their corresponding times to a server as emotion data; and
receiving from the server the resonant emotion identifier obtained by the server from the emotion data for each preset moment of the video, the calculated first resonance value and the corresponding moment.
Preferably, the above technical solution further comprises:
outputting the resonant emotion identifier, the first resonance value and the corresponding moment according to a preset output rule.
A video evaluation device based on user emotion recognition provided by the present invention comprises:
an emotion-data receiving module, configured to receive emotion data, sent by clients, of different users watching a video, the emotion data comprising a plurality of emotion identifiers and corresponding times;
a first-value calculation module, configured to obtain, according to the emotion data, the resonant emotion identifier at each preset moment of the video and calculate the corresponding first resonance value; and
a resonance-information sending module, configured to send the resonant emotion identifier, the first resonance value and the corresponding moment to the client.
Preferably, the above technical solution further comprises:
a second-value calculation module, configured to calculate the second resonance value over a preset period of the video according to the number of preset moments and the first resonance values; and
a feedback-information generation module, configured to generate user feedback information for the video according to a preset feedback-information generation method and the second resonance value.
Preferably, the above technical solution further comprises:
a video classification module, configured to obtain the emotion identifier that produces resonance most often within the preset period of the video and to classify the video according to that emotion identifier.
Preferably, in the above technical solution, the first-value calculation module specifically comprises:
a proportion calculation submodule, configured to calculate, at each preset moment, the proportion of each emotion identifier among all emotion identifiers at the current moment; and
a value calculation submodule, configured to take the emotion identifier with the largest proportion at a given moment as the resonant emotion identifier of that moment, and to generate the first resonance value of each preset moment according to the largest proportion and a preset correspondence.
Preferably, in the above technical solution, the emotion identifiers include happiness, sadness, surprise, fear, anger and disgust.
Another video evaluation device based on user emotion recognition provided by the present invention comprises:
an emotion-data sending module, configured to collect in real time the facial expressions of different users watching a video, recognize a plurality of emotion identifiers from the expressions by expression recognition technology, and send the emotion identifiers and their corresponding times to a server as emotion data; and
a resonance-information receiving module, configured to receive from the server the resonant emotion identifier obtained by the server from the emotion data for each preset moment of the video, the calculated first resonance value and the corresponding moment.
Preferably, the above technical solution further comprises:
a resonance-information output module, configured to output the resonant emotion identifier, the first resonance value and the corresponding moment according to a preset output rule.
Compared with the prior art, the video evaluation method and device based on user emotion recognition of the embodiments of the present invention have the following advantages: users' indirect evaluations of video content can be obtained actively, without requiring users to log in, which avoids malicious ratings and requires no active rating operation from viewers. Moreover, with a simple setup the breadth of evaluation-data collection is markedly improved; audience feedback at each moment of the video is counted in fine detail, providing other users with more accurate reference data that can also serve as an important basis for video content recommendation, while giving video producers detailed user-emotion feedback information.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a video evaluation method based on user emotion recognition in an embodiment of the present invention.
Fig. 2 is a schematic flowchart of a video evaluation method based on user emotion recognition according to Embodiment one of the present invention.
Fig. 3 is a schematic flowchart of a video evaluation method based on user emotion recognition according to Embodiment two of the present invention.
Fig. 4 is an interaction flowchart of a video evaluation method based on user emotion recognition according to Embodiment three of the present invention.
Fig. 5 is a schematic structural diagram of a video evaluation device based on user emotion recognition according to Embodiment four of the present invention.
Fig. 6 is a schematic structural diagram of another video evaluation device based on user emotion recognition according to Embodiment four of the present invention.
Fig. 7 is a schematic structural diagram of a video evaluation device based on user emotion recognition according to Embodiment five of the present invention.
Detailed description of the invention
The specific embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the protection scope of the present invention is not limited by these specific embodiments.
Various exemplary embodiments, features and aspects of the present invention are described in detail below with reference to the drawings. Identical reference signs in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
The word "exemplary" here means "serving as an example, embodiment or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. In addition, to better illustrate the present invention, numerous specific details are given in the following detailed description. A person skilled in the art will understand that the present invention can equally be practiced without certain of these details. In some instances, methods, means and elements well known to those skilled in the art are not described in detail, so as to highlight the gist of the present invention.
To solve the technical problem in the prior art that users' feedback information about a video can only be obtained by collecting their active evaluations of it, the present invention proposes a video evaluation method and device based on user emotion recognition.
A video evaluation method based on user emotion recognition provided by an embodiment of the present invention, as shown in Fig. 1, comprises steps S11-S13:
Step S11: receiving emotion data, sent by clients, of different users watching a video, the emotion data comprising a plurality of emotion identifiers and corresponding times;
Step S12: obtaining, according to the emotion data, the resonant emotion identifier at each preset moment of the video and calculating a corresponding first resonance value;
Step S13: sending the resonant emotion identifier, the first resonance value and the corresponding moment to the client.
The video evaluation method based on user emotion recognition of this embodiment can actively obtain users' indirect evaluations of video content without requiring users to log in, avoiding malicious ratings and requiring no active rating operation from viewers.
The video evaluation method and device based on user emotion recognition are described in detail below through several embodiments.
Embodiment one
As shown in Fig. 2, a video evaluation method based on user emotion recognition according to this embodiment of the present invention, applicable to a server, comprises the following steps:
Step S101: receiving emotion data, sent by clients, of different users watching a video, the emotion data comprising a plurality of emotion identifiers and corresponding times;
For example, a video capture device may be provided at the client, which is turned on automatically after the user opens a video, or the user is prompted whether to turn it on. After the capture device is turned on, the user's facial expressions are collected through it, the emotion corresponding to each expression is recognized by expression recognition technology, an emotion identifier is generated in textual or digital form, and emotion data is generated together with the corresponding time and sent to the server. In the process of sending the emotion data, the client need not save or upload the captured pictures of the user's expressions.
The server receives the emotion data reported by the clients for the different users watching the video; the emotion data contains the emotion identifiers and corresponding times. Preferably, the emotion identifiers include happiness, sadness, surprise, fear, anger and disgust.
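The patent leaves the wire format of the emotion data open; the following Python sketch shows one plausible shape for a client report. The field names `user`, `video`, `emotion` and `time` are illustrative assumptions, not specified by the patent.

```python
import json
import time

def build_mood_record(user_id, video_id, emotion_id, timestamp=None):
    """Assemble one emotion record as a client might report it.

    Only the emotion identifier and its time are required by the
    method; every field name here is a hypothetical choice.
    """
    return {
        "user": user_id,
        "video": video_id,
        "emotion": emotion_id,  # e.g. "happy", "sad", "surprised"
        "time": timestamp if timestamp is not None else int(time.time()),
    }

# Emotion data for one user at two preset moments of video "X"
payload = json.dumps([
    build_mood_record("A", "X", "happy", 5),
    build_mood_record("A", "X", "sad", 10),
])
```

The client would send `payload` to the server; note that only identifiers and times travel, never the captured expression pictures.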
Step S102: obtaining, according to the emotion data, the resonant emotion identifier at each preset moment of the video and calculating a corresponding first resonance value;
Specifically, step S102 may be implemented as the following steps: calculating, at each preset moment, the proportion of each emotion identifier among all emotion identifiers at the current moment; taking the emotion identifier with the largest proportion at a given moment as the resonant emotion identifier of that moment, and generating the first resonance value of each preset moment according to the largest proportion and a preset correspondence.
The preset moments may, for example, fall at a fixed interval of 5 or 10 seconds (but are not limited to this). "Producing resonance" in the embodiments of the present invention means that the server counts the emotion identifiers of each type at each preset moment according to the emotion data uploaded by the clients, and determines the emotion identifier with the largest share (the most occurrences) as the resonant emotion identifier. For example, if at a certain moment four kinds of emotion identifiers appear, namely happiness, sadness, surprise and fear, with respective shares of 40%, 30%, 20% and 10%, the users are considered to have produced a resonance of the "happy" emotion at that moment.
The first resonance value represents the degree of resonance of user emotion at each preset moment. The preset correspondence may, for example, be: if at a certain moment the users produce a resonance of the "happy" emotion with a share of 40%, the first resonance value at that moment is determined to be 40; if the share is 50%, the first resonance value is determined to be 50. A person skilled in the art will understand that the above preset correspondence is only an example; any method that distinguishes the first resonance value of the current moment according to the share of the resonant emotion identifier falls within the protection scope of the present invention.
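The counting described in step S102 can be sketched in Python. Following the worked example in the text, the sketch takes the first resonance value to be the dominant emotion's percentage share (40% → 40); this is one possible preset correspondence, and the patent explicitly allows others.

```python
from collections import Counter

def first_resonance(emotion_ids):
    """Given the emotion identifiers reported at one preset moment,
    return (resonant_emotion, first_resonance_value).

    The resonant emotion is the one with the largest share; the
    value is that share as a percentage, per the text's example.
    """
    counts = Counter(emotion_ids)
    total = sum(counts.values())
    emotion, n = counts.most_common(1)[0]
    return emotion, round(100 * n / total)

# 40% happy, 30% sad, 20% surprised, 10% afraid at one moment
moment = ["happy"] * 4 + ["sad"] * 3 + ["surprised"] * 2 + ["afraid"]
assert first_resonance(moment) == ("happy", 40)
```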
Step S103: sending the resonant emotion identifier, the first resonance value and the corresponding moment to the client.
After obtaining the resonant emotion identifier and first resonance value of each preset moment, the server returns this information to the client; from the information received, client users can accurately learn the indirect user evaluation of the video.
In the method for the embodiment of the present invention, it is preferred that can also comprise the following steps:
Step S104: according to number and the described first sympathetic response value of described predetermined time, calculates and regards described
The second sympathetic response value in the preset period of time of frequency;
In the method for the embodiment of the present invention, using the second sympathetic response value as total sympathetic response degree, Appreciation gist
For the predetermined time number empathized, (embodiment of the present invention thinks that there is a sympathetic response in each moment, and difference exists
Dividing in sympathetic response degree height) and the sympathetic response degree (the first sympathetic response value) in each moment, ask for sympathetic response degree
Average.
The second sympathetic response value described in the embodiment of the present invention can be thought within the whole period of video, and first altogether
The average of ring value, such as: the number of predetermined time is 5, its first corresponding respectively sympathetic response value is
30,40,60,90,20, then the second sympathetic response value is (30+40+60+90+20) ÷ 5=48.
Step S105: generating user feedback information for the video according to a preset feedback-information generation method and the second resonance value.
The preset feedback-information generation method may, for example, be: the lower the total degree of resonance (the second resonance value), the more confused the information expressed by the video content and the more neutral the audience's emotion, so the user feedback information is that the video content is dull; the higher the total degree of resonance (the second resonance value), the better the video content attracts the attention of most users, so the user feedback information is that the video content is excellent and worth watching. A person skilled in the art will understand that the preset feedback-information generation method is not limited to the above; any method that generates corresponding feedback information according to the level of the second resonance value falls within the scope claimed by the present invention.
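A minimal sketch of steps S104 and S105, assuming the averaging rule above and an arbitrary threshold of 50 for switching between the two feedback messages (the patent leaves the exact generation method open):

```python
def second_resonance(first_values):
    """Step S104: average of the first resonance values over
    the preset period of the video."""
    return sum(first_values) / len(first_values)

def feedback(second_value, threshold=50):
    """Step S105: map the total degree of resonance to feedback text.
    The threshold and wording are illustrative assumptions."""
    if second_value < threshold:
        return "video content is dull; audience emotion stayed neutral"
    return "video content is excellent and attracts most viewers"

# The worked example from the text: five preset moments
values = [30, 40, 60, 90, 20]
assert second_resonance(values) == 48  # (30+40+60+90+20) / 5
```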
Preferably, the method of this embodiment may further comprise the following steps:
Step S106: obtaining the emotion identifier that produces resonance most often within the preset period of the video, and classifying the video according to that emotion identifier.
For example, if among the 5 moments of a video the resonant emotion identifier at 3 of them is "happy", the video's overall feeling to viewers is "happy", and it can roughly be classified into the "comedy" category.
The video evaluation method based on user emotion recognition of this embodiment can actively obtain users' indirect evaluations of video content without requiring users to log in, avoiding malicious ratings and requiring no active rating operation from viewers. Moreover, with a simple setup the breadth of evaluation-data collection is markedly improved; audience feedback at each moment of the video is counted in fine detail, providing other users with more accurate reference data that can also serve as an important basis for video content recommendation, while giving video producers detailed user-emotion feedback information.
Embodiment two
As shown in Fig. 3, a video evaluation method based on user emotion recognition according to this embodiment of the present invention, applicable to clients such as PCs, comprises the following steps:
Step S201: collecting in real time the facial expressions of different users watching a video, recognizing a plurality of emotion identifiers from the expressions by expression recognition technology, and sending the emotion identifiers and their corresponding times to the server as emotion data;
For example, a video capture device may be provided at the client, which is turned on automatically after the user opens a video, or the user is prompted whether to turn it on. After the capture device is turned on, the user's facial expressions are collected through it, the emotion corresponding to each expression is recognized by expression recognition technology, an emotion identifier is generated in textual or digital form, and emotion data is generated together with the corresponding time and sent to the server. In the process of sending the emotion data, the client need not save or upload the captured pictures of the user's expressions.
Step S202: receiving from the server the resonant emotion identifier obtained by the server from the emotion data for each preset moment of the video, the calculated first resonance value and the corresponding moment;
The server receives the emotion data reported by the clients for the different users watching the video; the emotion data contains the emotion identifiers and corresponding times. Preferably, the emotion identifiers include happiness, sadness, surprise, fear, anger and disgust.
Specifically, the server's obtaining of the resonant emotion identifier at each preset moment of the video from the emotion data and its calculation of the corresponding first resonance value may be implemented as the following steps: calculating, at each preset moment, the proportion of each emotion identifier among all emotion identifiers at the current moment; taking the emotion identifier with the largest proportion at a given moment as the resonant emotion identifier of that moment, and generating the first resonance value of each preset moment according to the largest proportion and a preset correspondence.
The preset moments may, for example, fall at a fixed interval of 5 or 10 seconds (but are not limited to this). "Producing resonance" in the embodiments of the present invention means that the server counts the emotion identifiers of each type at each preset moment according to the emotion data uploaded by the clients, and determines the emotion identifier with the largest share (the most occurrences) as the resonant emotion identifier. For example, if at a certain moment four kinds of emotion identifiers appear, namely happiness, sadness, surprise and fear, with respective shares of 40%, 30%, 20% and 10%, the users are considered to have produced a resonance of the "happy" emotion at that moment.
The first resonance value represents the degree of resonance of user emotion at each preset moment. The preset correspondence may, for example, be: if at a certain moment the users produce a resonance of the "happy" emotion with a share of 40%, the first resonance value at that moment is determined to be 40; if the share is 50%, the first resonance value is determined to be 50. A person skilled in the art will understand that the above preset correspondence is only an example; any method that distinguishes the first resonance value of the current moment according to the share of the resonant emotion identifier falls within the protection scope of the present invention.
After obtaining the resonant emotion identifier and first resonance value of each preset moment, the server returns this information to the client; from the information received, client users can accurately learn the indirect user evaluation of the video.
Preferably, the method of the embodiment of the present invention may further comprise the following step:
Step S203: outputting, according to a preset output rule, the empathized emotion mark, the first sympathetic response value and the corresponding moment.
After the client obtains from the server the empathized emotion mark, the first sympathetic response value, the corresponding moment and other such information, it can output and display them to other users. The preset output rule may be, for example: drawing a bar chart whose horizontal axis lists the moments in chronological order and whose vertical axis shows the corresponding first sympathetic response value (degree of sympathetic response), with different colors distinguishing the empathized emotion marks.
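The bar-chart output rule above might be rendered, in a minimal text-only sketch, as follows. The rendering format here is an assumption: a real client would draw a graphical chart and use color for the emotion marks, whereas this sketch substitutes a text label.

```python
def render_bar_chart(points):
    """points: list of (moment, emotion_mark, first_value) tuples in
    chronological order. Returns one text line per moment, with a bar
    whose length is proportional to the first sympathetic response value;
    the emotion mark label stands in for the color distinction."""
    lines = []
    for moment, mark, value in points:
        bar = "#" * (value // 10)  # one '#' per 10 points of sympathetic response
        lines.append(f"moment {moment} [{mark:>7}] {bar} {value}")
    return "\n".join(lines)

# The four moments of video X from Embodiment Three below.
chart = render_bar_chart([(1, "happy", 50), (2, "angry", 30),
                          (3, "happy", 70), (4, "happy", 60)])
print(chart)
```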
The video evaluation method based on user emotion identification of the embodiment of the present invention can actively obtain users' indirect evaluations of video content without requiring users to log in, which avoids malicious evaluations and spares viewers from having to submit appraisals themselves. Moreover, a simple setup significantly broadens the collection of evaluation data and finely records the audience feedback at each moment of the video, providing relatively accurate reference data for other users — which can serve as an important basis for video content recommendation — while also providing the video producer with detailed user emotion feedback.
Embodiment Three
As shown in Fig. 4, the interaction flow of a video evaluation method based on user emotion identification of the embodiment of the present invention is described in detail. Taking users A, B, C, D and E as an example, the video is X, the predetermined times are moment 1, moment 2, moment 3 and moment 4, and the preset output rule of the client is a bar chart. The method comprises the following steps:
Step S301: the client collects the expression actions of users A, B, C, D and E respectively while they watch video X, and identifies multiple emotion marks from the expression actions using expression recognition technology;
Step S302: the client sends the emotion marks of users A, B, C, D and E while watching video X, together with the corresponding times, to the server as emotion data.
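The record the client sends in step S302 might look like the following minimal sketch. The field names and the JSON encoding are illustrative assumptions; the patent does not specify a wire format.

```python
import json
import time

def emotion_data_message(user_id, video_id, emotion_mark):
    """Builds one emotion-data record for the server: the recognized
    emotion mark plus the corresponding time (here a Unix timestamp)."""
    return json.dumps({
        "user": user_id,
        "video": video_id,
        "emotion_mark": emotion_mark,
        "time": int(time.time()),
    })

# e.g. user A watching video X is recognized as "happy"
msg = emotion_data_message("A", "X", "happy")
print(msg)
```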
Step S303: the server receives the emotion data of users A, B, C, D and E while watching video X, the emotion data including multiple emotion marks and the corresponding times;
Step S304: according to the received emotion data of users A, B, C, D and E while watching video X, the server obtains the emotion marks empathized with at moment 1, moment 2, moment 3 and moment 4 of video X and calculates the corresponding first sympathetic response values.
For example: the emotion empathized with at moment 1 is "happy", with a first sympathetic response value of 50; at moment 2 it is "angry", with a first sympathetic response value of 30; at moment 3 it is "happy", with a first sympathetic response value of 70; at moment 4 it is "happy", with a first sympathetic response value of 60.
Step S305: the server sends the emotion marks empathized with and the first sympathetic response values at the 4 moments of video X to the client.
Step S306: the client outputs the emotion marks empathized with and the first sympathetic response values at the 4 moments of video X in the form of a bar chart.
Step S307: according to the number of predetermined times, 4, and the first sympathetic response values (50, 30, 70, 60), the server calculates the second sympathetic response value: (50 + 30 + 70 + 60) ÷ 4 = 52.5;
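In this example the second sympathetic response value of step S307 is simply the arithmetic mean of the first sympathetic response values over the predetermined times; a one-function sketch (the function name is an invented label):

```python
def second_sympathetic_response(first_values):
    """Second sympathetic response value over the preset period:
    the mean of the first sympathetic response values, as computed
    in step S307 of the example."""
    return sum(first_values) / len(first_values)

print(second_sympathetic_response([50, 30, 70, 60]))  # 52.5
```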
Step S308: according to a preset feedback information generating method and the second sympathetic response value, the user feedback of the video is generated.
Step S309: the emotion mark empathized with most often within the preset period of the video, "happy", is obtained, and video X is classified as a comedy according to the emotion mark "happy".
The present embodiment is the interaction flow of the methods of Embodiment One and Embodiment Two, and has all the advantageous effects of Embodiment One and Embodiment Two, which are not repeated here.
Embodiment Four
As shown in Fig. 5, a video evaluation device based on user emotion identification of the embodiment of the present invention includes:
an emotion data receiving module 41, configured to receive emotion data, sent by a client, of different users while watching a video, the emotion data including multiple emotion marks and the corresponding times;
a first score computing module 42, configured to obtain, according to the emotion data, the emotion mark empathized with at each predetermined time of the video and calculate the corresponding first sympathetic response value;
a sympathetic response information sending module 43, configured to send the empathized emotion mark, the first sympathetic response value and the corresponding moment to the client.
Preferably, as shown in Fig. 6, the above technical solution further includes:
a second score computing module 44, configured to calculate the second sympathetic response value within the preset period of the video according to the number of predetermined times and the first sympathetic response values;
a feedback information generating module 45, configured to generate the user feedback of the video according to a preset feedback information generating method and the second sympathetic response value.
Preferably, the above technical solution further includes:
a video category division module 46, configured to obtain the emotion mark empathized with most often within the preset period of the video, and to classify the video according to that most empathized emotion mark.
Preferably, in the above technical solution, the first score computing module 42 specifically includes:
a ratio calculating submodule 421, configured to calculate, at each predetermined time, the ratio of each of the multiple emotion marks among all emotion marks at the current moment;
a score calculating submodule 422, configured to take the emotion mark corresponding to the largest ratio at a given moment as the emotion mark empathized with at the current moment, and to generate the first sympathetic response value of each predetermined time according to the largest ratio and a preset corresponding relation.
Preferably, in the above technical solution, the emotion marks include happy, sad, surprised, fearful, angry and disgusted.
The present embodiment is the device corresponding to the method of Embodiment One, and has all the advantageous technical effects of Embodiment One, which are not repeated here.
Embodiment Five
As shown in Fig. 7, a video evaluation device based on user emotion identification of the embodiment of the present invention includes:
an emotion data sending module 51, configured to collect in real time the expression actions of different users while watching a video, identify multiple emotion marks from the expression actions using expression recognition technology, and send the multiple emotion marks and the corresponding times to a server as emotion data;
a sympathetic response information receiving module 52, configured to receive, from the server, the emotion mark empathized with at each predetermined time of the video obtained by the server according to the emotion data, the calculated first sympathetic response value, and the corresponding moment.
Preferably, the above technical solution further includes:
a sympathetic response information output module 53, configured to output, according to a preset output rule, the empathized emotion mark, the first sympathetic response value and the corresponding moment.
The present embodiment is the device corresponding to the method of Embodiment Two, and has all the advantageous technical effects of Embodiment Two, which are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be carried out by hardware under the control of program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention. In other words, the above are only specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto; any person familiar with the technical field can readily conceive of changes or replacements within the technical scope disclosed by the invention, all of which shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope of the claims.
Claims (14)
1. A video evaluation method based on user emotion identification, characterized by comprising:
receiving emotion data, sent by a client, of different users while watching a video, the emotion data including multiple emotion marks and the corresponding times;
obtaining, according to the emotion data, the emotion mark empathized with at each predetermined time of the video, and calculating the corresponding first sympathetic response value;
sending the empathized emotion mark, the first sympathetic response value and the corresponding moment to the client.
2. The method according to claim 1, characterized by further comprising:
calculating the second sympathetic response value within the preset period of the video according to the number of predetermined times and the first sympathetic response values;
generating the user feedback of the video according to a preset feedback information generating method and the second sympathetic response value.
3. The method according to claim 1 or 2, characterized by further comprising:
obtaining the emotion mark empathized with most often within the preset period of the video, and classifying the video according to that most empathized emotion mark.
4. The method according to claim 1 or 2, characterized in that obtaining, according to the emotion data, the emotion mark empathized with at each predetermined time of the video and calculating the corresponding first sympathetic response value comprises:
calculating, at each predetermined time, the ratio of each of the multiple emotion marks among all emotion marks at the current moment;
taking the emotion mark corresponding to the largest ratio at a given moment as the emotion mark empathized with at the current moment, and generating the first sympathetic response value of each predetermined time according to the largest ratio and a preset corresponding relation.
5. The method according to claim 1 or 2, characterized in that the emotion marks include happy, sad, surprised, fearful, angry and disgusted.
6. A video evaluation method based on user emotion identification, characterized by comprising:
collecting in real time the expression actions of different users while watching a video, identifying multiple emotion marks from the expression actions using expression recognition technology, and sending the multiple emotion marks and the corresponding times to a server as emotion data;
receiving, from the server, the emotion mark empathized with at each predetermined time of the video obtained by the server according to the emotion data, the calculated first sympathetic response value, and the corresponding moment.
7. The method according to claim 6, characterized by further comprising:
outputting, according to a preset output rule, the empathized emotion mark, the first sympathetic response value and the corresponding moment.
8. A video evaluation device based on user emotion identification, characterized by comprising:
an emotion data receiving module, configured to receive emotion data, sent by a client, of different users while watching a video, the emotion data including multiple emotion marks and the corresponding times;
a first score computing module, configured to obtain, according to the emotion data, the emotion mark empathized with at each predetermined time of the video and calculate the corresponding first sympathetic response value;
a sympathetic response information sending module, configured to send the empathized emotion mark, the first sympathetic response value and the corresponding moment to the client.
9. The device according to claim 8, characterized by further comprising:
a second score computing module, configured to calculate the second sympathetic response value within the preset period of the video according to the number of predetermined times and the first sympathetic response values;
a feedback information generating module, configured to generate the user feedback of the video according to a preset feedback information generating method and the second sympathetic response value.
10. The device according to claim 8 or 9, characterized by further comprising:
a video category division module, configured to obtain the emotion mark empathized with most often within the preset period of the video, and to classify the video according to that most empathized emotion mark.
11. The device according to claim 8 or 9, characterized in that the first score computing module specifically includes:
a ratio calculating submodule, configured to calculate, at each predetermined time, the ratio of each of the multiple emotion marks among all emotion marks at the current moment;
a score calculating submodule, configured to take the emotion mark corresponding to the largest ratio at a given moment as the emotion mark empathized with at the current moment, and to generate the first sympathetic response value of each predetermined time according to the largest ratio and a preset corresponding relation.
12. The device according to claim 8 or 9, characterized in that the emotion marks include happy, sad, surprised, fearful, angry and disgusted.
13. A video evaluation device based on user emotion identification, characterized by comprising:
an emotion data sending module, configured to collect in real time the expression actions of different users while watching a video, identify multiple emotion marks from the expression actions using expression recognition technology, and send the multiple emotion marks and the corresponding times to a server as emotion data;
a sympathetic response information receiving module, configured to receive, from the server, the emotion mark empathized with at each predetermined time of the video obtained by the server according to the emotion data, the calculated first sympathetic response value, and the corresponding moment.
14. The device according to claim 13, characterized by further comprising:
a sympathetic response information output module, configured to output, according to a preset output rule, the empathized emotion mark, the first sympathetic response value and the corresponding moment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610509864.5A CN105959737A (en) | 2016-06-30 | 2016-06-30 | Video evaluation method and device based on user emotion recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610509864.5A CN105959737A (en) | 2016-06-30 | 2016-06-30 | Video evaluation method and device based on user emotion recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105959737A true CN105959737A (en) | 2016-09-21 |
Family
ID=56902115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610509864.5A Pending CN105959737A (en) | 2016-06-30 | 2016-06-30 | Video evaluation method and device based on user emotion recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105959737A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133561A (en) * | 2017-03-16 | 2017-09-05 | 腾讯科技(深圳)有限公司 | Event-handling method and device |
CN107545415A (en) * | 2017-10-01 | 2018-01-05 | 上海量科电子科技有限公司 | Payment evaluation method, client and system based on action |
CN107809674A (en) * | 2017-09-30 | 2018-03-16 | 努比亚技术有限公司 | A kind of customer responsiveness acquisition, processing method, terminal and server based on video |
CN108337563A (en) * | 2018-03-16 | 2018-07-27 | 深圳创维数字技术有限公司 | Video evaluation method, apparatus, equipment and storage medium |
CN108694384A (en) * | 2018-05-14 | 2018-10-23 | 芜湖岭上信息科技有限公司 | A kind of viewer satisfaction investigation apparatus and method based on image and sound |
CN108814595A (en) * | 2018-03-15 | 2018-11-16 | 南京邮电大学 | EEG signals fear degree graded features research based on VR system |
CN108848416A (en) * | 2018-06-21 | 2018-11-20 | 北京密境和风科技有限公司 | The evaluation method and device of audio-video frequency content |
CN108881985A (en) * | 2018-07-18 | 2018-11-23 | 南京邮电大学 | Program points-scoring system based on brain electricity Emotion identification |
CN109447729A (en) * | 2018-09-17 | 2019-03-08 | 平安科技(深圳)有限公司 | A kind of recommended method of product, terminal device and computer readable storage medium |
CN110020625A (en) * | 2019-04-09 | 2019-07-16 | 昆山古鳌电子机械有限公司 | A kind of service evaluation system |
CN110175565A (en) * | 2019-05-27 | 2019-08-27 | 北京字节跳动网络技术有限公司 | The method and apparatus of personage's emotion for identification |
CN110888997A (en) * | 2018-09-10 | 2020-03-17 | 北京京东尚科信息技术有限公司 | Content evaluation method and system and electronic equipment |
CN111026265A (en) * | 2019-11-29 | 2020-04-17 | 华南理工大学 | System and method for continuously labeling emotion labels based on VR scene videos |
CN112492397A (en) * | 2019-09-12 | 2021-03-12 | 上海哔哩哔哩科技有限公司 | Video processing method, computer device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030093784A1 (en) * | 2001-11-13 | 2003-05-15 | Koninklijke Philips Electronics N.V. | Affective television monitoring and control |
CN1662922A (en) * | 2002-06-27 | 2005-08-31 | 皇家飞利浦电子股份有限公司 | Measurement of content ratings through vision and speech recognition |
CN102945624A (en) * | 2012-11-14 | 2013-02-27 | 南京航空航天大学 | Intelligent video teaching system based on cloud calculation model and expression information feedback |
CN104299225A (en) * | 2014-09-12 | 2015-01-21 | 姜羚 | Method and system for applying facial expression recognition in big data analysis |
CN105094292A (en) * | 2014-05-05 | 2015-11-25 | 索尼公司 | Method and device evaluating user attention |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105959737A (en) | Video evaluation method and device based on user emotion recognition | |
CN108108821B (en) | Model training method and device | |
CN103327045B (en) | User recommendation method and system in social network | |
CN103718166B (en) | Messaging device, information processing method | |
CN110134829A (en) | Video locating method and device, storage medium and electronic device | |
CN109829064B (en) | Media resource sharing and playing method and device, storage medium and electronic device | |
US10223430B2 (en) | Intelligent playbook application | |
US20110179003A1 (en) | System for Sharing Emotion Data and Method of Sharing Emotion Data Using the Same | |
CN105072460B (en) | A kind of information labeling and correlating method based on video content element, system and equipment | |
DE112015003750T5 (en) | SYSTEMS AND METHOD FOR WEARING MEASUREMENT OF AUDIENCE | |
CN110879851A (en) | Video dynamic cover generation method and device, electronic equipment and readable storage medium | |
DE102008044635A1 (en) | Apparatus and method for providing a television sequence | |
JP2016040660A (en) | Content recommendation device, content recommendation method, and content recommendation program | |
CN103207662A (en) | Method and device for obtaining physiological characteristic information | |
CN112601105B (en) | Information extraction method and device applied to live comments | |
CN106685798A (en) | Method and device for generating message, and mobile terminal | |
CN111107444B (en) | User comment generation method, electronic device and storage medium | |
CN109286848B (en) | Terminal video information interaction method and device and storage medium | |
CN111428454A (en) | Configurable report generation method, device, equipment and readable storage medium | |
CN108353127A (en) | Image stabilization based on depth camera | |
CN106572390A (en) | Audio and video recommending method and equipment | |
CN107210001A (en) | Use the autonomous learning systems of video segment | |
CN110334620A (en) | Appraisal procedure, device, storage medium and the electronic equipment of quality of instruction | |
JP6204315B2 (en) | Caricature image generating apparatus, caricature image generating method, and caricature image generating program | |
US10120932B2 (en) | Social capture rules |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20160921 |