CN108337563B - Video evaluation method, device, equipment and storage medium - Google Patents

Video evaluation method, device, equipment and storage medium

Info

Publication number
CN108337563B
CN108337563B
Authority
CN
China
Prior art keywords
video
concentration
evaluation
determining
evaluated
Prior art date
Legal status
Active
Application number
CN201810217801.1A
Other languages
Chinese (zh)
Other versions
CN108337563A (en)
Inventor
吕巧
Current Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Original Assignee
Shenzhen Skyworth Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Skyworth Digital Technology Co Ltd
Priority to CN201810217801.1A
Publication of CN108337563A
Application granted
Publication of CN108337563B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25891Management of end-user data being end-user preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/475End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for rating content, e.g. scoring a recommended movie

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The embodiments of the invention disclose a video evaluation method, device, equipment and storage medium. The method is applied to a terminal and comprises the following steps: acquiring a face image of a viewer watching a video to be evaluated; obtaining evaluation parameters according to the face image, wherein the evaluation parameters comprise concentration expression information and time information; and sending the evaluation parameters to a server, whereupon the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. This solves the technical problem that prior-art video evaluation methods struggle to objectively and truly reflect the quality of the video to be evaluated, and achieves the technical effect of evaluating the video objectively and accurately.

Description

Video evaluation method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a video evaluation method, a video evaluation device, video evaluation equipment and a storage medium.
Background
Existing video types include movies, television shows, live broadcasts, short videos, and the like. Faced with a huge number of videos, a user usually decides whether to click and watch a video by referring to its network score. Although mainstream websites have user scoring systems through which users can score according to their own preferences, most people do not have the habit of scoring videos. In addition, existing video scoring systems take the following forms: (1) weighted scoring calculated from users' click evaluations; (2) weighted scoring from statistics on viewing duration and number of viewers; (3) weighted scoring from the degree of user attention and search frequency; (4) converting users' written evaluations after viewing into scores.
In summary, prior-art video evaluation methods can be manipulated by hired raters submitting false scores, so the resulting scores are not authentic; this creates the technical problem that the quality of the video to be evaluated is difficult to reflect objectively and truly.
Disclosure of Invention
The video evaluation method, device, equipment and storage medium provided by the embodiments of the invention solve the technical problem that prior-art video evaluation methods struggle to objectively and truly reflect the quality of the video to be evaluated.
In a first aspect, an embodiment of the present invention provides a video evaluation method, applied to a terminal, including:
acquiring a face image of a viewer watching a video to be evaluated;
obtaining evaluation parameters according to the face image, wherein the evaluation parameters comprise concentration expression information and time information;
and sending the evaluation parameters to a server, so that the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
Further, the acquiring the evaluation parameters includes:
when the evaluation type is the video grade, acquiring emotional expression information, wherein the emotional expression information, combined with the concentration expression information, is used by the server to determine the video grade of the video to be evaluated.
Further, the acquiring emotional expression information includes:
determining the video type of a video to be evaluated;
determining emotional expression types according to the video types;
and acquiring emotional expression information according to the determined emotional expression type.
In a second aspect, an embodiment of the present invention further provides a video evaluation method, applied to a server, including:
obtaining evaluation parameters, wherein the evaluation parameters comprise time information and concentration expression information, the concentration expression information being determined from a face image of a viewer watching the video to be evaluated;
and determining the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
Further, the determining the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information includes:
determining a first concentration value according to the ratio of the concentration expression information to the watching duration in the time information;
determining a second concentration value according to the ratio of the watching duration to the video duration in the time information;
and determining the good evaluation degree of the video to be evaluated according to the first concentration value and the second concentration value.
Further, the determining a first concentration value according to the ratio of the concentration expression information to the viewing duration in the time information includes:
determining a first weight contribution value of the concentration expression information to the good evaluation degree and a second weight contribution value of the viewing duration to the good evaluation degree;
determining weighted concentration expression information according to the concentration expression information and the first weight contribution value;
determining a weighted viewing duration according to the viewing duration and the second weight contribution value;
and determining a first concentration value according to the ratio of the weighted concentration expression information to the weighted watching duration.
Further, the determining a second concentration value according to the ratio of the viewing duration to the video duration comprises:
and determining a second concentration value according to the ratio of the weighted watching duration to the video duration.
Further, after determining the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information, the method further includes:
and determining the video grade of the video to be evaluated according to the ratio of the measure of the emotional expression to the measure of the concentration expression in the evaluation parameters.
In a third aspect, an embodiment of the present invention further provides a server device, where the server device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the video evaluation method of the second aspect.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform the video evaluation method according to the second aspect.
According to the technical scheme of the video evaluation method, a terminal acquires a face image of a viewer watching a video to be evaluated and obtains evaluation parameters from the face image, the evaluation parameters comprising time information and concentration expression information, the concentration expression information being determined from the face image. The terminal then sends the evaluation parameters to a server, and the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. Because the expression information and the time information are produced subconsciously by the viewer while watching, the good evaluation degree of the video can be determined accurately and quickly. This solves the technical problem that prior-art methods struggle to objectively and truly reflect the quality of the video to be evaluated, and achieves the technical effect of evaluating the video objectively and accurately.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a video evaluation method according to an embodiment of the present invention;
fig. 2 is a structural block diagram of a video evaluation apparatus according to a second embodiment of the present invention;
fig. 3 is a flowchart of a video evaluation method according to a third embodiment of the present invention;
FIG. 4 is a flowchart of a goodness determination method provided in the third embodiment of the invention;
fig. 5 is a flowchart of a video evaluation method according to a fourth embodiment of the present invention;
fig. 6 is a flowchart of a video evaluation method according to a fifth embodiment of the present invention;
fig. 7 is a block diagram of a video evaluation apparatus according to a sixth embodiment of the present invention;
fig. 8 is a schematic structural diagram of a server device according to a seventh embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described through embodiments with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Fig. 1 is a flowchart of a video evaluation method according to the first embodiment of the present invention. The technical scheme of this embodiment is suitable for automatically evaluating a video, and is particularly suitable for the case where the terminal automatically acquires the evaluation parameters. The method can be executed by the video evaluation device provided by the embodiments of the invention; the device can be implemented in software and/or hardware and configured in a processor. The method specifically comprises the following steps:
s101, obtaining a face image of a viewer watching a video to be evaluated.
The terminal is provided with a camera, and the camera collects one frame of face image of the viewer watching the video to be evaluated at a preset interval. In this embodiment the preset interval is 3 s; in practice it can be set according to actual conditions.
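As an illustrative sketch (not part of the patent text), the relationship between viewing duration and the number of face-image frames collected at a fixed preset interval can be computed as follows; the 3 s default matches the interval chosen in this embodiment:

```python
def captured_frame_count(viewing_duration_s: float, interval_s: float = 3.0) -> int:
    """Number of face-image frames a camera capturing one frame every
    `interval_s` seconds collects over `viewing_duration_s` seconds."""
    if interval_s <= 0:
        raise ValueError("interval must be positive")
    return int(viewing_duration_s // interval_s)

# A viewer who watches for 60 s at the default 3 s interval yields 20 frames.
print(captured_frame_count(60))     # 20
print(captured_frame_count(10, 3))  # 3
```

This frame count is also what normalizes the concentration expression count in the worked example of embodiment three.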
S102, obtaining evaluation parameters according to the face image, wherein the evaluation parameters comprise concentration expression information and time information.
Video evaluation usually involves a good evaluation degree, which reflects how well viewers receive and like the video. When the video quality is high, viewers usually like it, and when a viewer watches a video he or she likes, the viewer's attention is concentrated: the concentrated expression is maintained or substantially maintained during viewing, and the viewing time is longer. Therefore, this embodiment uses the concentration expression information and the time information as the evaluation parameters for evaluating the good evaluation degree of the video.
When the evaluation type of the video is the good evaluation degree, the expression parameters among the evaluation parameters may include only concentration expression information; when the evaluation type of the video further includes a video grade, the expression parameters further include emotional expression information. The server determines the video grade of the video to be evaluated according to the emotional expression information together with the concentration expression information. The emotional expression information is acquired as follows:
first determine the video type of the video to be evaluated, then determine the emotional expression type according to the video type, and then acquire emotional expression information according to the determined emotional expression type. For example, when the video type of the video to be evaluated is comedy, it may be determined that comedy corresponds to the happy expression, and happy expression information is then acquired.
Common video types include comedy, horror, thriller, romance and the like. This embodiment does not limit the specific method for determining the video type; the prior art can be used.
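The type-to-expression lookup described above can be sketched as follows; the type names and the particular mapping are illustrative assumptions, not values specified by the patent:

```python
# Illustrative mapping from video type to the emotional expression type
# whose occurrences the terminal should collect (names are assumptions).
EMOTION_FOR_VIDEO_TYPE = {
    "comedy": "happy",
    "horror": "fear",
    "thriller": "tense",
    "romance": "moved",
}

def emotional_expression_type(video_type: str) -> str:
    """Return the emotional expression type to track for a video type;
    fall back to tracking concentration only for unknown types."""
    return EMOTION_FOR_VIDEO_TYPE.get(video_type, "concentration")

print(emotional_expression_type("comedy"))  # happy
```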
S103, sending the evaluation parameters to a server, and determining the good evaluation degree of the video to be evaluated by the server according to the dependency relationship between the concentration expression information and the time information.
The terminal sends the acquired evaluation parameters to the server, and the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. A dependency relationship in this embodiment is a relationship in which a change in one parameter causes, or may cause, a change in another parameter. For example, the number of concentration expressions increases as the viewer's viewing time increases, so there is a dependency relationship between the number of concentration expressions and the viewing duration. Because the video duration is fixed, there is no dependency relationship between the number of concentration expressions and the video duration.
According to the technical scheme of this video evaluation method, a terminal acquires a face image of a viewer watching a video to be evaluated and obtains evaluation parameters from the face image, the evaluation parameters comprising time information and concentration expression information, the concentration expression information being determined from the face image. The terminal then sends the evaluation parameters to a server, and the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. Because the expression information and the time information are produced subconsciously by the viewer while watching, the good evaluation degree of the video can be determined accurately and quickly. This solves the technical problem that prior-art methods struggle to objectively and truly reflect the quality of the video to be evaluated, and achieves the technical effect of evaluating the video objectively and accurately.
Example two
Fig. 2 is a block diagram of a video evaluation apparatus according to the second embodiment of the present invention. The device is used to execute the video evaluation method provided by any of the above embodiments; it can be implemented in software or hardware and configured in a processor. The device includes:
the face image acquisition module 11 is used for acquiring a face image of a viewer watching a video to be evaluated;
an evaluation parameter obtaining module 12, configured to obtain an evaluation parameter according to the face image, where the evaluation parameter includes concentration expression information and time information;
and the sending module 13 is configured to send the evaluation parameters to a server, and the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
According to the technical scheme of the video evaluation device of this embodiment, a face image of a viewer watching a video to be evaluated is acquired through the terminal, and evaluation parameters are obtained from the face image, the evaluation parameters comprising time information and concentration expression information, the concentration expression information being determined from the face image. The terminal then sends the evaluation parameters to a server, and the server determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. Because the expression information and the time information are produced subconsciously by the viewer while watching, the good evaluation degree of the video can be determined accurately and quickly. This solves the technical problem that prior-art methods struggle to objectively and truly reflect the quality of the video to be evaluated, and achieves the technical effect of evaluating the video objectively and accurately.
The video evaluation device provided by the embodiments of the invention can execute the video evaluation method provided by any embodiment of the invention, has the corresponding functional modules, and achieves the beneficial effects of the executed method.
Example three
Fig. 3 is a flowchart of a video evaluation method according to the third embodiment of the present invention. The technical scheme of this embodiment is suitable for automatically evaluating a video, and is particularly suitable for the case where the server automatically evaluates the video. The method can be executed by the video evaluation device provided by the embodiments of the invention; the device can be implemented in software and/or hardware and configured in a processor. The method specifically comprises the following steps:
s201, obtaining evaluation parameters, wherein the evaluation parameters comprise time information and concentration expression information, and the concentration expression information is determined by watching a face image of a video to be evaluated by a viewer.
Video evaluation usually involves a good evaluation degree, which reflects how well viewers receive and like the video. When the video quality is high, viewers usually like it; when a viewer watches a video he or she likes, the viewer's attention is concentrated, the concentrated expression is maintained or substantially maintained during viewing, and the viewing time is long. Therefore, this embodiment uses the concentration expression information and the time information as the evaluation parameters for evaluating the good evaluation degree of the video.
When the evaluation type of the video is the good evaluation degree, the expression parameters among the evaluation parameters may include only concentration expression information; when the evaluation type of the video further includes a video grade, the expression parameters further include emotional expression information. The emotional expression information is acquired as follows:
first determine the video type of the video to be evaluated, then determine the emotional expression type according to the video type, and then acquire emotional expression information according to the determined emotional expression type. For example, when the video type of the video to be evaluated is comedy, it may be determined that comedy corresponds to the happy expression, and happy expression information is then acquired.
S202, determining the good rating of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
The server acquires the evaluation parameters from the terminal and then determines the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. A dependency relationship in this embodiment is a relationship in which a change in one parameter causes, or may cause, a change in another parameter. For example, the number of concentration expressions increases as the viewer's viewing time increases, so there is a dependency relationship between the number of concentration expressions and the viewing duration. Because the video duration is fixed, there is no dependency relationship between the number of concentration expressions and the video duration.
An optional method of determining the good evaluation degree comprises: determining the good evaluation degree of the video to be evaluated according to the ratio of the concentration expression information to the viewing duration in the time information.
Furthermore, the viewing duration of a video can itself reflect how well the video is liked: the higher the quality of the video, the more it arouses the viewer's desire to watch, and the longer the viewing duration. The good evaluation degree of this embodiment therefore also takes into account the proportion of the viewing duration within the video duration. As shown in fig. 4, the method for determining the good evaluation degree comprises:
s2021, determining a first concentration value according to the ratio of the concentration expression information to the watching duration in the time information.
The concentration expression information may be quantified by number of occurrences, by time, or the like, as long as the quantity expresses how much the concentration expression appears. This embodiment uses the number of occurrences.
When the good evaluation degree of the video is measured by the number of occurrences of concentration expressions, the dependency relationship between that count and the viewing duration must be considered: the count is only meaningful relative to how long the viewer actually watched. This embodiment therefore determines the first concentration value according to the ratio of the number of occurrences of concentration expressions to the viewing duration.
S2022, determining a second concentration value according to the ratio of the watching time length to the video time length in the time information.
S2023, determining the good rating of the video to be evaluated according to the first concentration value and the second concentration value.
Although the first concentration value and the second concentration value can both reflect the good evaluation degree of the video to be evaluated, their contribution weights to the good evaluation degree may differ. This embodiment reflects these different influences through a first weight contribution value and a second weight contribution value, specifically:
determining a first weight contribution value of the concentration expression information to the good evaluation degree and a second weight contribution value of the viewing duration to the good evaluation degree; determining weighted concentration expression information according to the concentration expression information and the first weight contribution value; determining a weighted viewing duration according to the viewing duration and the second weight contribution value; determining the first concentration value according to the ratio of the weighted concentration expression information to the weighted viewing duration; and determining the second concentration value according to the ratio of the weighted viewing duration to the video duration.
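Under one plausible reading of the steps above (assuming concentration is counted per captured frame at the 3 s interval of embodiment one, and that each weight scales its own ratio — both are assumptions, since the patent leaves the arithmetic to an image), the two concentration values might be computed as:

```python
def concentration_values(cc, ut, t, w_cc=0.5, w_ut=0.5, frame_interval=3.0):
    """Sketch of the first/second concentration values for one viewer.

    cc: number of captured frames showing a concentration expression
    ut: viewing duration in seconds; t: video duration in seconds
    w_cc, w_ut: assumed weight contribution values (0.5 each, as in the
    worked example of this embodiment).
    """
    frames = ut / frame_interval                     # frames captured while viewing
    first = w_cc * (cc / frames) if frames else 0.0  # weighted concentration share
    second = w_ut * (ut / t)                         # weighted share of the video watched
    return first, second

# A viewer who watched 60 s of a 120 s video and showed concentration
# in 10 of the 20 captured frames:
print(concentration_values(cc=10, ut=60, t=120))  # (0.25, 0.25)
```

Both values lie in [0, w] for their respective weights, so their sum stays on a 0-to-1 scale, matching the averaged concentration used later.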
Evaluating the good evaluation degree through the first concentration value and the second concentration value fully considers the relations among the number of occurrences of the concentration expression, the viewing duration and the video duration. Because concentration expressions and viewing behaviour arise subconsciously from the viewer's real-time reaction to the video content, this determination method has high accuracy and effectively prevents artificially inflating a video's good evaluation degree by hiring paid raters.
Illustratively, the video duration (T) of video A to be evaluated, each viewer's viewing duration (UT), concentration expression count (CC), and the number of viewers (UN) are obtained. Assume the weight of the viewing duration is 0.5, the weight of the concentration count is 0.5, and the face-image frame interval is 3 s.
Calculation of the good evaluation degree: the concentration (OC) of a single viewer is:

OC = 0.5 × CC / (UT / 3) + 0.5 × UT / T

where UT / 3 is the number of face-image frames captured at the 3 s interval.
An average is taken over the concentration values of all viewers, and the goodness rating is derived from this average; for example, an average concentration of 0.8 corresponds to a goodness rating of 8 points (out of 10).
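The averaging step can be sketched as follows. The patent does not spell out exactly how the first and second concentration values combine into a single OC, so this sketch assumes a 0.5/0.5-weighted sum, which is consistent with the worked example (average concentration 0.8 maps to 8 points out of 10):

```python
def goodness_rating(viewers, t, frame_interval=3):
    """viewers: list of (cc, ut) pairs, one per viewer; t: video duration in seconds.

    OC per viewer is taken as the 0.5/0.5-weighted sum of the two concentration
    ratios, which is an assumption consistent with the example, not a quoted formula.
    """
    def oc(cc, ut):
        return 0.5 * (cc * frame_interval / ut) + 0.5 * (ut / t)

    # Average the per-viewer concentration values, then scale to a 10-point rating.
    avg = sum(oc(cc, ut) for cc, ut in viewers) / len(viewers)
    return avg * 10
```

A single viewer with CC = 160 and UT = 600 s watching a 750 s video has both ratios equal to 0.8, giving a rating of 8.0.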
According to the technical scheme of this video evaluation method, evaluation parameters are obtained, the evaluation parameters comprising time information and concentration expression information determined from face images of viewers watching the video to be evaluated; the goodness of the video to be evaluated is then determined according to the dependency relationship between the concentration expression information and the time information. Because the expression information arises from the viewers' unconscious reactions while watching, the goodness of the video to be evaluated is determined accurately and quickly. This solves the technical problem in the prior art that the quality of a video is difficult to reflect objectively and truly, and achieves the technical effect of evaluating the video objectively and accurately.
Example four
Fig. 5 is a flowchart of a video evaluation method according to a fourth embodiment of the present invention. On the basis of any of the above embodiments, this embodiment adds a step of evaluating the video grade after the goodness of the video to be evaluated has been determined according to the dependency relationship between the concentration expression information and the time information. Correspondingly, the method of this embodiment comprises the following steps:
S301, obtaining evaluation parameters, wherein the evaluation parameters comprise time information and concentration expression information and are determined from face images of viewers watching the video to be evaluated.
S302, determining the good rating of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
S303, determining the video grade of the video to be evaluated according to the ratio of the emotional expression measurement to the concentration expression measurement in the evaluation parameters.
The video grade is generally a grading of the video to be evaluated within its video type; for example, if comedies are graded on five levels and a certain comedy has a video grade of 3, its comedy level is medium. It will be appreciated that the evaluation of the video grade depends on the video type, i.e. it is necessary to determine whether the video to be evaluated is a comedy, a thriller, and so on. Therefore, before the video grade evaluation, the video type is determined. Suppose the video to be evaluated is a comedy; the emotional expression corresponding to that video type is then determined. Because comedy corresponds to happiness, the emotional expression corresponding to the video to be evaluated is the happy expression, and the video grade of the video to be evaluated is then determined according to the ratio of the number of occurrences of the happy expression to the number of occurrences of the concentration expression.
It is understood that, in order to improve the accuracy of the video grade evaluation, different weights may be given to the numbers of occurrences of the emotional expression and the concentration expression.
Illustratively, the smiling-face ratio (SN) of a single user is calculated as:

SN = SC / CC
wherein SC is the number of occurrences of the smiling-face expression and CC is the number of occurrences of the concentration expression. The average of the smiling-face ratios of all users is then calculated, and the grade of the video is determined from this average; for example, if the average reaches 10%, the funniness of the video is considered to be level 2 (out of a maximum level of 5).
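A minimal sketch of this grading step follows. The patent gives only one data point (an average smiling-face ratio of 10% maps to level 2 of 5), so the threshold table below is hypothetical, chosen only to reproduce that example:

```python
def video_level(users, max_level=5):
    """users: list of (sc, cc) pairs of smiling-face count and concentration count.

    The threshold table is an assumed mapping; the patent only specifies that an
    average ratio reaching 10% corresponds to level 2 out of 5.
    """
    # Average the per-user smiling-face ratios SN = SC / CC.
    avg_ratio = sum(sc / cc for sc, cc in users) / len(users)
    thresholds = [0.10, 0.20, 0.35, 0.50]  # assumed cut points between levels
    level = 1 + sum(avg_ratio >= th for th in thresholds)
    return min(level, max_level)
```

With two users whose ratios are both 10%, the average is 0.10 and the function returns level 2, matching the example in the text.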
Determining the video grade of the video to be evaluated according to the ratio of the measure of the emotional expression to the measure of the concentration expression in the evaluation parameters accurately indicates the grade of the video within its video type, and provides users with an effective reference index when searching for videos.
EXAMPLE five
Fig. 6 is a flowchart of a video evaluation method according to a fifth embodiment of the present invention. The technical scheme of this embodiment is suitable for automatically evaluating a video, and is particularly suitable for automatic video evaluation performed by a terminal and a server in cooperation. The method can be executed by a video evaluation apparatus, which can be implemented in software and/or hardware and configured in a processor. The method specifically comprises the following steps:
S401, obtaining face images of viewers watching the video to be evaluated.
The terminal acquires the face image of the audience watching the video to be evaluated through a camera arranged on the terminal.
S402, obtaining evaluation parameters according to the face image, wherein the evaluation parameters comprise concentration expression information and time information.
Video evaluation usually involves evaluating the goodness, which reflects how well the audience receives and likes the video. When the video quality is high, viewers usually like it; and when viewers watch a video they like, they concentrate, keep (or largely keep) a concentration expression throughout the viewing, and watch for a long time. Therefore, this embodiment uses the concentration expression information and the time information as the evaluation parameters for evaluating the goodness of the video.
And S403, determining the good rating of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
The terminal sends the acquired evaluation parameters to the server; the server receives them and then determines the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. The dependency relationship in this embodiment refers to a relationship in which a change in one parameter causes, or may cause, a change in another parameter. For example, the number of concentration expressions increases as the viewer's watching time increases, so there is a dependency relationship between the number of concentration expressions and the viewing duration. Because the video duration is fixed, there is no dependency between the number of concentration expressions and the video duration.
According to the technical scheme of this video evaluation method, the terminal obtains face images of viewers watching the video to be evaluated, derives the evaluation parameters from the face images (the evaluation parameters comprising concentration expression information and time information), and sends the evaluation parameters to the server; the server then determines the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information. Because the expression information arises from the viewers' unconscious reactions while watching, the goodness of the video to be evaluated is determined accurately and quickly. This solves the technical problem in the prior art that the quality of a video is difficult to reflect objectively and truly, and achieves the technical effect of evaluating the video objectively and accurately.
EXAMPLE six
Fig. 7 is a block diagram of a video evaluation apparatus according to a sixth embodiment of the present invention. The apparatus is used for executing the video evaluation method provided by any of the above embodiments; it can be implemented in software and/or hardware and configured in a processor. The apparatus comprises:
the evaluation parameter acquisition module 21 is configured to acquire evaluation parameters, where the evaluation parameters include time information and concentration expression information and are determined by a viewer watching a face image of a video to be evaluated;
and the goodness determination module 22 is configured to determine the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
According to the technical scheme of this video evaluation apparatus, evaluation parameters are obtained, the evaluation parameters comprising time information and concentration expression information determined from face images of viewers watching the video to be evaluated; the goodness of the video to be evaluated is then determined according to the dependency relationship between the concentration expression information and the time information. Because the expression information arises from the viewers' unconscious reactions while watching, the goodness of the video to be evaluated is determined accurately and quickly. This solves the technical problem in the prior art that the quality of a video is difficult to reflect objectively and truly, and achieves the technical effect of evaluating the video objectively and accurately.
The video evaluation device provided by the embodiment of the invention can execute the video evaluation method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE seven
Fig. 8 is a schematic structural diagram of a server apparatus according to a seventh embodiment of the present invention, and as shown in fig. 8, the server apparatus includes a processor 101, a memory 102, an input device 103, and an output device 104; the number of the processors 101 in the server device may be one or more, and one processor 101 is taken as an example in fig. 8; the processor 101, the memory 102, the input device 103, and the output device 104 in the server apparatus may be connected by a bus or other means, and the bus connection is exemplified in fig. 8.
The memory 102 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules (for example, the evaluation parameter acquiring module 21 and the goodness-of-evaluation determining module 22) corresponding to the video evaluation method in the embodiment of the present invention. The processor 101 executes various functional applications and data processing of the server device by executing software programs, instructions, and modules stored in the memory 102, that is, implements the above-described video evaluation method.
The memory 102 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 102 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 102 may further include memory located remotely from the processor 101, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 103 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server apparatus.
The output device 104 may include a display device, for example the display screen of a user terminal.
Example eight
An eighth embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a video evaluation method, the method including:
obtaining evaluation parameters, wherein the evaluation parameters comprise time information and concentration expression information and are determined by watching a face image of a video to be evaluated by a viewer;
and determining the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also perform related operations in the video evaluation method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software together with necessary general-purpose hardware, or by hardware alone, though the former is preferable in many cases. Based on this understanding, the technical solutions of the present invention, or the portions thereof that contribute over the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk, or an optical disk of a computer, and includes several instructions that enable a computer device (which may be a personal computer, a server, or a network device) to execute the video evaluation method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the video evaluation apparatus, the included units and modules are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (6)

1. A video evaluation method is applied to a server side and is characterized by comprising the following steps:
obtaining evaluation parameters, wherein the evaluation parameters comprise time information and concentration expression information and are determined by watching a face image of a video to be evaluated by a viewer;
determining the good evaluation degree of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information, wherein the method comprises the following steps: determining a first concentration value according to the ratio of the concentration expression information to the watching duration in the time information; determining a second concentration value according to the ratio of the watching duration to the video duration in the time information; and determining the good evaluation degree of the video to be evaluated according to the first concentration value and the second concentration value.
2. The method of claim 1, wherein determining a first concentration value based on a ratio of the concentration expression information to a viewing duration in the temporal information comprises:
determining a first weight contribution value of the concentration expression information to the goodness of appreciation and a second weight contribution value of the watching duration to the goodness of appreciation;
determining weighted concentration expression information according to the concentration expression information and the first weight contribution value;
determining a weighted viewing duration according to the viewing duration and the second weight contribution value;
and determining a first concentration value according to the ratio of the weighted concentration expression information to the weighted watching duration.
3. The method of claim 2, wherein determining a second concentration value based on the ratio of the viewing duration to the video duration comprises:
and determining a second concentration value according to the ratio of the weighted watching duration to the video duration.
4. The method according to claim 1, wherein after determining the goodness of the video to be evaluated according to the dependency relationship between the concentration expression information and the time information, the method further comprises:
and determining the video grade of the video to be evaluated according to the ratio of the measure of the emotional expression to the measure of the concentration expression in the evaluation parameters.
5. A server device, characterized in that the server device comprises:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the video evaluation method of any of claims 1-4.
6. A storage medium containing computer-executable instructions for performing the video evaluation method of any of claims 1-4 when executed by a computer processor.
CN201810217801.1A 2018-03-16 2018-03-16 Video evaluation method, device, equipment and storage medium Active CN108337563B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810217801.1A CN108337563B (en) 2018-03-16 2018-03-16 Video evaluation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810217801.1A CN108337563B (en) 2018-03-16 2018-03-16 Video evaluation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108337563A CN108337563A (en) 2018-07-27
CN108337563B true CN108337563B (en) 2020-09-11

Family

ID=62930844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810217801.1A Active CN108337563B (en) 2018-03-16 2018-03-16 Video evaluation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108337563B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110888997A (en) * 2018-09-10 2020-03-17 北京京东尚科信息技术有限公司 Content evaluation method and system and electronic equipment
CN109361957B (en) * 2018-10-18 2021-02-12 广州酷狗计算机科技有限公司 Method and device for sending praise request
CN109753889A (en) * 2018-12-18 2019-05-14 深圳壹账通智能科技有限公司 Service evaluation method, apparatus, computer equipment and storage medium
CN109787977B (en) * 2019-01-17 2022-09-30 深圳壹账通智能科技有限公司 Product information processing method, device and equipment based on short video and storage medium
CN111666793A (en) * 2019-03-08 2020-09-15 阿里巴巴集团控股有限公司 Video processing method, video processing device and electronic equipment
CN110020625A (en) * 2019-04-09 2019-07-16 昆山古鳌电子机械有限公司 A kind of service evaluation system
CN110147936A (en) * 2019-04-19 2019-08-20 深圳壹账通智能科技有限公司 Service evaluation method, apparatus based on Emotion identification, storage medium
CN112637688B (en) * 2020-12-09 2021-09-07 北京意图科技有限公司 Video content evaluation method and video content evaluation system
CN112565914B (en) * 2021-02-18 2021-06-04 北京世纪好未来教育科技有限公司 Video display method, device and system for online classroom and storage medium
CN113256465B (en) * 2021-06-04 2024-07-02 深圳市国华在线教育科技有限公司 Remote education training system based on blockchain technology
CN113411673B (en) * 2021-07-08 2022-06-28 深圳市古东管家科技有限责任公司 Intelligent short video play recommendation method, system and computer storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2678820A4 (en) * 2011-02-27 2014-12-03 Affectiva Inc Video recommendation based on affect
CN103716661A (en) * 2013-12-16 2014-04-09 乐视致新电子科技(天津)有限公司 Video scoring reporting method and device
CN105959737A (en) * 2016-06-30 2016-09-21 乐视控股(北京)有限公司 Video evaluation method and device based on user emotion recognition
CN107133892A (en) * 2017-03-29 2017-09-05 华东交通大学 The real-time estimating method and system of a kind of network Piano lesson
CN107590459A (en) * 2017-09-11 2018-01-16 广东欧珀移动通信有限公司 The method and apparatus for delivering evaluation

Also Published As

Publication number Publication date
CN108337563A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108337563B (en) Video evaluation method, device, equipment and storage medium
US9467744B2 (en) Comment-based media classification
CN111767429B (en) Video recommendation method and device and electronic equipment
CN111711828A (en) Information processing method and device and electronic equipment
US11455675B2 (en) System and method of providing object for service of service provider
CN112333556B (en) Control method for monitoring video transmission bandwidth, terminal equipment and readable storage medium
CN106454536B (en) Method and device for determining information recommendation degree
CN111754107A (en) Anchor value evaluation method and device, electronic equipment and readable storage medium
CN109241346B (en) Video recommendation method and device
EP3149615A1 (en) Information processing device, information processing method, and program
KR20180117163A (en) Optimizing content distribution using models
CN109086813B (en) Determination method, device and equipment for similarity of anchor and storage medium
CN111405363A (en) Method and device for identifying current user of set top box in home network
CN109462765A (en) A kind of recommendation page downloading and display methods and device
US20220036427A1 (en) Method for managing immersion level and electronic device supporting same
CN105956061B (en) Method and device for determining similarity between users
CN109688217B (en) Message pushing method and device and electronic equipment
CN114222175A (en) Barrage display method and device, terminal equipment, server and medium
JP6069246B2 (en) Information processing apparatus, information processing apparatus control method, and program
JP2014222474A (en) Information processor, method and program
CN110139160B (en) Prediction system and method
CN103581744B (en) Obtain the method and electronic equipment of data
KR101496181B1 (en) Methods and apparatuses for a content recommendations using content themes
CN113473116B (en) Live broadcast quality monitoring method, device and medium
CN113365095B (en) Live broadcast resource recommendation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant