CN104463231A - Error correction method used after facial expression recognition content is labeled - Google Patents
- Publication number
- CN104463231A (application CN201410844999.8A)
- Authority
- CN
- China
- Prior art keywords
- attribute information
- video
- subject matter
- error correction
- content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an error correction method applied after facial expression recognition content is labeled. The method includes the following steps: facial expression information generated while users watch videos is acquired through an acquisition device arranged at the client; the videos are classified and analyzed according to the expression information acquired within preset time periods, yielding the attribute information corresponding to each video; when the attribute information derived for a video is inconsistent with the subject-matter attribute information already established for the same video, the labeled subject-matter attribute information is updated to the newly derived attribute information. With this method, the subject-matter labels of videos can be corrected from each user's viewing feedback, allowing subsequent audiences to select content more accurately.
Description
Technical field
The present application relates to the field of facial expression recognition, and in particular to a method for performing error correction after expression-recognition content labeling.
Background art
In the prior art, when people experience different emotions, in most cases they express them through facial expressions, such as happiness, anger, grief, joy, excitement, or indifference. Therefore, by analyzing changes in a user's facial expression, one can understand the user's mood and impressions in a particular scene, and thereby characterize the atmosphere that the scene creates.
Existing video content labeling is highly subjective: the person labeling a video's subject matter does so according to his or her own understanding and impression of the content, and this definition may deviate considerably from the audience's judgment (for example, whether a historical film containing fight scenes should also count as an action film).
As video resources grow ever more abundant, viewers must filter their preferred content out of a large number of videos. For film and television, viewers usually choose content by subject matter (e.g. action, romance, horror, or suspense). The subject matter is usually supplied by the producer or the media and is highly subjective; some producers even attach irrelevant subject-matter labels to boost ratings. As a result, the content can deviate greatly from viewers' expectations and annoy users. If the subject-matter labels of a video could instead be corrected from each user's viewing feedback, subsequent audiences could select content more accurately. How to solve this is therefore a valuable technical problem urgently awaiting a solution.
Summary of the invention
In view of this, the technical problem to be solved by this application is to provide a method for performing error correction after expression-recognition content labeling, which corrects the subject-matter labels of a video according to each user's viewing feedback, so that subsequent audiences can select content more accurately.
To solve the above technical problem, the application adopts the following technical solution: a method for performing error correction after expression-recognition content labeling, characterized in that it comprises:
collecting, through a collecting device arranged at the client, the corresponding expression information of a user while watching a video;
classifying and analyzing the different videos according to the expression information collected within a preset time period, to derive the attribute information corresponding to each video;
when the attribute information derived for a video is inconsistent with the subject-matter attribute information already established for the same video, updating the subject-matter attribute information labeled on the video to the derived attribute information.
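The three steps above can be sketched as a small Python loop. This is only an illustrative sketch: the application does not specify data formats, the expression classifier, or the attribute vocabulary, so every name here (`EXPRESSION_TO_ATTRIBUTE`, `derive_attribute`, `correct_label`) is a hypothetical assumption.

```python
# Illustrative sketch of the claimed error-correction loop.
# All function and field names are hypothetical; the patent does not
# specify data formats or algorithms.

from collections import Counter

# Hypothetical mapping from a dominant facial expression to a video attribute.
EXPRESSION_TO_ATTRIBUTE = {
    "smile": "comedy",
    "fear": "horror",
    "neutral": "feature film",
}

def derive_attribute(expressions):
    """Step 2: classify a video from the expressions collected in one period."""
    dominant, _ = Counter(expressions).most_common(1)[0]
    return EXPRESSION_TO_ATTRIBUTE.get(dominant, "unknown")

def correct_label(video, expressions):
    """Step 3: update the stored subject-matter label if it disagrees."""
    derived = derive_attribute(expressions)
    if derived != video["subject_matter"]:
        video["subject_matter"] = derived  # overwrite the labeled attribute
    return video

video = {"id": "v1", "subject_matter": "feature film"}
video = correct_label(video, ["smile", "smile", "neutral", "smile"])
print(video["subject_matter"])  # prints "comedy": the dominant smiles relabel it
```

In practice the mapping would be replaced by whatever expression classifier and attribute taxonomy the operator actually uses; the control flow, not the mapping, is the point of the sketch.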
Preferably, the method further comprises: storing the updated subject-matter attribute information of the video in a feature database and releasing it.
Preferably, the step of updating the subject-matter attribute information labeled on the video to the derived attribute information when the two are inconsistent is further specified as:
when the similarity between the attribute information derived for a video and the subject-matter attribute information already established for the same video is below 50%, updating the subject-matter attribute information of the video to the derived attribute information; when the similarity is 50% or above, retaining the existing subject-matter attribute information labeled on the video.
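The 50% rule could be sketched as follows. The application does not define how the similarity (the "matching degree") between two sets of attribute information is computed, so a simple Jaccard overlap between attribute keyword sets is assumed here purely for illustration.

```python
# Hypothetical sketch of the 50% similarity rule; the patent does not
# define the similarity measure, so a simple Jaccard overlap between
# attribute keyword sets is assumed.

def similarity(derived, established):
    """Fraction of overlap between two sets of attribute keywords (0..1)."""
    a, b = set(derived), set(established)
    return len(a & b) / len(a | b) if a | b else 1.0

def update_if_needed(video, derived_attrs, threshold=0.5):
    """Replace the label only when similarity falls below the threshold."""
    if similarity(derived_attrs, video["subject_matter"]) < threshold:
        video["subject_matter"] = list(derived_attrs)  # below 50%: update
    return video  # 50% or above: retain the existing label

v = {"subject_matter": ["feature film"]}
v = update_if_needed(v, ["comedy"])  # similarity 0.0 < 0.5, so the label is updated
print(v["subject_matter"])           # prints ['comedy']
```

Note the boundary behavior matches the claim: a similarity of exactly 50% is "50% or above" and the existing label is retained.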
Preferably, the preset time period is further specified as: an hour, a month, or a day.
Preferably, the feature database is arranged in a server.
Compared with the prior art, the method for performing error correction after expression-recognition content labeling described in this application achieves the following effects:
(1) By capturing changes in users' expressions, the invention can, on the one hand, classify expression features against the existing subject-matter system; on the other hand, it can also cluster expression changes by expression similarity and, on the basis of the original subject matter, discover new subject-matter definitions that better fit what is grouped together, evolving along with the culture;
(2) The invention can classify videos by subject matter more accurately, improving the quality of video services. The whole process is automated, with zero disturbance to users and no extra cost; at the same time, by analyzing and mining massive amounts of video, knowledge can be extracted that further helps the existing subject-matter system develop.
Of course, a product implementing this application does not necessarily need to achieve all of the above technical effects simultaneously.
Brief description of the drawings
The accompanying drawings described herein are provided to give further understanding of the application and form a part of it; the schematic embodiments of the application and their descriptions serve to explain the application and do not improperly limit it. In the drawings:
Fig. 1 is a flowchart of the method for performing error correction after expression-recognition content labeling according to an embodiment of the present application.
Detailed description of the embodiments
Certain terms are used throughout the specification and claims to refer to particular components. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component by different names. This specification and the claims do not distinguish components by differences in name, but by differences in function. "Comprising", as used throughout the specification and claims, is an open-ended term and should therefore be interpreted as "including but not limited to". "Roughly" means that, within an acceptable error range, a person skilled in the art can solve the technical problem within a certain margin of error and substantially achieve the stated technical effect. Furthermore, the term "coupled" here includes any direct or indirect means of electrical coupling. Thus, if a first device is described as being coupled to a second device, the first device may be directly electrically coupled to the second device, or indirectly electrically coupled to it through other devices or coupling means. The subsequent description sets out preferred embodiments of the application; however, the description serves to illustrate the general principles of the application and is not intended to limit its scope. The scope of protection of the application is defined by the appended claims.
Fig. 1 shows a specific embodiment of the method for performing error correction after expression-recognition content labeling according to an embodiment of the present application. In this embodiment the method is embodied on the server side. The method of this embodiment comprises the following steps:
Step 101: collect, through a collecting device arranged at the client, the corresponding expression information of a user while watching a video;
Step 102: classify and analyze the different videos according to the expression information collected within a preset time period, to derive the attribute information corresponding to each video;
Step 103: when the attribute information derived for a video is inconsistent with the subject-matter attribute information already established for the same video, update the subject-matter attribute information labeled on the video to the derived attribute information.
Following step 103, the updated subject-matter attribute information of the video is stored in a feature database (i.e. on the server side) and released.
Step 104: repeat steps 101 to 103 continuously. (The purpose of this step is to keep optimizing the results.)
Specifically, step 103 may also be: when the similarity between the attribute information derived for a video and the subject-matter attribute information already established for the same video is below 50%, update the subject-matter attribute information of the video to the derived attribute information; when the similarity is 50% or above, retain the existing subject-matter attribute information labeled on the video (i.e. there is no need to update it to the attribute information derived in step 102).
With reference to the embodiment shown in Fig. 1, a specific example is described here:
Step 1: through a collecting device (a camera) arranged at the client, collect the corresponding expression information of a user watching a video; if the video is a comedy, the collected expression information is smiling-face information;
Step 2: for this comedy video, classify and analyze all the expression information collected within a one-hour time period, which shows that the attribute information corresponding to this video is mainly "comedy";
Step 3: when the attribute information derived for this video is inconsistent with the subject-matter attribute information already established for the same video, for example when the derived attribute information is "comedy" while the established subject-matter attribute information is "feature film", change the stored "feature film" in the subject-matter attribute information to "comedy".
Following step 3, the updated subject-matter attribute information of the video, now "comedy", is stored in the feature database (i.e. on the server side) and released.
Step 4: repeat steps 1 to 3. (The purpose of this step is to keep optimizing the results.)
As another example, for horror films, users' expressions also show typical changes, such as changes in pupil size. Some films are in fact not very frightening and fall far short of users' expectations; for such films, indexes such as a matching degree can likewise be set on the corresponding subject matter, to identify the film's category attributes.
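A "matching degree" index of the kind mentioned for horror films might be sketched like this. The reaction features, their expected values, and the scoring formula are all invented for illustration and are not specified in the application.

```python
# Hypothetical "matching degree" between observed audience reactions and
# the reactions a labeled subject matter would predict. Feature names and
# numeric values are illustrative assumptions only.

def matching_degree(observed, expected):
    """Mean closeness (0..1) of observed reaction intensities to expectations."""
    keys = list(expected)
    return sum(1.0 - abs(observed.get(k, 0.0) - expected[k]) for k in keys) / len(keys)

# Expected reactions for a film labeled "horror": strong pupil change, little smiling.
expected_horror = {"pupil_change": 0.8, "smiling": 0.1}

# A film whose audience barely reacts falls far short of the horror label.
observed = {"pupil_change": 0.05, "smiling": 0.6}
score = matching_degree(observed, expected_horror)
print(round(score, 3))  # about 0.375: below the 50% threshold, so the label would be reconsidered
```

A real system would derive the expected reaction profile per subject matter from historical viewing data rather than hard-coding it.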
Those skilled in the art should understand that embodiments of the application may be provided as a method, a device, or a computer program product. Accordingly, the application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The above description shows and describes several preferred embodiments of the application. However, as noted above, it should be understood that the application is not limited to the forms disclosed herein, which should not be regarded as excluding other embodiments; it can be used in various other combinations, modifications, and environments, and can be changed within the scope of the inventive concept described herein through the above teachings or the skill or knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the application shall all fall within the scope of protection of the appended claims.
Claims (5)
1. A method for performing error correction after expression-recognition content labeling, characterized in that it comprises:
collecting, through a collecting device arranged at the client, the corresponding expression information of a user while watching a video;
classifying and analyzing the different videos according to the expression information collected within a preset time period, to derive the attribute information corresponding to each video;
when the attribute information derived for a video is inconsistent with the subject-matter attribute information already established for the same video, updating the subject-matter attribute information labeled on the video to the derived attribute information.
2. The method for performing error correction after expression-recognition content labeling according to claim 1, characterized in that it further comprises: storing the updated subject-matter attribute information of the video in a feature database and releasing it.
3. The method for performing error correction after expression-recognition content labeling according to claim 1, characterized in that the step of updating the subject-matter attribute information labeled on the video to the derived attribute information when the two are inconsistent is further specified as:
when the similarity between the attribute information derived for a video and the subject-matter attribute information already established for the same video is below 50%, updating the subject-matter attribute information of the video to the derived attribute information; when the similarity is 50% or above, retaining the existing subject-matter attribute information labeled on the video.
4. The method for performing error correction after expression-recognition content labeling according to claim 1, characterized in that the preset time period is further specified as: an hour, a month, or a day.
5. The method for performing error correction after expression-recognition content labeling according to claim 1, characterized in that the feature database is arranged in a server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410844999.8A CN104463231A (en) | 2014-12-31 | 2014-12-31 | Error correction method used after facial expression recognition content is labeled |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410844999.8A CN104463231A (en) | 2014-12-31 | 2014-12-31 | Error correction method used after facial expression recognition content is labeled |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104463231A true CN104463231A (en) | 2015-03-25 |
Family
ID=52909245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410844999.8A Pending CN104463231A (en) | 2014-12-31 | 2014-12-31 | Error correction method used after facial expression recognition content is labeled |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104463231A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030118974A1 (en) * | 2001-12-21 | 2003-06-26 | Pere Obrador | Video indexing based on viewers' behavior and emotion feedback |
CN103339649A (en) * | 2011-02-27 | 2013-10-02 | 阿弗科迪瓦公司 | Video recommendation based on affect |
CN102402765A (en) * | 2011-12-27 | 2012-04-04 | 纽海信息技术(上海)有限公司 | Electronic-commerce recommendation method based on user expression analysis |
CN103530788A (en) * | 2012-07-02 | 2014-01-22 | 纬创资通股份有限公司 | Multimedia evaluating system, multimedia evaluating device and multimedia evaluating method |
CN103049479A (en) * | 2012-11-26 | 2013-04-17 | 北京奇虎科技有限公司 | Method and system for generating online video label |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106503744A (en) * | 2016-10-26 | 2017-03-15 | 长沙军鸽软件有限公司 | Input expression in chat process carries out the method and device of automatic error-correcting |
CN107454346A (en) * | 2017-07-03 | 2017-12-08 | 李洪海 | Movie data analytic method, video production template recommend method, apparatus and equipment |
CN109902606A (en) * | 2019-02-21 | 2019-06-18 | 维沃移动通信有限公司 | A kind of operating method and terminal device |
CN109902606B (en) * | 2019-02-21 | 2021-03-12 | 维沃移动通信有限公司 | Operation method and terminal equipment |
CN110650364A (en) * | 2019-09-27 | 2020-01-03 | 北京达佳互联信息技术有限公司 | Video attitude tag extraction method and video-based interaction method |
CN110650364B (en) * | 2019-09-27 | 2022-04-01 | 北京达佳互联信息技术有限公司 | Video attitude tag extraction method and video-based interaction method |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication |
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20150325