WO2019184299A1 - Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal - Google Patents

Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal Download PDF

Info

Publication number
WO2019184299A1
WO2019184299A1 · PCT/CN2018/109820 · CN2018109820W
Authority
WO
WIPO (PCT)
Prior art keywords
micro
expression
television
film
drama
Prior art date
Application number
PCT/CN2018/109820
Other languages
French (fr)
Chinese (zh)
Inventor
王鲜
Original Assignee
深圳创维-Rgb电子有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳创维-Rgb电子有限公司 filed Critical 深圳创维-Rgb电子有限公司
Publication of WO2019184299A1 publication Critical patent/WO2019184299A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282Rating or review of business operators or products
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression

Definitions

  • the present disclosure relates to the field of consumer electronic products, for example, to a film and television scoring method based on micro-expression recognition, a storage medium, and a smart terminal.
  • the technical problem to be solved by the present disclosure is to provide a film and television scoring method, a storage medium, and a smart terminal based on micro-expression recognition, aiming to solve the problem in the prior art that film and television scores are entered manually, which makes the scores overly subjective and the resulting ratings not authentic.
  • a film and television scoring method based on micro-expression recognition comprising:
  • the movie drama is automatically scored according to the analysis result and the acquired micro-expression data, and the final score is output.
  • obtaining the micro-expressions of the user while watching, identifying and analyzing the micro-expressions, and outputting the analysis result specifically includes:
  • a camera in the smart terminal acquires, in real time, face images of the user while watching the film;
  • the output analysis result further includes:
  • the face images are preprocessed so that the preprocessed face images have consistent pixel dimensions.
  • the preprocessing the facial image specifically includes:
  • the Adaboost algorithm is used to detect the face in the image and crop it;
  • a bilinear interpolation algorithm is used to normalize the size of the cropped image.
  • the micro-expression image sequence includes: a start frame, a duration, and an end frame of the appearance of the micro-expression.
  • the extracting the feature of the micro-expression image sequence includes:
  • the dynamic spatiotemporal texture features of the micro-expression image sequence are extracted using the CBP-TOP algorithm.
  • the analysis result includes: a type of micro-expression in a user watching process, and an actual proportion of each micro-expression; the types of the micro-expression include: happiness, surprise, disgust, sadness, fear, and anger.
  • before the obtaining of the category of the currently playing film and television drama and the obtaining of the micro-expression data corresponding to the category of the current film and television drama from the preset micro-expression database, the method further includes:
  • the micro-expression database includes: a theoretical micro-expression type corresponding to each film and television drama and a theoretical proportion of each micro-expression;
  • the categories of film and television dramas include: comedy, adventure, horror, crime, science fiction, martial arts, and love.
  • the obtaining the category of the currently played movie and TV drama, and obtaining the micro-expression data corresponding to the category of the current movie and television drama from the preset micro-expression database specifically includes:
  • the smart terminal acquires the type of the currently played movie drama
  • the automatically scoring the movie drama according to the analysis result and the acquired micro-expression data, and outputting the final score specifically includes:
  • the film and television drama is automatically given a passing score; the passing score is 6 points.
  • the automatically scoring the movie drama according to the analysis result and the acquired micro-expression data, and outputting the final score further includes:
  • the score of the film and television drama is reduced below the passing score.
  • the method further includes:
  • if the current user's viewing duration is less than the preset standard duration, the user's micro-expressions cannot be used to score the film and television drama.
  • the method further includes:
  • the smart terminal encrypts the user's micro-expression and the user's final score.
  • a storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the steps of the micro-expression recognition-based film and television scoring method of any of the above.
  • An intelligent terminal comprising: a processor and a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to invoke the instructions in the storage medium to perform the steps of the micro-expression recognition-based film and television scoring method according to any of the above.
  • the present disclosure analyzes the user's micro-expressions while watching a film and automatically scores the film according to changes in the micro-expressions, without the user manually inputting a score, so that the scoring results are more authentic and malicious or false scoring is effectively avoided.
  • FIG. 1 is a flow chart of a first preferred embodiment of a method for scoring a film based on micro-expression recognition in the present disclosure.
  • FIG. 2 is a flow chart of a micro-expression recognition method of the present disclosure.
  • FIG. 3 is a flow chart of a second preferred embodiment of a method for scoring a film based on micro-expression recognition according to the present disclosure.
  • FIG. 4 is a functional block diagram of a smart terminal of the present disclosure.
  • FIG. 1 is a flowchart of a first preferred embodiment of a method for scoring a film based on micro-expression recognition.
  • the film and television scoring method based on micro-expression recognition comprises the following steps:
  • Step S100: Obtain the micro-expressions of the user while watching the film, identify and analyze the micro-expressions, and output the analysis result.
  • the step S100 specifically includes:
  • the camera in the smart terminal acquires the face image of the user when watching the movie in real time
  • In this embodiment, a camera for capturing the user's face images is installed on the smart terminal in advance.
  • the embodiment discloses a specific flowchart of the micro-expression recognition, as shown in FIG. 2 .
  • Step 210 Acquire a face image.
  • the camera on the smart terminal will capture the face image of the user when watching the movie in real time.
  • Step 220 Pre-processing the face image.
  • an Adaboost (iterative algorithm) algorithm is used to detect a face in a micro-expression image, and the cropping is performed, and the size of the image is normalized by using a bilinear difference algorithm.
  • the size of the pre-processed face image is 180 ⁇ . 180 pixels, the image pixels after pre-processing are consistent, which is beneficial to the extraction of micro-expressions in the subsequent steps.
  • Step 230: Extract micro-expression features.
  • the face image is detected, and the micro-expression image sequence is marked.
  • a regression model is built using the Birnbaum-Saunders distribution curve to mark the facial micro-expression image sequence, including the start frame Apex1, the duration, and the end frame Apex2 of the facial micro-expression, thereby completing facial micro-expression detection.
  • the dynamic spatiotemporal texture feature of the face micro-expression sequence is extracted using the CBP-TOP algorithm.
  • the 8 frames of the facial micro-expression sequence are divided into 16 × 16 non-overlapping blocks, CBP-TOP features are extracted from each sub-block, a histogram of the CBP-TOP features is computed for that sub-block, and finally the feature histograms of all sub-blocks are concatenated to form the feature histogram of the entire facial micro-expression sequence, thereby extracting the dynamic spatiotemporal texture features of the sequence, that is, obtaining the deformation information of the sequence in the XY plane and the motion information in the XT and YT planes.
  • the CBP-TOP feature is a texture feature existing in the image itself, and the extraction process is calculated by the CBP code.
  • Step 240 feature judgment.
  • This embodiment uses an ELM classifier for training and prediction, firstly to verify the validity of the CBP-TOP algorithm, and secondly to classify and identify the features of the facial micro-expression sequence, so as to determine which category of facial micro-expression the extracted features belong to.
  • the category to which the micro-expression belongs is thereby identified.
  • the CBP-TOP statistic is computed as CBP-TOP(i, j) = Σ_{x, y, t} I{ f_j(x, y, t) = i },  i = 0, …, n_j − 1;  j = 0, 1, 2, where
  • n_j is the number of patterns produced by the CBP operator in the j-th plane, with j = 0, 1, 2 denoting the XY, XT and YT planes, and
  • f_j(x, y, t) is the decimal value of the CBP code of the centre pixel (x, y, t) in the j-th plane; I(f) equals 1 if f is true and 0 otherwise.
  • Step 250 Output the analysis result.
  • After identifying the types of micro-expressions, the intelligent terminal counts the types and quantities of micro-expressions that appear during the user's viewing and outputs the analysis result.
  • the analysis results include: the types of micro-expressions in the process of viewing the user, and the actual proportion of each micro-expression.
  • the types of micro-expressions are divided into happiness, surprise, disgust, sadness, fear, and anger.
  • the above method for micro-expression recognition is more accurate, so that the smart terminal can accurately extract the micro-expressions from the face images, identify and classify them, and determine which category the micro-expressions belong to.
  • In step S200, the category of the currently played film and television drama is obtained, and the micro-expression data corresponding to the category of the current film and television drama is obtained from the preset micro-expression database.
  • the step S200 specifically includes:
  • the smart terminal acquires the type of the currently played movie drama
  • the embodiment further establishes in the smart terminal a micro-expression database for acquiring corresponding micro-expression data according to the currently played movie drama category.
  • the micro-expression database includes the theoretical micro-expression types corresponding to each category of film and television drama and the theoretical proportion of each micro-expression; that is, the types of micro-expressions that a user is expected to show for each category of drama, and the theoretical proportion of each, are pre-defined and used as a reference for scoring the film and television drama in subsequent steps.
  • the categories of film and television dramas in this embodiment include: comedy, adventure, horror, crime, science fiction, martial arts, and love.
  • the smart terminal acquires the currently-played movie drama category, and searches for the theoretical micro-expression type corresponding to the category and the theoretical proportion of each micro-expression from the preset micro-expression database according to the movie drama category.
  • For example, if the currently played film and television drama is a science-fiction drama, its theoretical micro-expressions are surprise and pleasure, with the surprised micro-expression accounting for a theoretical proportion of 40% and the pleasant micro-expression accounting for a theoretical proportion of 20%.
  • In step S300, the film and television drama is automatically scored according to the analysis result and the acquired micro-expression data, and the final score is output.
  • the step S300 specifically includes:
  • if the types of micro-expressions during the user's viewing exactly match the theoretical micro-expression types, and the actual proportion of each micro-expression equals its theoretical proportion, the film and television drama is automatically given a passing score.
  • this embodiment presets a standard condition: when the types of micro-expressions during the user's viewing and the proportion of each micro-expression satisfy the standard condition, the film and television drama is given a passing score.
  • the passing score is 6 points (out of a total of 10).
  • the types of micro-expressions during the user's viewing and the actual proportion of each micro-expression are compared with the theoretical micro-expression types obtained from the micro-expression data and the theoretical proportion of each micro-expression, so as to determine whether the user's micro-expressions meet the condition; if they do, a score of 6 is given.
  • For example, the theoretical micro-expression of a comedy drama is pleasure: if the user's pleasant micro-expressions reach 60%, a score of 6 can be given. The theoretical micro-expression of an adventure drama is surprise: if the user's surprised micro-expressions reach 60%, a score of 6 can be given.
  • The theoretical micro-expressions of a horror drama are fear and surprise: if the user's fearful micro-expressions reach 40% and surprised micro-expressions reach 20%, a score of 6 can be given. The theoretical micro-expressions of a crime drama are disgust, fear, and surprise: if the user's disgusted micro-expressions reach 10%, fearful micro-expressions reach 20%, and surprised micro-expressions reach 30%, a score of 6 can be given. The theoretical micro-expressions of a science-fiction drama are surprise and pleasure: if the user's surprised micro-expressions reach 40% and pleasant micro-expressions reach 20%, a score of 6 can be given. The theoretical micro-expression of a martial-arts drama is surprise: if the user's surprised micro-expressions reach 60%, a score of 6 can be given. The theoretical micro-expression of a love drama is pleasure or sadness: if the user's pleasant or sad micro-expressions reach 60%, a score of 6 can be given.
  • the proportion of micro-expressions in each type of movie can be set according to the actual situation.
  • if the types of micro-expressions during the user's viewing exceed the theoretical micro-expression types, and the actual proportion of each micro-expression is higher than its theoretical proportion, the score of the film and television drama is increased above the passing score (6 points); if the types of micro-expressions during the user's viewing fall short of the theoretical micro-expression types, and the actual proportion of each micro-expression is lower than its theoretical proportion, the score is reduced below the passing score (6 points). The final score ranges from 0 to 10 points.
  • In order to further reduce false and malicious scoring, this embodiment also obtains the user's viewing duration. If the current user's viewing duration is less than the preset standard duration, the user has not watched the complete film, and the user's micro-expressions cannot be used to score the film and television drama, ensuring that all scoring users are valid and authentic.
  • the embodiment also performs data encryption processing on the user's micro-expression and the user's score to ensure the security of the user data.
  • the present disclosure also provides a second preferred embodiment of the film and television scoring method based on micro-expression recognition. As shown in FIG. 3, the method includes the following steps:
  • Step 310 Collect a face image.
  • the smart terminal collects the face image of the viewing user by using a preset camera.
  • Step 320 micro-expression recognition.
  • Micro-expression features are identified from the collected face images.
  • Step 330 Record various micro-expression categories and proportions.
  • Step 340 Determine whether the movie ends. When yes, step 350 is performed; when no, step 310 is performed.
  • Step 350: Analysis processing. The types of micro-expressions during the user's viewing and the actual proportion of each micro-expression are compared with the theoretical micro-expression types obtained from the micro-expression data and the theoretical proportion of each micro-expression, so as to determine whether the user's micro-expressions meet the condition, and the score is given automatically.
  • Step 360 Output a movie score.
  • an embodiment of the present disclosure further provides an intelligent terminal, as shown in FIG. 4, including a processor 10 and a storage medium 20 connected to the processor 10, wherein the processor 10 is configured to invoke program instructions in the storage medium 20 to perform the method provided by the above embodiments, for example, to perform:
  • the movie drama is automatically scored according to the analysis result and the acquired micro-expression data, and the final score is output.
  • the present disclosure also provides a storage medium on which computer instructions are stored, the computer instructions causing a computer to perform the methods provided by the various embodiments described above.
  • All or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the related hardware.
  • the program may be stored in a computer-readable storage medium, and the storage medium may be a non-transitory storage medium, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk, or may be a transitory storage medium.
  • the micro-expression recognition-based film and television scoring method, storage medium, and intelligent terminal provided by the present disclosure analyze the user's micro-expressions while watching a film and automatically score the film according to changes in the micro-expressions, without requiring the user to manually input a score, so that the scoring results are more authentic and malicious or false scoring is effectively avoided.

Abstract

The present disclosure relates to a microexpression recognition-based film and television scoring method, a storage medium, and an intelligent terminal. The method comprises: obtaining the microexpressions of a user while watching a film or television program, recognizing and analyzing the microexpressions, and outputting an analysis result; obtaining the category of the currently played film or television program, and obtaining microexpression data corresponding to that category from a preset microexpression database; and automatically scoring the film or television program according to the analysis result and the obtained microexpression data, and outputting a final score. The user's microexpressions while watching are analyzed, and the film or television program is automatically scored according to changes in the microexpressions, so that the user does not need to manually input a score, the scoring result is more authentic, and malicious and false scoring is effectively avoided.

Description

Film and television scoring method based on micro-expression recognition, storage medium, and intelligent terminal
Technical Field
The present disclosure relates to the technical field of consumer electronic products, and for example to a film and television scoring method based on micro-expression recognition, a storage medium, and an intelligent terminal.
Background
With the development of television and the film and television industry, people's requirements for the quality of film and television dramas are becoming higher and higher, and different people have different preferences for them. Therefore, major film and television websites have launched scoring functions to facilitate interaction among users and to serve as a reference.
At present, the vast majority of film and television scoring websites use a 10-point system; for example, many websites such as Douban and Mtime have their own algorithms for aggregating the scores users give a particular film. However, most websites still use a manual scoring system in which scores are entered by hand, which makes each person's score overly subjective and even allows false and malicious scoring, so that the final film score is not authentic.
Therefore, the prior art still needs to be improved and developed.
Summary of the Invention
The technical problem to be solved by the present disclosure is, in view of the above-mentioned defects of the prior art, to provide a film and television scoring method, a storage medium, and an intelligent terminal based on micro-expression recognition, aiming to solve the problem in the prior art that film and television scores are entered manually, which makes the scores overly subjective and the resulting ratings not authentic.
The technical solution adopted by the present disclosure to solve the technical problem is as follows:
A film and television scoring method based on micro-expression recognition, wherein the method comprises:
obtaining micro-expressions of a user while watching a film, identifying and analyzing the micro-expressions, and outputting an analysis result;
obtaining the category of the currently played film and television drama, and obtaining micro-expression data corresponding to the category of the current film and television drama from a preset micro-expression database;
automatically scoring the film and television drama according to the analysis result and the acquired micro-expression data, and outputting a final score.
Optionally, the obtaining of micro-expressions of the user while watching, identifying and analyzing the micro-expressions, and outputting the analysis result specifically includes:
a camera in the intelligent terminal acquiring, in real time, face images of the user while watching;
detecting the face images and marking a micro-expression image sequence;
extracting features of the micro-expression image sequence, and identifying the category to which the micro-expressions belong according to the extracted features;
performing statistical analysis on the categories and quantities of the micro-expressions, and outputting the analysis result.
Optionally, the obtaining of micro-expressions of the user while watching, identifying and analyzing the micro-expressions, and outputting the analysis result further includes:
preprocessing the face images so that the preprocessed face images have consistent pixel dimensions.
Optionally, the preprocessing of the face images specifically includes:
detecting the face in the image by using the Adaboost algorithm and cropping it;
normalizing the size of the cropped image by using a bilinear interpolation algorithm.
Optionally, the micro-expression image sequence includes a start frame, a duration, and an end frame of the appearance of the micro-expression.
Optionally, the extracting of the features of the micro-expression image sequence includes:
extracting dynamic spatiotemporal texture features of the micro-expression image sequence by using the CBP-TOP algorithm.
Optionally, the analysis result includes the types of micro-expressions during the user's viewing and the actual proportion of each micro-expression; the types of micro-expressions include pleasure, surprise, disgust, sadness, fear, and anger.
Optionally, before the obtaining of the category of the currently played film and television drama and the obtaining of the micro-expression data corresponding to the category of the current film and television drama from the preset micro-expression database, the method further includes:
pre-establishing a micro-expression database for acquiring corresponding micro-expression data according to the category of the currently played film and television drama;
the micro-expression database including the theoretical micro-expression types corresponding to each category of film and television drama and the theoretical proportion of each micro-expression;
the categories of film and television dramas including comedy, adventure, horror, crime, science fiction, martial arts, and love.
Optionally, the obtaining of the category of the currently played film and television drama and the obtaining of the micro-expression data corresponding to the category of the current film and television drama from the preset micro-expression database specifically includes:
the intelligent terminal acquiring the category of the currently played film and television drama;
searching the micro-expression database for the corresponding micro-expression data according to the acquired category of the film and television drama;
acquiring the theoretical micro-expression types corresponding to the current film and television drama category and the theoretical proportion of each micro-expression contained in the micro-expression data.
Optionally, the automatically scoring of the film and television drama according to the analysis result and the acquired micro-expression data, and outputting the final score specifically includes:
comparing the analyzed types of micro-expressions during the user's viewing, and the actual proportion of each micro-expression, with the theoretical micro-expression types in the acquired micro-expression data and the theoretical proportion of each micro-expression;
if the types of micro-expressions during the user's viewing exactly match the theoretical micro-expression types, and the actual proportion of each micro-expression equals its theoretical proportion, automatically giving the film and television drama a passing score, the passing score being 6 points.
Optionally, the automatically scoring of the film and television drama according to the analysis result and the acquired micro-expression data, and outputting the final score further includes:
if the types of micro-expressions during the user's viewing exceed the theoretical micro-expression types, and the actual proportion of each micro-expression is higher than its theoretical proportion, increasing the score of the film and television drama above the passing score;
if the types of micro-expressions during the user's viewing fall short of the theoretical micro-expression types, and the actual proportion of each micro-expression is lower than its theoretical proportion, reducing the score of the film and television drama below the passing score.
Optionally, the method further includes:
obtaining the current user's viewing duration; if the current user's viewing duration is less than a preset standard duration, the user's micro-expressions cannot be used to score the film and television drama.
Optionally, the method further includes:
the intelligent terminal encrypting the user's micro-expressions and the user's final score.
A storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the steps of the micro-expression recognition-based film and television scoring method according to any of the above.
An intelligent terminal, comprising a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions, and the processor being adapted to invoke the instructions in the storage medium to perform the steps of the micro-expression recognition-based film and television scoring method according to any of the above.
Beneficial effects of the present disclosure: the present disclosure analyzes the user's micro-expressions while watching a film and automatically scores the film according to changes in the micro-expressions, without requiring the user to manually input a score, so that the scoring results are more authentic and malicious or false scoring is effectively avoided.
Brief Description of the Drawings
FIG. 1 is a flowchart of a first preferred embodiment of the film and television scoring method based on micro-expression recognition of the present disclosure.
FIG. 2 is a flowchart of the micro-expression recognition method of the present disclosure.
FIG. 3 is a flowchart of a second preferred embodiment of the film and television scoring method based on micro-expression recognition of the present disclosure.
FIG. 4 is a functional block diagram of the intelligent terminal of the present disclosure.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not intended to limit it.
Since major film and television websites in the prior art basically score film and television dramas by manually entered scores, the scores are overly subjective and the resulting film scores are not authentic. Although some websites use relatively complex scoring algorithms, false and malicious scoring still occurs.
Embodiment 1
An embodiment of the present disclosure provides a film and television scoring method based on micro-expression recognition. As shown in FIG. 1, which is a flowchart of a first preferred embodiment of the film and television scoring method based on micro-expression recognition, the method comprises the following steps:
Step S100: obtain micro-expressions of the user while watching the film, identify and analyze the micro-expressions, and output the analysis result.
Preferably, step S100 specifically includes:
a camera in the intelligent terminal acquiring, in real time, face images of the user while watching;
detecting the face images and marking a micro-expression image sequence;
extracting features of the micro-expression image sequence, and identifying the category to which the micro-expressions belong according to the extracted features;
performing statistical analysis on the categories and quantities of the micro-expressions, and outputting the analysis result.
In a specific implementation, a camera for capturing the user's face images is installed on the intelligent terminal in advance. Preferably, this embodiment discloses a specific flowchart of the micro-expression recognition, as shown in FIG. 2.
Step 210: acquire face images. When the user is watching, the camera on the intelligent terminal captures, in real time, face images of the user while watching.
Step 220: preprocess the face images. In this embodiment, the Adaboost (iterative) algorithm is used to detect the face in the micro-expression image and crop it, and a bilinear interpolation algorithm is used to normalize the image size. The preprocessed face images are 180 × 180 pixels; because the preprocessed images have consistent pixel dimensions, this facilitates the extraction of micro-expressions in subsequent steps.
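A minimal sketch of this preprocessing stage is shown below, assuming Python with OpenCV. It substitutes OpenCV's Haar-cascade detector (itself an Adaboost-trained cascade) for the Adaboost detection step and uses bilinear interpolation for resizing; the 180 × 180 target size follows the text, while the cascade file and function names are illustrative assumptions rather than the patent's own implementation.

```python
import cv2

TARGET_SIZE = (180, 180)  # normalized size given in the text

# OpenCV's Haar cascade is an Adaboost-trained face detector, used here
# as a stand-in for the Adaboost detection step described above.
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(frame_bgr):
    """Detect the largest face, crop it, and resize it to 180 x 180 pixels."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face visible in this frame
    # Keep the largest detection, assumed to be the viewer.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    crop = gray[y:y + h, x:x + w]
    # Bilinear interpolation normalizes every crop to the same pixel size.
    return cv2.resize(crop, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)
```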
Step 230: extract micro-expression features. In this embodiment, the face images are detected and the micro-expression image sequence is marked. A regression model is built using the Birnbaum-Saunders distribution curve to mark the facial micro-expression image sequence, including the start frame Apex1, the duration, and the end frame Apex2 of the facial micro-expression, thereby completing facial micro-expression detection. Further, this embodiment uses the CBP-TOP algorithm to extract the dynamic spatiotemporal texture features of the facial micro-expression sequence. The 8 frames of the facial micro-expression sequence are divided into 16 × 16 non-overlapping blocks, CBP-TOP features are extracted from each sub-block, a histogram of the CBP-TOP features is computed for that sub-block, and finally the feature histograms of all sub-blocks are concatenated to form the feature histogram of the entire facial micro-expression sequence. In this way the dynamic spatiotemporal texture features of the facial micro-expression sequence are extracted, that is, the deformation information of the sequence in the XY plane and the motion information in the XT and YT planes are obtained. The CBP-TOP feature is a texture feature of the image itself, and the extraction process is computed from the CBP codes.
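The block-and-histogram organization described above can be sketched as follows. The code uses a simplified centre-symmetric binary code on the XY, XT, and YT planes as a stand-in for the CBP operator, so it illustrates the structure of the descriptor rather than the exact CBP-TOP feature; the 8-frame sequence follows the text, while the grid size and helper names are illustrative assumptions.

```python
import numpy as np

def cs_code(plane):
    """Centre-symmetric binary code on one 2-D plane (a simplified stand-in
    for the CBP operator): each interior pixel compares its 4 symmetric
    neighbour pairs, giving a code in [0, 16)."""
    pairs = [
        (plane[:-2, 1:-1], plane[2:, 1:-1]),   # up vs down
        (plane[1:-1, :-2], plane[1:-1, 2:]),   # left vs right
        (plane[:-2, :-2],  plane[2:, 2:]),     # main diagonal
        (plane[:-2, 2:],   plane[2:, :-2]),    # anti-diagonal
    ]
    code = np.zeros(plane[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= (a > b).astype(np.uint8) << bit
    return code

def block_histograms(vol, bins=16):
    """vol: (T, h, w) sub-volume of one spatial block. Returns the
    concatenated code histograms of its XY, XT and YT planes."""
    T, h, w = vol.shape
    plane_stacks = (
        [vol[t] for t in range(T)],        # XY planes: appearance/deformation
        [vol[:, y, :] for y in range(h)],  # XT planes: horizontal motion
        [vol[:, :, x] for x in range(w)],  # YT planes: vertical motion
    )
    out = []
    for planes in plane_stacks:
        hist = np.zeros(bins)
        for p in planes:
            codes = cs_code(p.astype(np.int16))
            hist += np.bincount(codes.ravel(), minlength=bins)[:bins]
        out.append(hist / max(hist.sum(), 1.0))  # normalized per-plane histogram
    return np.concatenate(out)

def sequence_descriptor(seq, grid=4):
    """seq: (T, H, W) micro-expression sequence, e.g. 8 x 180 x 180.
    The frames are split into grid x grid non-overlapping spatial blocks
    and the per-block histograms are concatenated into one descriptor."""
    T, H, W = seq.shape
    bh, bw = H // grid, W // grid
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            vol = seq[:, gy * bh:(gy + 1) * bh, gx * bw:(gx + 1) * bw]
            feats.append(block_histograms(vol))
    return np.concatenate(feats)
```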
Step 240: feature judgment. This embodiment uses an ELM classifier for training and prediction, firstly to verify the validity of the CBP-TOP algorithm, and secondly to classify and identify the features of the facial micro-expression sequence, so as to determine which category of facial micro-expression the extracted features belong to and thus identify the category of the micro-expression. In the above micro-expression recognition method, the CBP-TOP statistic is as follows:
CBP-TOP(i, j) = Σ_{x, y, t} I{ f_j(x, y, t) = i },  i = 0, …, n_j − 1;  j = 0, 1, 2    (1)
where n_j is the number of patterns produced by the CBP operator in the j-th plane, j = 0, 1, 2 denote the XY, XT and YT planes respectively, f_j(x, y, t) is the decimal value of the CBP code of the centre pixel (x, y, t) in the j-th plane, and the function I(f) is defined as:
I(f) = 1 if f is true, and I(f) = 0 if f is false    (2)
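For the classification step, a minimal extreme learning machine (ELM) can be sketched as follows: a single hidden layer with fixed random weights and an output layer solved in closed form by least squares. This is a generic ELM sketch rather than the exact classifier or training data used in the patent; the six expression classes follow the text, and the variable names are illustrative.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine: random hidden layer, output
    weights fitted in closed form with the Moore-Penrose pseudo-inverse."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y, n_classes):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)     # random hidden-layer activations
        T = np.eye(n_classes)[y]             # one-hot targets
        self.beta = np.linalg.pinv(H) @ T    # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Hypothetical usage with the six micro-expression classes from the text;
# X_train would hold CBP-TOP descriptors and y_train their class indices.
CLASSES = ["pleasure", "surprise", "disgust", "sadness", "fear", "anger"]
# clf = SimpleELM().fit(X_train, y_train, n_classes=len(CLASSES))
# label = CLASSES[clf.predict(X_test)[0]]
```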
Step 250: output the analysis result. After identifying the types of micro-expressions, the intelligent terminal counts the types and quantities of micro-expressions that appear during the user's viewing and outputs the analysis result. The analysis result includes the types of micro-expressions during the user's viewing and the actual proportion of each micro-expression. Preferably, in this embodiment the types of micro-expressions are divided into pleasure, surprise, disgust, sadness, fear, and anger. The above micro-expression recognition method is relatively accurate, so that the intelligent terminal can accurately extract micro-expressions from the face images, identify and classify them, and determine which category each micro-expression belongs to.
Further, step S200: obtain the category of the currently played film and television drama, and obtain the micro-expression data corresponding to the category of the current film and television drama from the preset micro-expression database.
Preferably, step S200 specifically includes:
the intelligent terminal acquiring the category of the currently played film and television drama;
searching the micro-expression database for the corresponding micro-expression data according to the acquired category of the film and television drama;
acquiring the theoretical micro-expression types corresponding to the current film and television drama category and the theoretical proportion of each micro-expression contained in the micro-expression data.
In a specific implementation, this embodiment also establishes in advance, in the intelligent terminal, a micro-expression database for acquiring corresponding micro-expression data according to the category of the currently played film and television drama. The micro-expression database includes the theoretical micro-expression types corresponding to each category of film and television drama and the theoretical proportion of each micro-expression; that is, the types of micro-expressions that a user is expected to show for each category of film and television drama, and the theoretical proportion of each, are pre-defined and used as a reference for scoring the film and television drama in subsequent steps. The categories of film and television dramas in this embodiment include comedy, adventure, horror, crime, science fiction, martial arts, and love.
When the user is watching, the intelligent terminal acquires the category of the currently played film and television drama and, according to that category, looks up the corresponding theoretical micro-expression types and the theoretical proportion of each micro-expression in the preset micro-expression database. For example, if the currently played film and television drama is a science-fiction drama, its theoretical micro-expressions are surprise and pleasure, with the surprised micro-expression accounting for a theoretical proportion of 40% and the pleasant micro-expression accounting for a theoretical proportion of 20%.
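A minimal sketch of such a database and its lookup is shown below. The science-fiction entry follows the example above, the other entries echo the passing-score examples given later in this embodiment, and the dictionary layout and function name are illustrative assumptions.

```python
# Theoretical micro-expression profile for each drama category:
# expression type -> theoretical proportion of the viewing session.
MICRO_EXPRESSION_DB = {
    "comedy":          {"pleasure": 0.60},
    "adventure":       {"surprise": 0.60},
    "horror":          {"fear": 0.40, "surprise": 0.20},
    "crime":           {"disgust": 0.10, "fear": 0.20, "surprise": 0.30},
    "science fiction": {"surprise": 0.40, "pleasure": 0.20},
    "martial arts":    {"surprise": 0.60},
    "love":            {"pleasure": 0.60},  # or sadness, per the text
}

def get_theoretical_profile(category):
    """Return the theoretical micro-expression types and proportions
    for the category of the currently played drama."""
    return MICRO_EXPRESSION_DB.get(category, {})
```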
Further, step S300: automatically score the film and television drama according to the analysis result and the acquired micro-expression data, and output the final score.
Preferably, step S300 specifically includes:
comparing the analyzed types of micro-expressions during the user's viewing, and the actual proportion of each micro-expression, with the theoretical micro-expression types in the acquired micro-expression data and the theoretical proportion of each micro-expression;
if the types of micro-expressions during the user's viewing exactly match the theoretical micro-expression types, and the actual proportion of each micro-expression equals its theoretical proportion, automatically giving the film and television drama a passing score.
In a specific implementation, this embodiment presets a standard condition: when the types of micro-expressions during the user's viewing and the proportion of each micro-expression satisfy the standard condition, the film and television drama is given a passing score. In this embodiment the passing score is 6 points (out of a total of 10). Specifically, this embodiment compares the types of micro-expressions during the user's viewing and the actual proportion of each micro-expression with the theoretical micro-expression types and theoretical proportions obtained from the micro-expression data, so as to determine whether the user's micro-expressions meet the condition; if they do, a score of 6 is given.
For example, the theoretical micro-expression of a comedy drama is pleasure: if the user's pleasant micro-expressions reach 60%, a score of 6 can be given. The theoretical micro-expression of an adventure drama is surprise: if the user's surprised micro-expressions reach 60%, a score of 6 can be given. The theoretical micro-expressions of a horror drama are fear and surprise: if the user's fearful micro-expressions reach 40% and surprised micro-expressions reach 20%, a score of 6 can be given. The theoretical micro-expressions of a crime drama are disgust, fear, and surprise: if the user's disgusted micro-expressions reach 10%, fearful micro-expressions reach 20%, and surprised micro-expressions reach 30%, a score of 6 can be given. The theoretical micro-expressions of a science-fiction drama are surprise and pleasure: if the user's surprised micro-expressions reach 40% and pleasant micro-expressions reach 20%, a score of 6 can be given. The theoretical micro-expression of a martial-arts drama is surprise: if the user's surprised micro-expressions reach 60%, a score of 6 can be given. The theoretical micro-expression of a love drama is pleasure or sadness: if the user's pleasant or sad micro-expressions reach 60%, a score of 6 can be given. The proportion of each micro-expression for each type of drama can be set according to the actual situation.
Further, if the types of micro-expressions during the user's viewing exceed the theoretical micro-expression types, and the actual proportion of each micro-expression is higher than its theoretical proportion, the score of the film and television drama is increased above the passing score (6 points); if the types of micro-expressions during the user's viewing fall short of the theoretical micro-expression types, and the actual proportion of each micro-expression is lower than its theoretical proportion, the score of the film and television drama is reduced below the passing score (6 points). The final score ranges from 0 to 10 points.
In order to further reduce false and malicious scoring, this embodiment also obtains the user's viewing duration while the user is watching. If the current user's viewing duration is less than the preset standard duration, the user has not watched the complete film, and the user's micro-expressions cannot be used to score the film and television drama, ensuring that all scoring users are valid and authentic. In addition, this embodiment also performs data encryption on the user's micro-expressions and the user's score to ensure the security of the user data.
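The validity check and the encryption step might be sketched as follows, reusing the score_drama and get_theoretical_profile sketches above. The use of the cryptography library's Fernet cipher, the JSON record layout, and the standard-duration threshold are illustrative assumptions, as the patent specifies neither an encryption scheme nor a concrete minimum viewing duration.

```python
import json
from cryptography.fernet import Fernet  # assumed choice of symmetric cipher

STANDARD_DURATION_S = 80 * 60  # hypothetical standard duration (80 minutes)

def make_score_record(actual, category, watch_seconds, key):
    """Return an encrypted score record, or None when the viewing was too
    short for the micro-expressions to be used for scoring."""
    if watch_seconds < STANDARD_DURATION_S:
        return None  # incomplete viewing: micro-expressions not used
    final_score = score_drama(actual, get_theoretical_profile(category))
    record = json.dumps({
        "category": category,
        "micro_expressions": actual,
        "score": final_score,
    }).encode("utf-8")
    # Encrypt the user's micro-expression data and final score together.
    return Fernet(key).encrypt(record)

# key = Fernet.generate_key()  # stored securely on the intelligent terminal
```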
Embodiment 2
Further, the present disclosure also provides a second preferred embodiment of the film and television scoring method based on micro-expression recognition. As shown in FIG. 3, the method includes the following steps:
Step 310: collect face images. The intelligent terminal collects face images of the viewing user through the preset camera.
Step 320: micro-expression recognition. Micro-expression features are identified from the collected face images.
Step 330: record the categories and proportions of the various micro-expressions.
Step 340: determine whether the film has ended. If yes, perform step 350; if no, return to step 310.
Step 350: analysis processing. The types of micro-expressions during the user's viewing and the actual proportion of each micro-expression are compared with the theoretical micro-expression types obtained from the micro-expression data and the theoretical proportion of each micro-expression, so as to determine whether the user's micro-expressions meet the condition, and the score is given automatically.
Step 360: output the film score.
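Under the assumptions of the earlier sketches (preprocess_face, sequence_descriptor plus a trained classifier, get_theoretical_profile, and score_drama), the loop of this embodiment might look as follows; the frame source, the 8-frame window, and the classify_sequence callable are illustrative.

```python
from collections import Counter

def rate_film(frames, category, classify_sequence, seq_len=8):
    """frames: iterable of camera frames for one viewing session.
    classify_sequence: callable mapping a list of preprocessed face images
    to a micro-expression label (e.g. built on the CBP-TOP and ELM sketches
    above). Returns the automatic score for the drama."""
    counts = Counter()
    window = []
    for frame in frames:                    # steps 310-340: loop until the film ends
        face = preprocess_face(frame)       # steps 310-320: collect and preprocess
        if face is None:
            continue
        window.append(face)
        if len(window) == seq_len:          # one candidate micro-expression sequence
            counts[classify_sequence(window)] += 1  # step 330: record the category
            window = []
    total = sum(counts.values()) or 1
    actual = {label: n / total for label, n in counts.items()}
    # Steps 350-360: compare with the theoretical profile and output the score.
    return score_drama(actual, get_theoretical_profile(category))
```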
Embodiment 3
Based on the above embodiments, an embodiment of the present disclosure further provides an intelligent terminal, as shown in FIG. 4, including a processor 10 and a storage medium 20 connected to the processor 10, wherein the processor 10 is configured to invoke program instructions in the storage medium 20 to perform the method provided by the above embodiments, for example, to perform:
obtaining micro-expressions of the user while watching a film, identifying and analyzing the micro-expressions, and outputting an analysis result;
obtaining the category of the currently played film and television drama, and obtaining the micro-expression data corresponding to the category of the current film and television drama from the preset micro-expression database;
automatically scoring the film and television drama according to the analysis result and the acquired micro-expression data, and outputting the final score.
The present disclosure also provides a storage medium storing computer instructions, the computer instructions causing a computer to perform the methods provided by the above embodiments.
All or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may be a non-transitory storage medium, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk, or may be a transitory storage medium.
Industrial Applicability
The film and television scoring method based on micro-expression recognition, the storage medium, and the intelligent terminal provided by the present disclosure analyze the user's micro-expressions while watching a film and automatically score the film according to changes in the micro-expressions, without requiring the user to manually input a score, so that the scoring results are more authentic and malicious or false scoring is effectively avoided.
It should be understood that the application of the present disclosure is not limited to the above examples. Those of ordinary skill in the art may make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the protection scope of the appended claims of the present disclosure.

Claims (15)

  1. A film and television scoring method based on micro-expression recognition, characterized in that the method comprises:
    obtaining micro-expressions of a user while watching a film, identifying and analyzing the micro-expressions, and outputting an analysis result;
    obtaining a category of the currently played film and television drama, and obtaining micro-expression data corresponding to the category of the current film and television drama from a preset micro-expression database;
    automatically scoring the film and television drama according to the analysis result and the acquired micro-expression data, and outputting a final score.
  2. The film and television scoring method based on micro-expression recognition according to claim 1, characterized in that the obtaining of micro-expressions of the user while watching, identifying and analyzing the micro-expressions, and outputting the analysis result specifically comprises:
    a camera in the intelligent terminal acquiring, in real time, face images of the user while watching;
    detecting the face images and marking a micro-expression image sequence;
    extracting features of the micro-expression image sequence, and identifying the category to which the micro-expressions belong according to the extracted features;
    performing statistical analysis on the categories and quantities of the micro-expressions, and outputting the analysis result.
  3. The film and television scoring method based on micro-expression recognition according to claim 2, characterized in that the obtaining of micro-expressions of the user while watching, identifying and analyzing the micro-expressions, and outputting the analysis result further comprises:
    preprocessing the face images so that the preprocessed face images have consistent pixel dimensions.
  4. The film and television scoring method based on micro-expression recognition according to claim 3, characterized in that the preprocessing of the face images specifically comprises:
    detecting the face in the image by using the Adaboost algorithm and cropping it;
    normalizing the size of the cropped image by using a bilinear interpolation algorithm.
  5. The film and television scoring method based on micro-expression recognition according to claim 2, characterized in that the micro-expression image sequence comprises a start frame, a duration, and an end frame of the appearance of the micro-expression.
  6. The film and television scoring method based on micro-expression recognition according to claim 2, characterized in that the extracting of the features of the micro-expression image sequence comprises:
    extracting dynamic spatiotemporal texture features of the micro-expression image sequence by using the CBP-TOP algorithm.
  7. The film and television scoring method based on micro-expression recognition according to claim 2, characterized in that the analysis result comprises the types of micro-expressions during the user's viewing and the actual proportion of each micro-expression; the types of micro-expressions comprise pleasure, surprise, disgust, sadness, fear, and anger.
8. The micro-expression recognition-based film and television scoring method according to claim 1, wherein before acquiring the category of the currently playing film or television drama and obtaining, from the preset micro-expression database, the micro-expression data corresponding to the category of the current film or television drama, the method further comprises:
    pre-establishing a micro-expression database for obtaining the corresponding micro-expression data according to the category of the currently playing film or television drama;
    the micro-expression database comprises the theoretical micro-expression categories corresponding to each category of film or television drama and the theoretical proportion of each micro-expression;
    the categories of film and television drama comprise comedy, adventure, horror, crime, science fiction, martial arts, and romance.
9. The micro-expression recognition-based film and television scoring method according to claim 1, wherein acquiring the category of the currently playing film or television drama and obtaining, from the preset micro-expression database, the micro-expression data corresponding to the category of the current film or television drama specifically comprises:
    the intelligent terminal acquires the category of the currently playing film or television drama;
    the corresponding micro-expression data is looked up in the micro-expression database according to the acquired category;
    the theoretical micro-expression categories corresponding to the current category and the theoretical proportion of each micro-expression contained in the micro-expression data are obtained.
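A minimal sketch of the pre-built database of claim 8 and the lookup performed here, assuming the theoretical data is stored as a mapping from drama category to expected micro-expression proportions. All category profiles and figures below are invented placeholders, not values from the disclosure:

```python
# Hypothetical genre profiles: theoretical micro-expression categories and proportions.
# The figures are placeholders for illustration only.
MICROEXPRESSION_DB = {
    "comedy":  {"happiness": 0.7, "surprise": 0.3},
    "horror":  {"fear": 0.5, "surprise": 0.3, "disgust": 0.2},
    "romance": {"happiness": 0.5, "sadness": 0.3, "surprise": 0.2},
}

def lookup_genre_profile(genre, database=MICROEXPRESSION_DB):
    """Return the theoretical categories and proportions for the current drama genre."""
    try:
        return database[genre]
    except KeyError:
        raise KeyError(f"No micro-expression data preset for genre: {genre!r}")
```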
10. The micro-expression recognition-based film and television scoring method according to claim 1, wherein automatically scoring the film or television drama according to the analysis result and the obtained micro-expression data, and outputting a final score, specifically comprises:
    comparing the categories of micro-expressions identified during the user's viewing, and the actual proportion of each category, with the theoretical micro-expression categories and the theoretical proportion of each category in the obtained micro-expression data;
    if the categories of micro-expressions during viewing exactly match the theoretical micro-expression categories, and the actual proportion of each category equals its theoretical proportion, the film or television drama is automatically given a passing score; the passing score is 6 points.
11. The micro-expression recognition-based film and television scoring method according to claim 10, wherein automatically scoring the film or television drama according to the analysis result and the obtained micro-expression data, and outputting a final score, further comprises:
    if the categories of micro-expressions during viewing exceed the theoretical micro-expression categories, and the actual proportion of each category is higher than its theoretical proportion, points are added to the passing score;
    if the categories of micro-expressions during viewing are fewer than the theoretical micro-expression categories, and the actual proportion of each category is lower than its theoretical proportion, points are deducted from the passing score.
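Claims 10 and 11 describe the comparison only qualitatively. The sketch below fixes one possible reading: the score starts at the passing score of 6 and moves up or down by a per-category step depending on whether the observed mix exceeds or falls short of the theoretical profile. The step size and the clamping to 0-10 are assumptions:

```python
PASS_SCORE = 6.0

def score_drama(actual, theoretical, step=0.5):
    """actual / theoretical: {category: proportion}. Returns a score around PASS_SCORE."""
    score = PASS_SCORE
    if set(actual) == set(theoretical) and all(
            abs(actual[c] - theoretical[c]) < 1e-6 for c in theoretical):
        return score                      # exact match -> passing score (claim 10)
    for category, expected in theoretical.items():
        observed = actual.get(category, 0.0)
        if observed > expected:
            score += step                 # stronger reaction than expected (claim 11)
        elif observed < expected:
            score -= step                 # weaker reaction than expected (claim 11)
    # categories observed beyond the theoretical profile also count in favour
    score += step * len(set(actual) - set(theoretical))
    return max(0.0, min(10.0, score))
```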
12. The micro-expression recognition-based film and television scoring method according to claim 1, wherein the method further comprises:
    acquiring the current user's viewing duration; if the viewing duration is shorter than a preset standard duration, the user's micro-expressions are not used to score the film or television drama.
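A trivial guard implementing this validity check; the 30-minute standard duration is an assumed placeholder, as the disclosure does not fix a value:

```python
STANDARD_DURATION_MINUTES = 30  # assumed threshold

def microexpressions_usable(viewing_minutes: float) -> bool:
    """Micro-expressions only count toward the score if the user watched long enough."""
    return viewing_minutes >= STANDARD_DURATION_MINUTES
```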
13. The micro-expression recognition-based film and television scoring method according to claim 1, wherein the method further comprises:
    the intelligent terminal encrypts the user's micro-expressions and the user's final score.
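The claim does not name a cipher. As one illustration, the `cryptography` package's Fernet construction (AES-based, authenticated) could encrypt the recorded micro-expression summary and final score before they leave the terminal; the record layout is an assumption:

```python
import json
from cryptography.fernet import Fernet

def encrypt_rating_record(microexpression_summary, final_score, key=None):
    """Serialize and encrypt the per-user record; returns (ciphertext, key)."""
    key = key or Fernet.generate_key()
    token = Fernet(key).encrypt(
        json.dumps({"microexpressions": microexpression_summary,
                    "score": final_score}).encode("utf-8"))
    return token, key
```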
14. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the steps of the micro-expression recognition-based film and television scoring method according to any one of claims 1 to 13.
15. An intelligent terminal, comprising a processor and a storage medium communicatively connected to the processor, wherein the storage medium is adapted to store a plurality of instructions, and the processor is adapted to invoke the instructions in the storage medium to perform the steps of the micro-expression recognition-based film and television scoring method according to any one of claims 1 to 13.
PCT/CN2018/109820 2018-03-28 2018-10-11 Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal WO2019184299A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810264552.1 2018-03-28
CN201810264552.1A CN108509893A (en) 2018-03-28 2018-03-28 Video display methods of marking, storage medium and intelligent terminal based on micro- Expression Recognition

Publications (1)

Publication Number Publication Date
WO2019184299A1 true

Family

ID=63378942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109820 WO2019184299A1 (en) 2018-03-28 2018-10-11 Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal

Country Status (2)

Country Link
CN (1) CN108509893A (en)
WO (1) WO2019184299A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633215A (en) * 2020-12-29 2021-04-09 安徽兰臣信息科技有限公司 Embedded image acquisition device for recognizing behavior and emotion of children

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509893A (en) * 2018-03-28 2018-09-07 深圳创维-Rgb电子有限公司 Video display methods of marking, storage medium and intelligent terminal based on micro- Expression Recognition
CN109145880A (en) * 2018-10-09 2019-01-04 深圳市亿联智能有限公司 A kind of high intelligent film marking mode
CN109583970A (en) * 2018-12-14 2019-04-05 深圳壹账通智能科技有限公司 Advertisement placement method, device, computer equipment and storage medium
CN109784977A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Service methods of marking, device, computer equipment and storage medium
CN109784185A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Client's food and drink evaluation automatic obtaining method and device based on micro- Expression Recognition
CN109872183A (en) * 2019-01-16 2019-06-11 深圳壹账通智能科技有限公司 Intelligent Service evaluation method, computer readable storage medium and terminal device
CN110175526A (en) * 2019-04-28 2019-08-27 平安科技(深圳)有限公司 Dog Emotion identification model training method, device, computer equipment and storage medium
CN112115756A (en) * 2020-03-22 2020-12-22 张冬梅 Block chain management platform for content analysis
CN114842539B (en) * 2022-05-30 2023-04-07 山东大学 Micro-expression discovery method and system based on attention mechanism and one-dimensional convolution sliding window

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530788A (en) * 2012-07-02 2014-01-22 纬创资通股份有限公司 Multimedia evaluating system, multimedia evaluating device and multimedia evaluating method
CN103716661A (en) * 2013-12-16 2014-04-09 乐视致新电子科技(天津)有限公司 Video scoring reporting method and device
CN104298682A (en) * 2013-07-18 2015-01-21 广州华久信息科技有限公司 Information recommendation effect evaluation method and mobile phone based on facial expression images
CN107590459A (en) * 2017-09-11 2018-01-16 广东欧珀移动通信有限公司 The method and apparatus for delivering evaluation
CN108509893A (en) * 2018-03-28 2018-09-07 深圳创维-Rgb电子有限公司 Video display methods of marking, storage medium and intelligent terminal based on micro- Expression Recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299225A (en) * 2014-09-12 2015-01-21 姜羚 Method and system for applying facial expression recognition in big data analysis
CN107437052A (en) * 2016-05-27 2017-12-05 深圳市珍爱网信息技术有限公司 Blind date satisfaction computational methods and system based on micro- Expression Recognition


Also Published As

Publication number Publication date
CN108509893A (en) 2018-09-07

Similar Documents

Publication Publication Date Title
WO2019184299A1 (en) Microexpression recognition-based film and television scoring method, storage medium, and intelligent terminal
Korshunov et al. Deepfakes: a new threat to face recognition? assessment and detection
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
US20130300891A1 (en) Identifying Facial Expressions in Acquired Digital Images
KR101781358B1 (en) Personal Identification System And Method By Face Recognition In Digital Image
US20060008173A1 (en) Device and method for correcting image including person area
US20100189358A1 (en) Facial expression recognition apparatus and method, and image capturing apparatus
CN110866466A (en) Face recognition method, face recognition device, storage medium and server
US10922531B2 (en) Face recognition method
CN106446753A (en) Negative expression identifying and encouraging system
CN106056083B (en) A kind of information processing method and terminal
JP2011237970A (en) Facial expression variation measurement device, program thereof and broadcast interest level measurement device
US20200258236A1 (en) Person segmentations for background replacements
WO2021042850A1 (en) Item recommending method and related device
CN109410138B (en) Method, device and system for modifying double chin
CN111079687A (en) Certificate camouflage identification method, device, equipment and storage medium
Ali et al. A robust and efficient system to detect human faces based on facial features
CN111611973B (en) Target user identification method, device and storage medium
CN113468925B (en) Occlusion face recognition method, intelligent terminal and storage medium
US8879805B2 (en) Automated image identification method
Chen et al. Hierarchical cross-modal talking face generationwith dynamic pixel-wise loss
WO2022117096A1 (en) First person point-of-view image recognition method and apparatus, and computer-readable storage medium
CN113259734B (en) Intelligent broadcasting guide method, device, terminal and storage medium for interactive scene
Manjare et al. Skin detection for face recognition based on HSV color space
Heni et al. Facial emotion detection of smartphone games users

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18912629

Country of ref document: EP

Kind code of ref document: A1