CN110781810B - Face emotion recognition method

Face emotion recognition method

Info

Publication number: CN110781810B (application CN201911015332.6A)
Authority: CN (China)
Prior art keywords: micro-expression, image data, output result, change
Legal status: Active (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110781810A
Inventor: 陶铁之
Current and original assignee: Hefei Shengdong Information Technology Co., Ltd.
Priority and filing date: 2019-10-24 (CN201911015332.6A)
Publication of CN110781810A: 2020-02-11
Publication of CN110781810B (grant): 2024-02-27


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Abstract

The invention discloses a face emotion recognition method comprising the following steps: (1) collecting video image data of facial emotion expression; (2) cutting the video image data into time frames to obtain the video image data of each time frame; (3) cropping a plurality of micro-expression regions from the video image data of each time frame; (4) comparing the image data of the same micro-expression region across adjacent time frames, judging whether a micro-expression is present, and outputting a first output result when one is; (5) comparing the first output result against a time threshold; (6) comparing the first output result against a count threshold; (7) comparing the first output result against a preset image change threshold to obtain an effective output result; (8) realizing micro-expression-based facial emotion recognition from the effective output result. The invention improves the accuracy of face emotion recognition.

Description

Face emotion recognition method
Technical Field
The invention relates to the field of face recognition methods, in particular to a face emotion recognition method.
Background
Face recognition can identify a person from facial feature information, and it can also infer a subject's current emotion from that information; such facial emotion recognition is used in criminal investigation and similar fields. Prior-art facial emotion recognition methods rely on salient facial changes, but when a subject deliberately masks an emotion, those salient changes do not necessarily reflect the real emotional state. For example, when a subject's face displays sadness while the subject actually feels the opposite, the facial features may still carry pronounced sad-expression characteristics, and a traditional method that judges by salient facial changes will struggle to determine the subject's actual emotion.
Summary of the invention
The object of the invention is to provide a face emotion recognition method that addresses the prior-art difficulty of judging the true emotion when that emotion is being masked.
To achieve the above object, the invention adopts the following technical scheme:
a face emotion recognition method is characterized in that: the method comprises the following steps:
(1) Collecting video image data of facial emotion expression in a period of time;
(2) Dividing the acquisition time in the step (1) into a plurality of time frames, and cutting video image data according to the time frames to obtain video image data under each time frame;
(3) Setting a plurality of micro-expression areas, and respectively intercepting image data corresponding to the set micro-expression areas from video image data under each time frame;
(4) Comparing the same micro-expression area image data of adjacent time frames, and if the micro-expression area image has a change, selecting the micro-expression area image data corresponding to the time frame at the beginning of the change, the micro-expression area image data corresponding to the time frame at the end of the change, and the micro-expression area image data corresponding to each time frame in the time interval between the beginning of the change and the end of the change as a first output result;
(5) Comparing the time period between the beginning of the change and the end of the change in the first output result obtained in the step (4) with a preset time threshold range, if the time period is within the time threshold range, reserving the first output result, otherwise, discarding the first output result;
(6) Counting the number of times of occurrence of the first output result of each micro-expression area reserved in the step (5), comparing the number of times with a preset number of times threshold, reserving the first output result if the number of times is larger than the preset number of times threshold, otherwise, discarding the first output result;
(7) Finding out the micro-expression region image corresponding to the maximum change from the micro-expression region image data of each time frame in the first output result reserved in the step (6), comparing the micro-expression region image corresponding to the maximum change with the micro-expression region image which is not changed in the acquisition time period, and comparing the obtained comparison result with a preset image change threshold again, if the comparison result is larger than the preset image change threshold, reserving the first output result as an effective output result, otherwise, discarding the first output result;
(8) And through effectively outputting the result, facial emotion recognition based on micro-expressions is realized.
Further, the micro-expression regions of step (3) are the mouth region, the nose region, the eye regions, and the eyebrow regions.
Further, in step (3) the image data corresponding to each defined micro-expression region is cropped from the video image data of each time frame and then converted to grayscale to obtain micro-expression region gray-level images; in step (4) the gray-level images of adjacent time frames are compared to determine which micro-expression region gray-level images changed, and the corresponding micro-expression region images are taken to form the first output result.
Further, in step (3) the gray level of the gray-level images of the same micro-expression region is identical across time frames, while different micro-expression regions use different gray levels; in the eye region gray-level image, the white of the eye and the eyeball are assigned different gray levels.
Further, the time threshold range of step (5), the count threshold of step (6), and the preset threshold of step (7) are obtained from expert experience or from observation experiments on the specific individual.
Further, regarding the preset image change threshold of step (7): when the micro-expression region is the mouth, nose, or an eyebrow region, the threshold refers to the amount of geometric-shape change; when it is an eye region, the threshold refers to the amount of geometric-shape change of the eye outline together with the amount of eyeball position change.
According to micro-expression theory, the invention constructs a method for identifying facial emotion from micro-expressions. A plurality of micro-expression regions is defined and the acquisition period is divided into time frames; changes in the same micro-expression region across adjacent time frames are compared to judge whether a micro-expression occurred. The validity of each candidate micro-expression is then checked against preset thresholds, so that genuine, reliable micro-expression information is obtained and used as the basis for facial emotion recognition.
Compared with the prior art, the method realizes facial emotion recognition from micro-expressions and, combined with prior-art emotion recognition based on salient facial feature changes, can recover real emotion information even when an emotion is being masked, thereby improving the accuracy of face emotion recognition.
Drawings
Fig. 1 is a flow chart of the method of the invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
As shown in Fig. 1, a face emotion recognition method comprises the following steps:
(1) collecting video image data of facial emotion expression over a period of time;
(2) dividing the acquisition period of step (1) into a plurality of time frames and cutting the video image data by those time frames to obtain the video image data of each time frame. The finer the time frame segmentation, the more accurate the subsequent image data comparison.
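For illustration, here is a minimal Python sketch of this frame cutting, assuming OpenCV is available; the video path and the number of raw frames per time frame are hypothetical parameters, not values fixed by the patent.

```python
import cv2

def split_into_time_frames(video_path: str, frames_per_window: int = 1) -> list:
    """Cut a video into consecutive time frames (fixed-size groups of raw frames)."""
    capture = cv2.VideoCapture(video_path)
    windows, current = [], []
    while True:
        ok, frame = capture.read()
        if not ok:                       # end of the video stream
            break
        current.append(frame)
        if len(current) == frames_per_window:
            windows.append(current)      # one completed time frame
            current = []
    capture.release()
    if current:                          # keep a trailing, shorter window
        windows.append(current)
    return windows

# Example (hypothetical path): a smaller frames_per_window gives a finer segmentation.
# windows = split_into_time_frames("emotion_clip.mp4", frames_per_window=2)
```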
(3) defining a plurality of micro-expression regions and cropping, from the video image data of each time frame, the image data corresponding to each defined micro-expression region.
Based on micro-expression theory, the invention selects the mouth, nose, eyes, and eyebrows, regions capable of carrying micro-expressions, as the micro-expression regions.
The image data corresponding to each defined micro-expression region is cropped from the video image data of each time frame, and each region's image data is then converted to grayscale, yielding a micro-expression region gray-level image. The purpose of this grayscale conversion is to simplify the data comparison in the subsequent step (4) and thereby improve the efficiency of judging whether a micro-expression is present.
The gray level of the gray-level images of the same micro-expression region is identical across time frames: the mouth region is set to one gray level, the nose region to another, and each eyebrow region to its own, with different micro-expression regions assigned different gray levels. The eye region is special in how it expresses micro-expressions: widening of the eye outline is one mode of expression, and movement of the eyeball against the white of the eye is another. The eye outline, the white of the eye, and the eyeball are therefore each assigned a different gray level in the eye region gray-level image.
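A hedged sketch of this region cropping and gray-level assignment follows. The bounding boxes and gray-level values are illustrative assumptions (in practice the boxes would come from a face-landmark detector), and Otsu binarization stands in for the patent's unspecified segmentation of each region.

```python
import cv2
import numpy as np

REGION_BOXES = {              # (x, y, width, height): illustrative values only
    "mouth":    (180, 300, 120, 60),
    "nose":     (200, 220, 80, 70),
    "eye_left": (150, 150, 70, 40),
}
REGION_GRAY_LEVEL = {"mouth": 64, "nose": 128, "eye_left": 192}  # one level per region

def extract_region_gray(frame: np.ndarray, region: str) -> np.ndarray:
    """Crop one micro-expression region and quantize it to the region's gray level."""
    x, y, w, h = REGION_BOXES[region]
    crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Segment the crop, then paint foreground pixels with the region's gray level.
    _, mask = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return np.where(mask > 0, REGION_GRAY_LEVEL[region], 0).astype(np.uint8)
```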
(4) comparing the image data of the same micro-expression region across adjacent time frames; if the micro-expression region image changes, preliminarily judging that a micro-expression occurred in that region, and selecting as a first output result the micro-expression region image data of the time frame where the change begins, of the time frame where the change ends, and of every time frame in the interval between them.
Specifically, in step (4) the micro-expression region gray-level images of adjacent time frames are compared, the gray-level images that changed are identified, and the corresponding micro-expression region images are taken to form the first output result.
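As a minimal sketch of this comparison, assume gray_images holds the gray-level crops of one micro-expression region, one per time frame; the noise floor noise_eps is an assumed parameter used to ignore sensor noise, not a value from the patent.

```python
import numpy as np

def find_change_intervals(gray_images: list, noise_eps: float = 2.0) -> list:
    """Return (start, end) frame-index pairs delimiting spans of inter-frame change."""
    changed = [
        float(np.mean(np.abs(gray_images[i + 1].astype(np.int16)
                             - gray_images[i].astype(np.int16)))) > noise_eps
        for i in range(len(gray_images) - 1)
    ]
    intervals, start = [], None
    for i, flag in enumerate(changed):
        if flag and start is None:
            start = i                      # the change begins at this time frame
        elif not flag and start is not None:
            intervals.append((start, i))   # the change ended at time frame i
            start = None
    if start is not None:                  # change still running at the last frame
        intervals.append((start, len(changed)))
    return intervals   # the frames of each interval form one first output result
```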
(5) comparing the time span between the beginning and the end of the change in the first output result of step (4) with a preset time threshold range; if the span falls within the range, retaining the first output result, otherwise discarding it.
In step (5) the time threshold range may be set by an expert: from statistical study, the expert determines the time range over which the mouth, nose, eyebrows, and eyes produce micro-expressions when the face expresses a given surface emotion, and that range is used as the time threshold range. Alternatively, for a specific person, the range can be determined in advance by experimental observation and statistics on that person.
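A sketch of the duration filter follows. The bounds reflect the commonly cited micro-expression duration of roughly 1/25 s to 1/2 s; they are assumed placeholders, since the patent leaves the range to expert or per-person calibration.

```python
def filter_by_duration(intervals: list, fps: float,
                       t_min: float = 0.04, t_max: float = 0.5) -> list:
    """Keep only change intervals whose duration lies in the time threshold range."""
    kept = []
    for start, end in intervals:
        duration = (end - start) / fps          # frame count -> seconds
        if t_min <= duration <= t_max:
            kept.append((start, end))           # retain this first output result
    return kept                                 # out-of-range results are discarded
```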
(6) counting, for each micro-expression region, the occurrences of the first output results retained in step (5) and comparing the count with a preset count threshold; if the count exceeds the threshold, retaining the first output results, otherwise discarding them.
As in step (5), the count threshold of step (6) may be determined by an expert from statistical study of how many times the mouth, nose, eyebrows, and eyes produce micro-expressions when the face expresses a given surface emotion, or by experimental observation and statistics on the specific person.
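The counting step might look like the following sketch, where intervals_per_region maps each region name to its retained intervals and count_threshold is an assumed placeholder value.

```python
def filter_by_count(intervals_per_region: dict, count_threshold: int = 2) -> dict:
    """Keep a region's first output results only if they occurred often enough."""
    return {
        region: intervals
        for region, intervals in intervals_per_region.items()
        if len(intervals) > count_threshold     # strictly greater, as in step (6)
    }
```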
(7) finding, among the per-time-frame micro-expression region images in the first output result retained in step (6), the image exhibiting the greatest change; comparing that image with a micro-expression region image that did not change during the acquisition period; and comparing the result with a preset image change threshold: if it exceeds the threshold, retaining the first output result as an effective output result, otherwise discarding it.
Because the micro-expression regions express themselves differently, the preset image change threshold differs by region. When the micro-expression region is the mouth, nose, or an eyebrow region, the threshold refers to the amount of geometric-shape change, and it may be set from expert statistical study or from observation statistics on the individual. When it is the eye region, owing to that region's special mode of expression, the threshold covers both the amount of geometric-shape change of the eye outline and the amount of positional change of the eyeball against the white of the eye.
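As a hedged sketch of this validation, mean absolute pixel difference stands in for the patent's geometric-shape change measure, which is not spelled out; neutral_frame is a crop of the same region from a period with no change, and change_threshold is an assumed calibrated value.

```python
import numpy as np

def validate_peak_change(interval_frames: list, neutral_frame: np.ndarray,
                         change_threshold: float) -> bool:
    """Find the frame differing most from the neutral crop and test it
    against the preset image change threshold."""
    diffs = [
        float(np.mean(np.abs(f.astype(np.int16) - neutral_frame.astype(np.int16))))
        for f in interval_frames
    ]
    return max(diffs) > change_threshold        # True -> effective output result
```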
(8) Finally, using the effective output result in combination with a traditional face emotion recognition method, it can be judged whether the emotion expressed by the micro-expressions is consistent with the surface emotion reflected by salient facial feature changes, and accurate emotion recognition information is thereby obtained.
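A simple sketch of this final consistency check is shown below; the emotion labels and the two upstream recognizers are assumed inputs, since the patent fixes neither a label set nor a specific conventional recognizer.

```python
def fuse_emotions(micro_emotion: str, surface_emotion: str) -> dict:
    """Compare the micro-expression emotion with the surface emotion from a
    conventional recognizer and flag masked emotions."""
    consistent = micro_emotion == surface_emotion
    return {
        "surface_emotion": surface_emotion,
        "micro_emotion": micro_emotion,
        "consistent": consistent,
        # When the two disagree, the micro-expression is taken as the truer signal.
        "estimated_true_emotion": micro_emotion,
    }
```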
The embodiments above merely describe preferred implementations of the invention and do not limit its spirit and scope. Modifications and improvements that those skilled in the art make to the technical solution of the invention fall within its protection scope; the claimed technical content of the invention is fully recorded in the claims.

Claims (6)

1. A face emotion recognition method, characterized in that it comprises the following steps:
(1) collecting video image data of facial emotion expression over a period of time;
(2) dividing the acquisition period of step (1) into a plurality of time frames and cutting the video image data by those time frames to obtain the video image data of each time frame;
(3) defining a plurality of micro-expression regions and cropping, from the video image data of each time frame, the image data corresponding to each defined micro-expression region;
(4) comparing the image data of the same micro-expression region across adjacent time frames and, if the micro-expression region image changes, selecting as a first output result the micro-expression region image data of the time frame where the change begins, of the time frame where the change ends, and of every time frame in the interval between them;
(5) comparing the time span between the beginning and the end of the change in the first output result of step (4) with a preset time threshold range; if the span falls within the range, retaining the first output result, otherwise discarding it;
(6) counting, for each micro-expression region, the occurrences of the first output results retained in step (5) and comparing the count with a preset count threshold; if the count exceeds the threshold, retaining the first output results, otherwise discarding them;
(7) finding, among the per-time-frame micro-expression region images in the first output result retained in step (6), the image exhibiting the greatest change; comparing that image with a micro-expression region image that did not change during the acquisition period; and comparing the result with a preset image change threshold: if it exceeds the threshold, retaining the first output result as an effective output result, otherwise discarding it;
(8) realizing micro-expression-based facial emotion recognition from the effective output result.
2. The face emotion recognition method of claim 1, characterized in that the micro-expression regions of step (3) are the mouth region, the nose region, the eye regions, and the eyebrow regions.
3. The face emotion recognition method of claim 1, characterized in that in step (3) the image data corresponding to each defined micro-expression region is cropped from the video image data of each time frame and then converted to grayscale to obtain micro-expression region gray-level images, and in step (4) the gray-level images of adjacent time frames are compared to determine which micro-expression region gray-level images changed, the corresponding micro-expression region images being taken to form the first output result.
4. The face emotion recognition method of claim 2 or 3, characterized in that in step (3) the gray level of the gray-level images of the same micro-expression region is identical across time frames while different micro-expression regions use different gray levels, and in the eye region gray-level image the white of the eye and the eyeball are assigned different gray levels.
5. The face emotion recognition method of claim 1, characterized in that the time threshold range of step (5), the count threshold of step (6), and the preset threshold of step (7) are obtained from expert experience or from observation experiments on the specific individual.
6. The face emotion recognition method of claim 4, characterized in that, regarding the preset image change threshold of step (7): when the micro-expression region is the mouth, nose, or an eyebrow region, the threshold refers to the amount of geometric-shape change; when it is the eye region, the threshold refers to the amount of geometric-shape change of the eye outline and the amount of eyeball position change.
CN201911015332.6A (filed 2019-10-24, priority 2019-10-24): Face emotion recognition method. Status: Active. Granted as CN110781810B.

Priority Applications (1)

Application Number: CN201911015332.6A; Priority Date: 2019-10-24; Filing Date: 2019-10-24; Title: Face emotion recognition method

Applications Claiming Priority (1)

Application Number: CN201911015332.6A; Priority Date: 2019-10-24; Filing Date: 2019-10-24; Title: Face emotion recognition method

Publications (2)

Publication Number: CN110781810A, Publication Date: 2020-02-11
Publication Number: CN110781810B, Publication Date: 2024-02-27

Family

ID: 69387367

Family Applications (1)

Application Number: CN201911015332.6A (Active); Title: Face emotion recognition method; Priority/Filing Date: 2019-10-24; Granted as CN110781810B

Country Status (1)

Country: CN; Link: CN110781810B

Families Citing this family (1)

* Cited by examiner, † Cited by third party

CN116665281A * (priority 2023-06-28, published 2023-08-29), 湖南创星科技股份有限公司: Key emotion extraction method based on doctor-patient interaction


Patent Citations (7)

* Cited by examiner, † Cited by third party

JP2016149063A * (priority 2015-02-13, published 2016-08-18), Omron Corporation: Emotion estimation system and emotion estimation method
CN107480622A * (priority 2017-08-07, published 2017-12-15), 深圳市科迈爱康科技有限公司: Micro-expression recognition method, device and storage medium
WO2019029261A1 * (priority 2017-08-07, published 2019-02-14), 深圳市科迈爱康科技有限公司: Micro-expression recognition method, device and storage medium
CN107895146A * (priority 2017-11-01, published 2018-04-10), 深圳市科迈爱康科技有限公司: Micro-expression recognition method, device, system and computer-readable recording medium
WO2019085495A1 * (priority 2017-11-01, published 2019-05-09), 深圳市科迈爱康科技有限公司: Micro-expression recognition method, apparatus and system, and computer-readable storage medium
WO2019184125A1 * (priority 2018-03-30, published 2019-10-03), 平安科技(深圳)有限公司: Micro-expression-based risk identification method and device, equipment and medium
CN110197107A * (priority 2018-08-17, published 2019-09-03), 平安科技(深圳)有限公司: Micro-expression recognition method, device, computer equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party

刘欣, 解仑, 王志良, 付冬梅. Micro-expression capture and recognition based on 3D gradient projection description. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2014, No. 12. *
赵明珠, 王志勇, 王世斌, 李林安, 孙颖, 李毓. Measurement of facial expression deformation based on the three-dimensional digital image correlation method. Journal of Experimental Mechanics, 2017, No. 2. *
许刚, 赵中原, 谈元鹏. Micro-expression recognition based on differential localization and optical flow feature extraction. Computer Applications and Software, 2017, No. 1. *

Also Published As

Publication Number: CN110781810A, Publication Date: 2020-02-11


Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant