CN108268838A - Facial expression recognition method and facial expression recognition system - Google Patents


Info

Publication number: CN108268838A (application CN201810001358.4A; granted publication CN108268838B)
Authority: CN (China)
Prior art keywords: expression, facial, feature, face, expression recognition
Prior art date
Legal status: Granted
Application number: CN201810001358.4A
Other languages: Chinese (zh)
Other versions: CN108268838B (en)
Inventors: 付璐斯, 周盛宗, 于志刚
Current Assignee: Fujian Institute of Research on the Structure of Matter of CAS
Original Assignee: Fujian Institute of Research on the Structure of Matter of CAS
Priority date
Filing date
Publication date
Application filed by Fujian Institute of Research on the Structure of Matter of CAS
Priority to CN201810001358.4A
Publication of CN108268838A
Application granted
Publication of CN108268838B
Status: Active; anticipated expiration

Classifications

    • G — Physics; G06 — Computing; calculating or counting
    • G06V40/174 — Facial expression recognition (G06V — Image or video recognition or understanding; G06V40/16 — Human faces)
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F — Electric digital data processing; G06F18 — Pattern recognition)
    • G06F18/2411 — Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships

Landscapes

  • Engineering & Computer Science; Theoretical Computer Science; Physics & Mathematics; Data Mining & Analysis; Oral & Maxillofacial Surgery; Health & Medical Sciences; General Physics & Mathematics; General Health & Medical Sciences; Computer Vision & Pattern Recognition; Bioinformatics & Computational Biology; General Engineering & Computer Science; Evolutionary Computation; Evolutionary Biology; Bioinformatics & Cheminformatics; Artificial Intelligence; Life Sciences & Earth Sciences; Human Computer Interaction; Multimedia; Image Analysis; Image Processing

Abstract

The present application discloses a facial expression recognition method, which comprises: detecting a face in an original image; performing face alignment and feature point localization on the detected face; extracting facial feature information from the face image; and classifying the expression according to the acquired feature data to accomplish facial expression recognition. Through face detection, feature point localization, feature extraction, and expression classification, the present application predicts the most likely facial expression, ensuring the accuracy of expression recognition, and has broad application prospects.

Description

Facial expression recognition method and facial expression recognition system

Technical field

The present application relates to a facial expression recognition method and a facial expression recognition system, and belongs to the technical field of facial expression recognition.

Background

The generation of human emotion is a very complex psychological process, and emotion is expressed in many ways. Computer scientists chiefly study three channels of expression: facial expression, speech, and body movement. Of these three, facial expression contributes up to 55% of the emotional signal. As human-computer interaction technology becomes more and more widely applied, facial expression recognition is of great significance in that field. As one of the main research topics in pattern recognition and machine learning, a large number of facial expression recognition algorithms have already been proposed.

However, facial expression recognition technology also has its weaknesses: 1. Variation across individuals: facial expressions differ with each person's manner of expressing them. 2. Variation within one individual: the same person's expression changes with context in real life, in real time. 3. External conditions, such as background, illumination, viewing angle, and distance, strongly affect expression recognition. All of the above affect the accuracy of facial expression recognition.

Summary of the invention

In view of the technical problem in the prior art that facial expression recognition is imprecise, which affects its accuracy, the purpose of the present application is to provide a facial expression recognition method and system that can recognize expressions accurately.

To achieve the above object, the present invention provides a facial expression recognition method.

The facial expression recognition method is characterized by comprising:

detecting a face in an original image;

performing face alignment and feature point localization on the detected face;

extracting facial feature information from the face image;

classifying the expression according to the acquired feature data to accomplish facial expression recognition.

Face detection: detecting the presence of a face in original images of various scenes and accurately separating out the face region.

Further, detecting a face in the original image comprises:

scanning the original image line by line with the local binary pattern operator to obtain a response image;

performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face;

performing eye detection with the AdaBoost algorithm to separate out the face region.

Optionally, during detection with the AdaBoost algorithm, multi-scale detection is performed according to 1.25-0.9.

Further, performing face alignment and feature point localization on the detected face comprises:

annotating the facial feature points with a constrained local model.

Optionally, the facial feature points are annotated with the constrained local model; after the feature point coordinates are obtained, regions reflecting the differences between the various expressions are selected, and two types of features are extracted: deformation-based expression features and motion-based expression features;

recursive feature elimination with a linear support vector machine is used for feature evaluation, and further feature selection is performed on the selected features.

Further, extracting facial feature information from the face image comprises:

selecting regions reflecting the differences between the various expressions, and extracting two types of features: deformation-based expression features and motion-based expression features;

using recursive feature elimination with a linear support vector machine for feature evaluation, and performing further feature selection on the selected features.

Optionally, the regions reflecting the differences between the various expressions include the eyes, the nose tip, the mouth corners, the eyebrows, and the contour points of the facial components.

Further, extracting facial feature information from the face image also comprises: performing feature selection on the extracted facial feature information, obtaining a facial feature subset, and saving the facial feature information for expression recognition.

Further, classifying the expression according to the acquired feature data to accomplish facial expression recognition comprises:

selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample corresponding to an expression label;

classifying the expression through the expression classifier using the least-squares rule.

Further, classifying the expression according to the acquired feature data to accomplish facial expression recognition also comprises:

building a basis vector space from expression features with known labels; the expression under test is assigned a category by projecting its features onto this space, accomplishing facial expression recognition.

As a specific embodiment, the facial expression recognition method comprises the following steps: (1) detecting a face in an original image; (2) performing face alignment and feature point localization on the detected face; (3) extracting facial feature information from the face image; (4) classifying the expression according to the acquired feature data to accomplish facial expression recognition.

Step (1) further comprises: (11) scanning the original image line by line with the local binary pattern operator to obtain a response image; (12) performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face; (13) performing eye detection with the AdaBoost algorithm to separate out the face region.

Further, during face detection or eye detection with the AdaBoost algorithm, multi-scale detection is performed according to 1.25-0.9.

Step (2) further comprises: annotating the facial feature points with a constrained local model.

Step (3) further comprises: (31) selecting the mouth, the eyebrows, and the eyes as the three main regions reflecting the differences between the various expressions, and extracting two types of features: deformation-based expression features and motion-based expression features; (32) using recursive feature elimination with a linear support vector machine for feature evaluation and performing further feature selection on the selected features.

Further, feature selection is performed on the extracted facial feature information, a facial feature subset is obtained, and the facial feature information is saved for expression recognition.

Step (4) further comprises: (41) selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample corresponding to an expression label; (42) classifying the expression through the expression classifier using the least-squares rule.

Further, a basis vector space is built from expression features with known labels; the expression under test is assigned a category by projecting its features onto this space, accomplishing facial expression recognition.
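The projection-based classification described above can be read as a nearest-subspace scheme: the labelled expression features of each class form a basis, and a test feature is assigned to the class whose basis reconstructs it with the smallest least-squares residual. A minimal sketch under that reading (the function name and toy features are illustrative assumptions, not the patent's exact formulation):

```python
import numpy as np

def classify_expression(train_feats, train_labels, x):
    """Assign x to the expression class whose labelled feature vectors,
    used as basis columns, reconstruct x with the smallest
    least-squares residual (nearest-subspace classification)."""
    best_label, best_res = None, np.inf
    for label in set(train_labels):
        # columns = feature vectors belonging to this expression class
        A = np.stack([f for f, l in zip(train_feats, train_labels)
                      if l == label], axis=1)
        coeff, *_ = np.linalg.lstsq(A, x, rcond=None)
        res = np.linalg.norm(A @ coeff - x)
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

A feature lying in the span of one class's basis yields a near-zero residual for that class and is classified accordingly.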

In another aspect, the present application provides a facial expression recognition system, characterized in that the system comprises: a face detection module, a feature point localization module, a feature extraction module, and a facial expression recognition module.

The face detection module is used to detect a face in the original image.

The feature point localization module is connected with the face detection module and is used to perform face alignment and feature point localization on the detected face.

The feature extraction module is connected with the feature point localization module and is used to extract facial feature information from the face image.

The facial expression recognition module is connected with the feature extraction module and is used, according to the extracted facial feature information, to run the facial expression data to be recognized through the trained expression classifier for a maximum-likelihood prediction, finding the most probable expression category and accomplishing facial expression recognition.

Optionally, the face detection module scans the original image line by line with the local binary pattern operator to obtain a response image;

performs face detection on the response image with the AdaBoost algorithm to detect the presence of a face;

and performs eye detection with the AdaBoost algorithm to separate out the face region.

Optionally, during detection with the AdaBoost algorithm, multi-scale detection is performed according to 1.25-0.9.

Optionally, the feature point localization module annotates the facial feature points with a constrained local model.

Optionally, the feature extraction module selects regions reflecting the differences between the various expressions, and extracts two types of features: deformation-based expression features and motion-based expression features;

it uses recursive feature elimination with a linear support vector machine for feature evaluation, performing further feature selection on the selected features.

Optionally, the regions reflecting the differences between the various expressions include at least one of the mouth, the eyebrows, the eyes, and the nose tip.

Optionally, the feature extraction module performs feature selection on the extracted facial feature information, obtains a facial feature subset, and saves the facial feature information for expression recognition.

Optionally, the facial expression recognition module classifies the expression according to the acquired feature data to accomplish facial expression recognition, which comprises: selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample corresponding to an expression label;

and classifying the expression through the expression classifier using the least-squares rule.

Optionally, the facial expression recognition module builds a basis vector space from expression features with known labels; the expression under test is assigned a category by projecting its features onto this space, accomplishing facial expression recognition.

The beneficial effects that the present application can produce include the following:

Through face detection, feature point localization, feature extraction, and expression classification, the present application predicts the most likely facial expression, ensuring the accuracy of expression recognition, and has broad application prospects.

Brief description of the drawings

Fig. 1 is a schematic flow chart of the facial expression recognition method described in the present application.

Fig. 2 is a schematic diagram of the architecture of the facial expression recognition system described in the present application.

Detailed description of the embodiments

The present application is described in detail below in conjunction with the embodiments, but the present application is not limited to these embodiments.

Embodiment 1

The facial expression recognition method and system provided by the present invention are described in detail below in conjunction with the accompanying drawings.

Referring to Fig. 1, a schematic flow chart of the facial expression recognition method of the present invention, the method comprises the following steps: S11: detecting a face in an original image; S12: performing face alignment and feature point localization on the detected face; S13: extracting facial feature information from the face image; S14: classifying the expression according to the acquired feature data to accomplish facial expression recognition. These steps are described in detail below in conjunction with the drawings.

S11: Detect a face in the original image.

Face detection: detecting the presence of a face in original images of various scenes and accurately separating out the face region. As a preferred embodiment, step S11 can be completed by the following sub-steps: 11) scanning the original image line by line with the local binary pattern operator to obtain a response image; 12) performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face; 13) performing eye detection with the AdaBoost algorithm to separate out the face region.

As an effective texture descriptor, the local binary pattern (LBP) has an excellent ability to describe the local texture features of an image. Applying the LBP operator is similar to a template operation in filtering: the original image is scanned line by line; for each pixel of the original image, its gray value is taken as a threshold and its 8 neighbours in the surrounding 3×3 window are binarized; the binarized bits are assembled in a fixed order into an 8-bit binary number, whose value (0-255) is taken as the response at that point.

Table 1 shows the gray values of an original image in one example. For the centre of the 3×3 region in Table 1, its gray value 88 is taken as the threshold to binarize its 8 neighbours; starting from the top-left point and proceeding clockwise (any order may be used, as long as it is consistent), the binarized bits form the binary number 10001011, i.e. 139 in decimal, which is the response at the centre. After the whole line-by-line scan is finished, an LBP response image is obtained, which can serve as a feature for subsequent work; the gray values of the resulting response image are shown in Table 2.

180  52   5
213  88   79
158  84   156

Table 1. Gray values of the original image in one example.

1    0    0
1    139  0
1    0    1

Table 2. Gray values of the resulting response image.
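The LBP operator described above (and worked through in Tables 1 and 2) can be sketched as follows; this is a minimal per-pixel implementation, not an optimized one:

```python
import numpy as np

def lbp_response(image):
    """LBP operator as described above: for each interior pixel, threshold
    its 8 neighbours in the 3x3 window against the centre value and pack
    the bits, clockwise from the top-left neighbour, into one byte (0-255)."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # clockwise neighbour offsets starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                code = (code << 1) | int(img[y + dy, x + dx] >= img[y, x])
            out[y, x] = code
    return out
```

For the 3×3 image of Table 1 the centre response is 10001011 in binary, i.e. 139, matching Table 2.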

The AdaBoost algorithm was proposed by Freund and Schapire on the basis of the online allocation algorithm. AdaBoost allows the designer to keep adding new weak classifiers until some predetermined, sufficiently small error rate is reached. In AdaBoost every training sample is assigned a weight indicating the probability that it will be selected into the training set of a component classifier. If a sample has already been classified accurately, its probability of being selected when the next training set is constructed is lowered; conversely, if a sample has not been classified correctly, its weight is raised. Through T rounds of such training, AdaBoost is able to focus on the harder samples and combine the weak classifiers into a strong classifier for target detection.

The AdaBoost algorithm is described as follows:

1) Given a labelled training sample set (x_1, y_1), (x_2, y_2), …, (x_L, y_L), where g_j(x_i) denotes the j-th Haar-like feature of the i-th training image, x_i ∈ X is an input training sample, and y_i ∈ Y = {1, −1} marks true and false samples respectively.

2) Initialize the weights w_{1,i} = 1/2m for true samples and 1/2n for false samples, where m and n are the numbers of true and false samples respectively, and the total number of samples is L = m + n.

3) For T rounds of training, for t = 1, 2, …, T:

Normalize the weights of all samples:

w_{t,i} ← w_{t,i} / Σ_{j=1}^{L} w_{t,j}

For the j-th Haar-like feature of each sample, a simple classifier can be obtained, i.e. the threshold θ_j and the parity p_j are determined so that the error ε_j is minimized:

ε_j = Σ_{i=1}^{L} w_i · [h_j(x_i) ≠ y_i],  where h_j(x) = 1 if p_j g_j(x) < p_j θ_j and h_j(x) = −1 otherwise.

The parity p_j determines the direction of the inequality and takes only the two values ±1.

Among the simple classifiers so determined, find the weak classifier h_t with the smallest error ε_t.

4) Update the weights of all samples:

w_{t+1,i} = w_{t,i} β_t^{1−e_i}

where β_t = ε_t / (1 − ε_t); e_i = 0 if x_i is correctly classified by h_t, and e_i = 1 otherwise.

5) The strong classifier finally obtained is:

H(x) = sign( Σ_{t=1}^{T} α_t h_t(x) )

where α_t = ln(1/β_t) weighs h_t according to its prediction error.

With the above steps the face can now be detected. During detection, multi-scale detection can be performed according to 1.25-0.9; finally the windows are merged and the result is output.

On the basis of the detected face, the AdaBoost algorithm is applied to eye detection. The basic principle of eye detection is the same as that of face detection and is not repeated here. During eye detection, multi-scale detection can be performed according to 1.25-0.9, and a rejection mechanism can be established (for example based on the position, size, and other characteristics of the eyes).
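The training loop of steps 1)-5) above can be sketched with single-feature decision stumps standing in for the Haar-like weak classifiers; the exhaustive threshold search and the toy data in the test are illustrative assumptions, not the patent's cascade detector:

```python
import numpy as np

def adaboost_train(features, labels, n_rounds):
    """Minimal discrete AdaBoost. features: (L, J) matrix of values
    g_j(x_i); labels in {1, -1}. Weak learners are threshold stumps
    h(x) = 1 if p * g_j(x) < p * theta, else -1, as in step 3)."""
    L, J = features.shape
    m = int(np.sum(labels == 1))              # true samples
    n = L - m                                 # false samples
    w = np.where(labels == 1, 1.0 / (2 * m), 1.0 / (2 * n))
    stumps, alphas = [], []
    for _ in range(n_rounds):
        w = w / w.sum()                       # normalise the weights
        best = None
        for j in range(J):                    # search feature, threshold, parity
            for theta in np.unique(features[:, j]):
                for p in (1, -1):
                    pred = np.where(p * features[:, j] < p * theta, 1, -1)
                    eps = np.sum(w * (pred != labels))
                    if best is None or eps < best[0]:
                        best = (eps, j, theta, p, pred)
        eps, j, theta, p, pred = best
        eps = max(eps, 1e-10)                 # avoid division by zero
        beta = eps / (1 - eps)
        w = w * beta ** (pred == labels)      # e_i = 0 when correct
        stumps.append((j, theta, p))
        alphas.append(np.log(1 / beta))       # alpha_t = ln(1/beta_t)
    return stumps, np.array(alphas)

def adaboost_predict(stumps, alphas, features):
    """Strong classifier: sign of the alpha-weighted vote of step 5)."""
    votes = np.zeros(features.shape[0])
    for (j, theta, p), a in zip(stumps, alphas):
        votes += a * np.where(p * features[:, j] < p * theta, 1, -1)
    return np.where(votes >= 0, 1, -1)
```

A real detector would use many Haar-like features over image windows and a multi-scale scan; the boosting mechanics are the same.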

S12: Perform face alignment and feature point localization on the detected face.

Feature point localization: automatically locating the key facial feature points in the input face image, such as the eyes, the nose tip, the mouth corners, the eyebrows, and the contour points of the facial components. As a preferred embodiment, step S12 can be completed as follows: the facial feature points are annotated with a constrained local model.

The constrained local model (CLM) performs facial feature point detection by initializing the position of the mean face and then letting each feature point of the mean face search for a match in its neighbourhood. The whole process has two stages: a model-building stage and a point-fitting stage. The model-building stage can in turn be divided into the construction of two different models: shape model construction and patch model construction. Shape model construction models the shape of the face model and describes the rules that shape variation follows. The patch model models the neighbourhood around each feature point and establishes a feature point matching criterion for judging the best match of a feature point.

The constrained local model (CLM) algorithm is described as follows:

1) Shape model construction

Compute the mean shape of all aligned face samples in the training set. Suppose there are M images, each with N feature points, the coordinates of a feature point being (x_i, y_i); the vector formed by the coordinates of the N feature points of one image is written x = [x_1 y_1 x_2 y_2 … x_N y_N]^T. The mean face over all images is then:

x̄ = (1/M) Σ_{i=1}^{M} x_i

Computing the difference between the shape vector of every sample image and the mean face gives a zero-mean shape variation matrix X:

X = [x_1 − x̄, x_2 − x̄, …, x_M − x̄]

Applying a PCA transform to the matrix X yields the principal components that determine the variation of the face shape, i.e. the eigen-decomposition of the covariance matrix:

(1/M) X X^T p_i = λ_i p_i

This gives the main eigenvalues λ_i and the corresponding eigenvectors p_i. Because the eigenvectors corresponding to the larger eigenvalues generally contain the main information of the samples, the eigenvectors corresponding to the k largest eigenvalues are selected to form the orthogonal matrix P = (p_1, p_2, …, p_k).

The weight vector of the shape variation is b = (b_1, b_2, …, b_k)^T; each component of b gives the magnitude of the shape variation along the corresponding eigenvector direction:

b = P^T (x − x̄)

Then for an arbitrary face test image, the sample shape vector can be expressed as:

x ≈ x̄ + P b
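The shape model steps above (mean shape x̄, zero-mean matrix X, PCA, and the weight vector b) can be sketched directly with an eigen-decomposition; a minimal illustration with assumed function names:

```python
import numpy as np

def build_shape_model(shapes, k):
    """Shape model of step 1): shapes is an (M, 2N) matrix of aligned
    landmark vectors [x1 y1 ... xN yN]. Returns the mean shape and the
    k principal eigenvectors P, so any shape is approximated as
    mean + P @ b with b = P.T @ (x - mean)."""
    shapes = np.asarray(shapes, dtype=float)
    mean = shapes.mean(axis=0)
    X = shapes - mean                        # zero-mean shape variation
    cov = X.T @ X / shapes.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # keep the k largest
    P = eigvecs[:, order]
    return mean, P

def project_shape(x, mean, P):
    """Weight vector b for a new shape, and its reconstruction."""
    b = P.T @ (x - mean)
    return b, mean + P @ b
```

With k chosen to cover most of the variance, any plausible face shape is reconstructed closely from the mean shape and a few weights.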

2) Patch model construction

Suppose there are M face images among the training samples. On each image, N key facial feature points are selected, and around each feature point a patch region of fixed size is taken; the patch regions containing feature points are labelled as positive samples. Patches of the same size cut from non-feature-point regions are then labelled as negative samples.

Suppose each feature point has r patches in total, composed into a vector (x^(1), x^(2), …, x^(r))^T for each image in the sample set. The output then contains only positive and negative samples, i.e. a patch is either a feature-point region or a non-feature-point region. Thus y^(i) ∈ {−1, 1}, i = 1, 2, …, r, where y^(i) = 1 marks a positive sample and y^(i) = −1 a negative sample. The trained linear support vector machine is then:

f(x) = Σ_{i=1}^{M_s} α_i x_i^T x + b

where x_i denotes a subspace vector of the sample set, i.e. a support vector, α_i is its weight coefficient, M_s is the number of support vectors per feature point, and b is the offset. One obtains:

y^(i) = w^T · x^(i) + θ

where w^T = [w_1 w_2 … w_n] holds the weight coefficients and θ is the offset. In this way a patch model is established for each feature point.
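The patch model's decision function y = w^T x + θ can be illustrated with a tiny linear classifier; hinge-loss subgradient descent is used here as a stand-in for a real SVM solver, and the toy patches in the test are invented:

```python
import numpy as np

def train_patch_classifier(patches, labels, lr=0.01, epochs=200):
    """Learn w and theta so that y = w^T x + theta is positive on
    feature-point patches and negative on non-feature-point patches.
    Trained by subgradient descent on the regularized hinge loss
    (a stand-in for a QP-based linear SVM solver)."""
    X = np.asarray(patches, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    theta, lam = 0.0, 0.01
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + theta) < 1:    # margin violated
                w += lr * (yi * xi - lam * w)
                theta += lr * yi
            else:                            # only weight decay
                w -= lr * lam * w
    return w, theta

def patch_response(w, theta, patch):
    """Matching score y = w^T x + theta for a candidate patch."""
    return w @ np.asarray(patch, dtype=float) + theta
```

The sign of the response classifies a candidate patch; its magnitude serves as the matching score used when building the response map in the point-fitting stage.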

3) Point fitting

By searching locally within a constrained region around the currently estimated feature-point position, a similarity response map, denoted R(X, Y), is generated for each feature point.

A quadratic function is fitted to the response map. Assuming R(X, Y) attains its maximum at (x0, y0) within the neighbourhood, we fit a function to this position so that positions correspond one-to-one with the response values R(X, Y). The quadratic function can be written as:

r(x, y) = a(x − x0)^2 + b(y − y0)^2 + c

where a, b, and c are the coefficients of the quadratic function. They are found by minimizing the error between r(x, y) and R(X, Y), i.e. by a least-squares computation:

With the parameters a, b, and c, r(x, y) becomes an objective cost function of the feature-point position; adding a deformation-constraint cost function yields the objective function for the feature-point search, as follows:

Each optimization of this objective function yields a new feature-point position; the update is iterated until convergence to the maximum, which completes the face point fitting.
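
The least-squares fit of r(x, y) = a(x − x0)^2 + b(y − y0)^2 + c can be sketched with `numpy.linalg.lstsq`; the response map below is synthetic, so the fit recovers the generating coefficients exactly:

```python
import numpy as np

# Synthetic response map peaking at (x0, y0) = (2, 3) on a 7x7 grid.
x0, y0 = 2.0, 3.0
xs, ys = np.meshgrid(np.arange(7), np.arange(7), indexing="ij")
R = -0.5 * (xs - x0) ** 2 - 0.8 * (ys - y0) ** 2 + 1.0

# Design matrix for r(x, y) = a (x - x0)^2 + b (y - y0)^2 + c.
A = np.column_stack([
    ((xs - x0) ** 2).ravel(),
    ((ys - y0) ** 2).ravel(),
    np.ones(xs.size),
])
coef, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
a, b, c = coef   # recovers a = -0.5, b = -0.8, c = 1.0 for this synthetic map
```

Because the model is linear in (a, b, c) once (x0, y0) is fixed, the fit is a single linear least-squares solve, exactly the computation the section names.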

S13: Extract facial feature information from the face image.

Feature extraction: extracting representative feature information of the face from the normalized face image. As a preferred implementation, step S13 can be completed by the following steps: (31) select the mouth, eyebrows, and eyes as the three main regions that reflect the differences between expression classes, and extract two types of features, deformation-based expression features and motion-based expression features; (32) use recursive feature elimination with a linear support vector machine for feature evaluation, and further perform feature selection on the selected features.

Using the local constraint model to annotate the facial feature points, once the feature-point coordinates are obtained, the shape features of the three main regions (mouth, eyebrows, eyes) are selected, i.e. the slope information between key points within these three regions is computed, extracting the deformation-based expression features. At the same time, the key points in the three regions are tracked to extract the corresponding displacement information; the distances between specific feature points of the expression image are also extracted and differenced against the neutral (calm) image to obtain the distance-change information, extracting the motion-based expression features.
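
A minimal sketch of the slope and distance-change features; the landmark names and coordinates below are invented for illustration and do not reflect the patent's actual landmark indexing:

```python
import numpy as np

def slope(p, q):
    """Slope of the line through landmarks p and q (dy/dx)."""
    return (q[1] - p[1]) / (q[0] - p[0])

def distance(p, q):
    """Euclidean distance between two landmarks."""
    return float(np.hypot(q[0] - p[0], q[1] - p[1]))

# Illustrative mouth-corner landmarks for a neutral and an expression image.
neutral = {"left_corner": (10.0, 20.0), "right_corner": (30.0, 20.0)}
smile   = {"left_corner": (9.0, 18.0),  "right_corner": (31.0, 18.0)}

# Deformation feature: slope between two key points in the expression image.
s = slope(smile["left_corner"], smile["right_corner"])

# Motion feature: change of the corner-to-corner distance relative to the
# neutral (calm) image, as the text describes.
d_change = (distance(smile["left_corner"], smile["right_corner"])
            - distance(neutral["left_corner"], neutral["right_corner"]))
```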

Recursive feature elimination with a linear support vector machine is used for feature evaluation, with the weights computed by the support vector machine as the ranking criterion, to further denoise the selected features.

The feature selection algorithm is described as follows:

Input: the training sample set, where l is the number of classes.

Output: the feature ranking set R.

1) Initialize the original feature set S = {1, 2, …, D} and the feature ranking set R = [].

2) Generate l(l − 1)/2 training samples:

find all pairwise combinations of different classes in the training samples to obtain the final training samples.

Repeat the following process until S = []:

3) Obtain the l(l − 1)/2 training subsamples Xj (j = 1, 2, …, l(l − 1)/2);

train a support vector machine with each Xj, obtaining the weight vectors wj (j = 1, 2, …, l(l − 1)/2);

compute the ranking-criterion score;

find the feature p with the smallest ranking-criterion score;

update the feature ranking set: R = {p} ∪ R;

remove this feature from S: S = S \ p.
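
The elimination loop above can be sketched as follows. An ordinary least-squares linear model stands in for the linear SVM, and using (w_p)^2 as the score is an assumption about the elided ranking criterion:

```python
import numpy as np

def rfe_ranking(X, y):
    """Rank features by repeatedly removing the one with the smallest w_p^2.

    X: (n_samples, n_features) data, y: (n_samples,) labels in {-1, +1}.
    Returns the ranking set R, last-removed (most important) feature first.
    """
    S = list(range(X.shape[1]))   # surviving feature indices
    R = []                        # feature ranking set
    while S:
        Xs = X[:, S]
        # Least-squares linear model as a stand-in for the SVM weight vector.
        w, *_ = np.linalg.lstsq(Xs, y.astype(float), rcond=None)
        p = S[int(np.argmin(w ** 2))]   # smallest ranking-criterion score
        R.insert(0, p)                  # R = {p} ∪ R
        S.remove(p)                     # S = S \ p
    return R

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 5))
# Only feature 2 carries the label signal in this illustrative data.
y = np.where(X[:, 2] + 0.1 * rng.normal(size=40) > 0, 1, -1)
ranking = rfe_ranking(X, y)
# feature 2 is expected to survive longest, i.e. appear first in the ranking
```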

S14: Classify expressions according to the acquired feature data to realize facial expression recognition.

Classification: human expressions are roughly divided into seven classes: joy, anger, sadness, disgust, surprise, fear, and neutral. As a preferred implementation, step S14 can be completed by the following steps: (41) according to the extracted facial feature information, select samples and train an expression classifier using prior knowledge, with each sample corresponding to an expression label; (42) realize expression classification through the expression classifier using the least-squares rule.

Training the expression classifier: the extracted facial features are trained with the support vector machine algorithm; once training is complete, an expression classifier is obtained.

Description of the support vector machine (SVM) algorithm:

Input: the training set T = {(xi, yi)}, i = 1, 2, …, N, where xi ∈ RD, yi ∈ {+1, −1}, xi is the i-th sample, N is the sample size, and D is the number of sample features. The SVM seeks the optimal separating hyperplane w·x + b = 0.

The optimization problem the SVM needs to solve is the soft-margin objective min (1/2)‖w‖^2 + C Σi ξi over (w, b, ξ):

s.t. yi(w·xi + b) ≥ 1 − ξi, i = 1, 2, …, N

ξi ≥ 0, i = 1, 2, …, N

The primal problem can then be transformed into its dual problem:

where αi are the Lagrange multipliers.

Finally, the solution for w is w = Σi αi yi xi.

The discriminant function of the SVM is f(x) = sign(w·x + b).

Expression classification: the extracted facial feature information is fed into the trained classifier, which outputs an expression prediction. That is, the least-squares rule is applied to find the best functional match to the data by minimizing the sum of squared errors. At this point, a complete facial expression recognition pipeline is finished.
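
The solution w = Σ αi yi xi and the discriminant sign(w·x + b) can be illustrated directly; the support vectors and multipliers below are made up for illustration rather than obtained by actually solving the dual problem:

```python
import numpy as np

# Illustrative support vectors, labels, and Lagrange multipliers
# (the alpha_i would normally come from solving the dual problem).
X_sv  = np.array([[1.0, 1.0], [2.0, 0.0], [-1.0, -1.0]])
y_sv  = np.array([1.0, 1.0, -1.0])
alpha = np.array([0.3, 0.2, 0.5])

# w = sum_i alpha_i y_i x_i, the solution for w given in the text.
w = (alpha * y_sv) @ X_sv

# Offset chosen so the first support vector lies on the margin (y = +1).
b = y_sv[0] - w @ X_sv[0]

def decide(x):
    """SVM discriminant: sign(w . x + b)."""
    return int(np.sign(w @ x + b))
```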

Referring to Fig. 2, a schematic diagram of the architecture of the facial expression recognition system of the present invention: the system comprises a face detection module 21, a feature point positioning module 22, a feature extraction module 23, and a facial expression recognition module 24.

The face detection module 21 is used to detect human faces from the original image. The face detection module 21 can scan the original image line by line based on local binary patterns to obtain a response image; perform face detection on the response image with the AdaBoost algorithm to detect the presence of a face; and then perform eye detection with the AdaBoost algorithm to separate the face region. For the specific implementation of face detection, refer to the method flow described above, which is not repeated here.
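
As a rough sketch of the line-by-line local-binary-pattern scan in module 21 (the 8-neighbour formulation below is the common LBP variant; the patent does not spell out which variant it uses):

```python
import numpy as np

def lbp_pixel(img, r, c):
    """8-neighbour LBP code of pixel (r, c): each neighbour >= centre sets a bit."""
    centre = img[r, c]
    neighbours = [img[r-1, c-1], img[r-1, c], img[r-1, c+1], img[r, c+1],
                  img[r+1, c+1], img[r+1, c], img[r+1, c-1], img[r, c-1]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= centre)

def lbp_response(img):
    """Scan the image row by row, producing the LBP response image (interior pixels only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r - 1, c - 1] = lbp_pixel(img, r, c)
    return out

img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8)
# centre 5: bits are set for the neighbours 6, 9, 8, 7 (those >= 5)
```

The AdaBoost cascade would then run over this response image rather than over raw pixels.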

The feature point positioning module 22 is connected to the face detection module 21 and is used for face alignment and feature point positioning of the detected face. The local constraint model is used to annotate the facial feature points and locate the key facial points, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of the parts of the face. For the specific implementation of feature point positioning, refer to the method flow described above, which is not repeated here.

The feature extraction module 23 is connected to the feature point positioning module 22 and is used to extract facial feature information from the face image. The feature extraction module 23 can select the mouth, eyebrows, and eyes as the three main regions reflecting the differences between expression classes and extract two types of features, deformation-based expression features and motion-based expression features; it then uses recursive feature elimination with a linear support vector machine for feature evaluation and performs further feature selection on the selected features. In the feature extraction stage, feature selection is performed on the extracted facial feature information, a facial feature subset is obtained, and the facial feature information is saved for expression recognition. For the specific implementation, refer to the method flow described above, which is not repeated here.

The facial expression recognition module 24 is connected to the feature extraction module 23 and is used to classify expressions according to the acquired feature data to realize facial expression recognition. The facial expression recognition module 24 can select samples according to the extracted facial feature information and train an expression classifier using prior knowledge, with each sample corresponding to an expression label; expression classification is then realized through the expression classifier using the least-squares rule. The classification process builds a basis vector space from expression features with known labels; the expression to be tested is assigned an expression class by projecting its features into this space, performing facial expression recognition. For the specific implementation, refer to the method flow described above, which is not repeated here.

Embodiment 2: Facial expression recognition method

The facial expression recognition method in this embodiment comprises the following steps:

Step 11: Detect the human face from the original image.

In this step, one specific implementation comprises steps 101, 102, and 103.

Step 101: Scan the original image line by line based on local binary patterns to obtain a response image.

Step 102: Perform face detection on the response image with the AdaBoost algorithm to detect the presence of a face.

Step 103: Perform eye detection with the AdaBoost algorithm to separate the face region.

In one specific implementation, multi-scale detection with scale factors of 1.25-0.9 is performed during face detection or eye detection with the AdaBoost algorithm.
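
A hedged sketch of the multi-scale sweep; reading "1.25-0.9" as a geometric sequence of scale factors from 1.25 down to 0.9, with a 0.95 step, is an assumption, since the patent does not define the stepping:

```python
def detection_scales(start=1.25, stop=0.9, factor=0.95):
    """Geometric sequence of scale factors from `start` down to `stop`."""
    scales = []
    s = start
    while s >= stop:
        scales.append(round(s, 4))
        s *= factor
    return scales

# Each scale would resize the image before running the AdaBoost cascade on it.
scales = detection_scales()
```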

Step 12: Perform face alignment and feature point positioning on the detected face.

In this step, one specific implementation is to annotate the facial feature points with a local constraint model.

Step 13: Extract facial feature information from the face image.

In this step, one specific implementation comprises steps 301 and 302.

Step 301: Select the mouth, eyebrows, and eyes as the three main regions reflecting the differences between expression classes, and extract two types of features, deformation-based expression features and motion-based expression features.

In this step, another specific implementation is: select the eyes, nose tip, mouth corners, eyebrows, and the contour points of the parts of the face as the main regions reflecting the differences between expression classes, and extract two types of features, deformation-based expression features and motion-based expression features.

Step 302: Use recursive feature elimination with a linear support vector machine for feature evaluation, and further perform feature selection on the selected features.

In one specific implementation, feature selection is performed on the extracted facial feature information, a facial feature subset is obtained, and the facial feature information is saved for expression recognition.

Step 14: Classify expressions according to the acquired feature data to realize facial expression recognition.

In this step, one specific implementation comprises steps 401 and 402.

Step 401: According to the extracted facial feature information, select samples and train an expression classifier using prior knowledge, with each sample corresponding to an expression label.

Step 402: Realize expression classification through the expression classifier using the least-squares rule.

In one specific implementation, a basis vector space is built from expression features with known labels; the expression to be tested is assigned an expression class by projecting its features into this space, performing facial expression recognition.
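
The "basis vector space of labelled expression features" step can be sketched as a projection-and-argmax lookup; the basis vectors, labels, and normalization below are simplified stand-ins, since the patent does not give the exact construction:

```python
import numpy as np

# Illustrative labelled feature vectors (rows) forming the basis space.
basis = np.array([
    [1.0, 0.0, 0.0],   # "joy"
    [0.0, 1.0, 0.0],   # "sadness"
    [0.0, 0.0, 1.0],   # "surprise"
])
labels = ["joy", "sadness", "surprise"]

def classify(feat):
    """Project the test feature onto each basis vector and pick the largest."""
    feat = feat / np.linalg.norm(feat)
    projections = basis @ feat
    return labels[int(np.argmax(projections))]

print(classify(np.array([0.9, 0.1, 0.2])))  # closest to the "joy" direction
```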

The algorithms involved in this embodiment are the same as in Embodiment 1.

Embodiment 3: Facial expression recognition system

The facial expression recognition system in this embodiment comprises: a face detection module, a feature point positioning module, a feature extraction module, and a facial expression recognition module.

The face detection module is used to detect the human face from the original image.

In one specific implementation, the face detection module scans the original image line by line based on local binary patterns to obtain a response image;

performs face detection on the response image with the AdaBoost algorithm to detect the presence of a face;

and performs eye detection with the AdaBoost algorithm to separate the face region.

In one specific implementation, multi-scale detection with scale factors of 1.25-0.9 is performed during detection with the AdaBoost algorithm.

The feature point positioning module is connected to the face detection module and is used for face alignment and feature point positioning of the detected face.

In one specific implementation, the feature point positioning module annotates the facial feature points with a local constraint model.

The feature extraction module is connected to the feature point positioning module and is used to extract facial feature information from the face image.

In one specific implementation, the feature extraction module selects regions reflecting the differences between expression classes and extracts two types of features, deformation-based expression features and motion-based expression features;

it uses recursive feature elimination with a linear support vector machine for feature evaluation and performs further feature selection on the selected features.

In one specific implementation, the regions selected to reflect the differences between expression classes include the eyes, nose tip, mouth corners, eyebrows, and the contour points of the parts of the face.

The facial expression recognition module is connected to the feature extraction module and is used, according to the extracted facial feature information, to predict the facial expression data to be recognized with the trained expression classifier at maximum likelihood and find the most probable expression class, realizing facial expression recognition.

In one specific implementation, the facial expression recognition module classifies expressions according to the acquired feature data; realizing facial expression recognition comprises: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, with each sample corresponding to an expression label;

and realizing expression classification through the expression classifier using the least-squares rule.


The facial expression recognition module builds a basis vector space from expression features with known labels; the expression to be tested is assigned an expression class by projecting its features into this space, performing facial expression recognition.

The algorithms involved in this embodiment are the same as in Embodiment 1.

The above are only a few embodiments of the present application and do not limit the application in any form. Although the application is disclosed above with preferred embodiments, they are not intended to limit it; any person skilled in the art may, without departing from the scope of the technical solution of the present application, make minor changes or modifications using the technical content disclosed above, and such equivalent implementations all fall within the scope of the technical solution.

Claims (10)

1. A facial expression recognition method, characterized by comprising: detecting a human face from an original image; performing face alignment and feature point positioning on the detected face; extracting facial feature information from the face image; and classifying expressions according to the acquired feature data to realize facial expression recognition.

2. The method according to claim 1, wherein detecting a human face from the original image comprises: scanning the original image line by line based on local binary patterns to obtain a response image; performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face; and performing eye detection with the AdaBoost algorithm to separate the face region; preferably, multi-scale detection with scale factors of 1.25-0.9 is performed during detection with the AdaBoost algorithm.

3. The method according to claim 1, wherein performing face alignment and feature point positioning on the detected face comprises: annotating the facial feature points with a local constraint model.

4. The method according to claim 1, wherein extracting facial feature information from the face image comprises: selecting regions reflecting the differences between expression classes and extracting two types of features, deformation-based expression features and motion-based expression features; and using recursive feature elimination with a linear support vector machine for feature evaluation and performing further feature selection on the selected features; preferably, the regions reflecting the differences between expression classes include the eyes, nose tip, mouth corners, eyebrows, and the contour points of the parts of the face; preferably, extracting facial feature information from the face image further comprises: performing feature selection on the extracted facial feature information, obtaining a facial feature subset, and saving the facial feature information for expression recognition.

5. The method according to claim 1, wherein classifying expressions according to the acquired feature data to realize facial expression recognition comprises: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, with each sample corresponding to an expression label; and realizing expression classification through the expression classifier using the least-squares rule; preferably, classifying expressions according to the acquired feature data to realize facial expression recognition further comprises: building a basis vector space from expression features with known labels, the expression to be tested being assigned an expression class by projecting its features into this space to perform facial expression recognition.

6. A facial expression recognition system, characterized in that the system comprises: a face detection module, a feature point positioning module, a feature extraction module, and a facial expression recognition module; the face detection module is used to detect a human face from an original image; the feature point positioning module is connected to the face detection module and is used for face alignment and feature point positioning of the detected face; the feature extraction module is connected to the feature point positioning module and is used to extract facial feature information from the face image; and the facial expression recognition module is connected to the feature extraction module and is used, according to the extracted facial feature information, to predict the facial expression data to be recognized with the trained expression classifier at maximum likelihood and find the most probable expression class, realizing facial expression recognition.

7. The system according to claim 6, wherein the face detection module scans the original image line by line based on local binary patterns to obtain a response image; performs face detection on the response image with the AdaBoost algorithm to detect the presence of a face; and performs eye detection with the AdaBoost algorithm to separate the face region.

8. The system according to claim 6, wherein the feature point positioning module annotates the facial feature points with a local constraint model.

9. The system according to claim 6, wherein the feature extraction module selects regions reflecting the differences between expression classes and extracts two types of features, deformation-based expression features and motion-based expression features; and uses recursive feature elimination with a linear support vector machine for feature evaluation and performs further feature selection on the selected features; preferably, the regions reflecting the differences between expression classes include the eyes, nose tip, mouth corners, eyebrows, and the contour points of the parts of the face; preferably, the feature extraction module performs feature selection on the extracted facial feature information, obtains a facial feature subset, and saves the facial feature information for expression recognition.

10. The system according to claim 6, wherein the facial expression recognition module classifies expressions according to the acquired feature data to realize facial expression recognition, comprising: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, with each sample corresponding to an expression label; and realizing expression classification through the expression classifier using the least-squares rule; preferably, the facial expression recognition module builds a basis vector space from expression features with known labels, and the expression to be tested is assigned an expression class by projecting its features into this space to perform facial expression recognition.
CN201810001358.4A 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system Active CN108268838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810001358.4A CN108268838B (en) 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810001358.4A CN108268838B (en) 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system

Publications (2)

Publication Number Publication Date
CN108268838A true CN108268838A (en) 2018-07-10
CN108268838B CN108268838B (en) 2020-12-29

Family

ID=62773093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810001358.4A Active CN108268838B (en) 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system

Country Status (1)

Country Link
CN (1) CN108268838B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409273A (en) * 2018-10-17 2019-03-01 中联云动力(北京)科技有限公司 A kind of motion state detection appraisal procedure and system based on machine vision
CN109712144A (en) * 2018-10-29 2019-05-03 百度在线网络技术(北京)有限公司 Processing method, training method, equipment and the storage medium of face-image
CN109948541A (en) * 2019-03-19 2019-06-28 西京学院 A facial emotion recognition method and system
CN109948672A (en) * 2019-03-05 2019-06-28 张智军 A kind of wheelchair control method and system
CN110020638A (en) * 2019-04-17 2019-07-16 唐晓颖 Facial expression recognizing method, device, equipment and medium
CN110059650A (en) * 2019-04-24 2019-07-26 京东方科技集团股份有限公司 Information processing method, device, computer storage medium and electronic equipment
CN110166836A (en) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 A kind of TV program switching method, device, readable storage medium storing program for executing and terminal device
CN110334643A (en) * 2019-06-28 2019-10-15 广东奥园奥买家电子商务有限公司 A kind of feature evaluation method and device based on recognition of face
CN110348899A (en) * 2019-06-28 2019-10-18 广东奥园奥买家电子商务有限公司 A kind of commodity information recommendation method and device
CN110941993A (en) * 2019-10-30 2020-03-31 东北大学 Dynamic Person Classification and Storage Method Based on Face Recognition
CN111144374A (en) * 2019-12-31 2020-05-12 泰康保险集团股份有限公司 Facial expression recognition method and device, storage medium and electronic equipment
WO2020125386A1 (en) * 2018-12-18 2020-06-25 深圳壹账通智能科技有限公司 Expression recognition method and apparatus, computer device, and storage medium
WO2020133072A1 (en) * 2018-12-27 2020-07-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for target region evaluation and feature point evaluation
CN112132117A (en) * 2020-11-16 2020-12-25 黑龙江大学 A Converged Identity Authentication System Assisting Coercion Detection
CN112307942A (en) * 2020-10-29 2021-02-02 广东富利盛仿生机器人股份有限公司 Facial expression quantitative representation method, system and medium
CN112560685A (en) * 2020-12-16 2021-03-26 北京嘀嘀无限科技发展有限公司 Facial expression recognition method and device and storage medium
US20210406525A1 (en) * 2019-06-03 2021-12-30 Tencent Technology (Shenzhen) Company Limited Facial expression recognition method and apparatus, electronic device and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN1996344A (en) * 2006-12-22 2007-07-11 北京航空航天大学 Method for extracting and processing human facial expression information
US20130163829A1 (en) * 2011-12-21 2013-06-27 Electronics And Telecommunications Research Institute System for recognizing disguised face using gabor feature and svm classifier and method thereof
CN104021384A (en) * 2014-06-30 2014-09-03 深圳市创冠智能网络技术有限公司 Face recognition method and device
CN104268580A (en) * 2014-10-15 2015-01-07 南京大学 Class cartoon layout image management method based on scene classification
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN105069447A (en) * 2015-09-23 2015-11-18 河北工业大学 Facial expression identification method
CN106022391A (en) * 2016-05-31 2016-10-12 哈尔滨工业大学深圳研究生院 Hyperspectral image characteristic parallel extraction and classification method
CN106407958A (en) * 2016-10-28 2017-02-15 南京理工大学 Double-layer-cascade-based facial feature detection method
US20170132408A1 (en) * 2015-11-11 2017-05-11 Samsung Electronics Co., Ltd. Methods and apparatuses for adaptively updating enrollment database for user authentication
CN106919884A (en) * 2015-12-24 2017-07-04 北京汉王智远科技有限公司 Human facial expression recognition method and device
CN106934375A (en) * 2017-03-15 2017-07-07 中南林业科技大学 The facial expression recognizing method of distinguished point based movement locus description
US20170301121A1 (en) * 2013-05-02 2017-10-19 Emotient, Inc. Anonymization of facial images


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KHAN, MASOOD MEHMOOD ET AL: "Automated Facial Expression Classification and Affect Interpretation Using Infrared Measurement of Facial Skin Temperature Variations", 《TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS》 *
PAUL VIOLA ET AL: "Rapid Object Detection Using a Boosted Cascade of Simple Features", 《PROCEEDINGS OF THE 2001 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
SHI, YATING ET AL: "Facial feature point localization algorithm based on mouth state constraints", 《CAAI TRANSACTIONS ON INTELLIGENT SYSTEMS》 *
JIANG, ZHENG: "Research and implementation of feature extraction algorithms in face recognition", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
MA, FEI: "Research on expression recognition based on geometric features", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409273A (en) * 2018-10-17 2019-03-01 中联云动力(北京)科技有限公司 A kind of motion state detection appraisal procedure and system based on machine vision
CN109712144A (en) * 2018-10-29 2019-05-03 百度在线网络技术(北京)有限公司 Processing method, training method, equipment and the storage medium of face-image
WO2020125386A1 (en) * 2018-12-18 2020-06-25 深圳壹账通智能科技有限公司 Expression recognition method and apparatus, computer device, and storage medium
CN113302619B (en) * 2018-12-27 2023-11-14 浙江大华技术股份有限公司 System and method for evaluating target area and characteristic points
US12026600B2 (en) 2018-12-27 2024-07-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for target region evaluation and feature point evaluation
WO2020133072A1 (en) * 2018-12-27 2020-07-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for target region evaluation and feature point evaluation
CN109948672A (en) * 2019-03-05 2019-06-28 张智军 A kind of wheelchair control method and system
CN109948541A (en) * 2019-03-19 2019-06-28 西京学院 A facial emotion recognition method and system
CN110166836B (en) * 2019-04-12 2022-08-02 深圳壹账通智能科技有限公司 A TV program switching method, device, readable storage medium and terminal device
CN110166836A (en) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 A kind of TV program switching method, device, readable storage medium storing program for executing and terminal device
CN110020638A (en) * 2019-04-17 2019-07-16 唐晓颖 Facial expression recognizing method, device, equipment and medium
CN110059650A (en) * 2019-04-24 2019-07-26 京东方科技集团股份有限公司 Information processing method, device, computer storage medium and electronic equipment
US20210406525A1 (en) * 2019-06-03 2021-12-30 Tencent Technology (Shenzhen) Company Limited Facial expression recognition method and apparatus, electronic device and storage medium
CN110348899A (en) * 2019-06-28 2019-10-18 广东奥园奥买家电子商务有限公司 A kind of commodity information recommendation method and device
CN110334643A (en) * 2019-06-28 2019-10-15 广东奥园奥买家电子商务有限公司 A kind of feature evaluation method and device based on recognition of face
CN110334643B (en) * 2019-06-28 2023-05-23 知鱼智联科技股份有限公司 Feature evaluation method and device based on face recognition
CN110941993A (en) * 2019-10-30 2020-03-31 东北大学 Dynamic Person Classification and Storage Method Based on Face Recognition
CN111144374A (en) * 2019-12-31 2020-05-12 泰康保险集团股份有限公司 Facial expression recognition method and device, storage medium and electronic equipment
CN111144374B (en) * 2019-12-31 2023-10-13 泰康保险集团股份有限公司 Facial expression recognition method and device, storage medium and electronic equipment
CN112307942A (en) * 2020-10-29 2021-02-02 广东富利盛仿生机器人股份有限公司 Facial expression quantitative representation method, system and medium
CN112132117A (en) * 2020-11-16 2020-12-25 黑龙江大学 A Converged Identity Authentication System Assisting Coercion Detection
CN112560685A (en) * 2020-12-16 2021-03-26 北京嘀嘀无限科技发展有限公司 Facial expression recognition method and device and storage medium

Also Published As

Publication number Publication date
CN108268838B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN108268838B (en) Facial expression recognition method and facial expression recognition system
Mahmood et al. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors
Jiang et al. Multi-layered gesture recognition with Kinect.
CN106682598B (en) Multi-pose face feature point detection method based on cascade regression
Ding et al. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN100478979C (en) Status identification method by using body information matched human face information
Dahmani et al. User-independent system for sign language finger spelling recognition
He et al. Counting and exploring sizes of Markov equivalence classes of directed acyclic graphs
WO2016110005A1 (en) Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
JP2008310796A (en) Computer implemented method for constructing classifier from training data detecting moving object in test data using classifier
CN112381047B (en) Enhanced recognition method for facial expression image
Youssif et al. Arabic sign language (arsl) recognition system using hmm
CN107220598B (en) Iris image classification method based on deep learning features and Fisher Vector coding model
Cheng et al. Chinese Sign Language Recognition Based on DTW‐Distance‐Mapping Features
CN114973389A (en) Eye movement tracking method based on coupling cascade regression
Li et al. A novel art gesture recognition model based on two channel region-based convolution neural network for explainable human-computer interaction understanding
Srininvas et al. A framework to recognize the sign language system for deaf and dumb using mining techniques
Caplier et al. Comparison of 2D and 3D analysis for automated cued speech gesture recognition
Curran et al. The use of neural networks in real-time face detection
Kawulok Energy-based blob analysis for improving precision of skin segmentation
Leung et al. Matching of complex patterns by energy minimization
Wagner et al. Framework for a portable gesture interface
El-Bashir et al. Face Recognition Model Based on Covariance Intersection Fusion for Interactive devices
Rasines et al. Feature selection for hand pose recognition in human-robot object exchange scenario

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant