WO2017167313A1 - Expression recognition method and device - Google Patents

Expression recognition method and device

Info

Publication number
WO2017167313A1
WO2017167313A1 (PCT/CN2017/079376, CN2017079376W)
Authority
WO
WIPO (PCT)
Prior art keywords
expression
key
facial
points
face
Prior art date
Application number
PCT/CN2017/079376
Other languages
English (en)
French (fr)
Inventor
陆平
杨帆
贾霞
郑文明
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2017167313A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression

Definitions

  • the present application relates to, but is not limited to, the field of communication technologies, and in particular, to an expression recognition method and apparatus.
  • in human communication, facial expressions account for roughly 55% of the conveyed effect, and facial expressions can be recognized from facial images, so facial images contain a considerable amount of information.
  • the present application provides an expression recognition method and apparatus for solving the problem that the facial expression cannot be quickly and accurately monitored in real time in the related art.
  • an embodiment of the present invention provides an expression recognition method, including: locating key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
  • extracting expression features separately from facial subspaces centered on each of the key expression points; and recognizing the facial expression according to the extracted expression features.
  • optionally, locating the key expression points of the face includes: locating the key expression points of the face by a CLM (Constrained Local Model) feature point detection method.
  • optionally, extracting expression features from the facial subspaces centered on each of the key expression points includes: establishing a facial subspace for each key expression point, centered on that point; and, by dynamically capturing facial expressions, extracting expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  • optionally, extracting expression features from the facial subspaces centered on each of the key expression points includes: centering on each of the key expression points and taking a preset length as the side length, establishing a proportional facial subspace for each key expression point at different zoom scales of the same expression image; and extracting expression features separately from the proportional facial subspaces.
  • optionally, extracting expression features separately from the proportional facial subspaces includes: by dynamically capturing facial expressions, extracting expression features separately from the proportional facial subspaces in the captured multi-frame images.
  • optionally, recognizing the facial expression according to the extracted expression features includes: classifying the extracted expression features with a classifier so as to recognize the facial expression.
  • the application further provides a computer readable storage medium storing computer executable instructions which, when executed, implement the above method.
  • correspondingly, an expression recognition device includes: a positioning unit configured to locate key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks; an extracting unit configured to extract expression features separately from facial subspaces centered on each of the key expression points; and a recognition unit configured to recognize the facial expression according to the extracted expression features.
  • the positioning unit is configured to locate each key expression point of the face by using a CLM feature point detection method.
  • the extracting unit includes: an establishing module, configured to establish a facial subspace for each of the key expression points, centered on that point; and an extracting module, configured to, by dynamically capturing facial expressions, extract expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  • the extracting unit includes: a proportion establishing module, configured to, centering on each of the key expression points and taking a preset length as the side length, establish a proportional facial subspace for each key expression point at different zoom scales of the same expression image; and a proportion extraction module, configured to extract expression features separately from the proportional facial subspaces.
  • the proportion extraction module is configured to, by dynamically capturing facial expressions, extract expression features separately from the proportional facial subspaces in the captured multi-frame images.
  • the identifying unit is configured to classify the extracted facial expression features by the classifier to identify facial expressions.
  • the expression recognition method and device provided by the embodiments of the present invention can locate the key expression points of a face, extract expression features separately from facial subspaces centered on each key expression point, and recognize the facial expression according to the extracted expression features.
  • in this way, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points.
  • FIG. 1 is a flow chart of an expression recognition method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a selection effect of face subspaces of different sizes in an embodiment of the present invention
  • FIG. 3 is a schematic diagram of recognition results of a real-time test under flash-light interference for the expression recognition method provided by an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of recognition results of a real-time test under non-frontal conditions for the expression recognition method provided by an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of recognition results of a real-time test under occlusion conditions for the expression recognition method provided by an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of an expression recognition apparatus according to an embodiment of the present invention.
  • an embodiment of the present invention provides an expression recognition method, including:
  • S11: locating key expression points of the face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
  • the expression recognition method provided by this embodiment of the invention can locate the key expression points of the face, extract expression features separately from facial subspaces centered on each key expression point, and recognize the facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.
  • the position of the point closely related to the expression in each position of the human face may be locked according to a plurality of algorithms, that is, the key expression points of the face are located.
  • the characteristics of these key expression points should vary according to the different expressions of the person.
  • for example, in one embodiment of the present invention, these key expression points are detected using CLM feature point detection, determining the coordinates of these key expression points.
  • These key expressions can cover the eyebrows, eyes, nose, mouth and cheeks.
  • optionally, 68 facial feature points are detected by the CLM feature point detection method; considering that the 17 points on the facial contour contribute essentially nothing to the expression, while the regions where feature points are dense are exactly where expression motion mostly occurs, the 17 points on the facial contour are ignored when extracting features and the remaining 51 points are used.
  • other methods may also be used to detect the key expression points, for example AAM (Active Appearance Model), ASM (Active Shape Model), or ESR (Explicit Shape Regression); the embodiments of the present invention are not limited in this respect.
  • after the key expression points have been detected, in step S12 the expression features can be extracted separately from the facial subspaces centered on each of the key expression points.
  • specifically, a facial subspace can first be established for each key expression point, centered on that point; then, by dynamically capturing the facial expressions, expression features are extracted separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  • optionally, the 51 key expression points detected in the foregoing embodiment are still taken as an example. Rectangular boxes can be established around these 51 points, and LBP features are extracted within these rectangles.
  • the only variable factor in this feature extraction scheme is the size of the rectangular box near the feature point.
  • to ensure that this feature extraction scheme remains robust across multiple scales in real-time scenes, a relative scale is used when delimiting regions at the key points. Specifically, the vertical coordinate difference D between the 28th and 31st key expression points (i.e., the point where the line between the two eyeballs crosses the nose bridge, and the nose tip) can be used as a normalization scale at the feature points.
  • within the rectangular region extending s*D above, below, to the left of, and to the right of each feature point (where s determines the relative size of the subspace), subspaces are set, LBP features are extracted in these subspaces, and the LBP features of each subspace are then cascaded.
  • since the facial expressions are captured dynamically, a series of images of the user's facial expressions or movements can be obtained; after expression features are extracted separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images, the resulting features have temporal continuity and causality and carry more effective information, so they can be used for expression recognition more accurately.
  • for example, in one embodiment of the present invention, extracting expression features from the facial subspaces centered on each of the key expression points may include:
  • centering on each of the key expression points and taking a preset length as the side length, establishing a proportional facial subspace for each key expression point at different zoom scales of the same expression image;
  • that is, the same frame is rescaled at different ratios during feature extraction, so that the subspaces obtained around a key expression point with the same preset side length are related yet different in size;
  • by exploiting this relatedness and difference, additional expression features of the facial subspaces can be obtained, so that the expression recognition has higher accuracy and better robustness.
  • the time factor may also be combined to obtain more expression feature information.
  • the facial features can be extracted separately from the proportional facial subspaces in the captured multi-frame images.
  • the facial expressions can be specifically identified based on the extracted expression features.
  • to simplify the computation during expression recognition, uniform LBP patterns can be used to reduce the statistical feature vector in each subspace from 256 dimensions to 59 dimensions; after cascading, the feature dimension per face is 59*51 = 3009, so further dimensionality reduction can be applied.
  • the method for dimension reduction of features in this embodiment is a PCA algorithm.
  • the dimension after reduction varies with the size of the feature vectors and the number of training samples; for example, with 500 CK+ face images as training samples, the feature dimension after PCA reduction (retaining 90% of the effective information) is about 400.
  • the expression can be classified for expression recognition.
  • these facial expressions can be classified using a variety of classifiers, such as a Decision Tree, KNN (K-Nearest Neighbour), Support Vector Machine (SVM), and the like.
  • SVM is used for classification.
  • SVM is based on Vapnik's structural risk minimization principle, maintains a good balance between classifier capacity and training error, and has a high capacity for learning and generalization. In other words, it not only handles small-sample problems but also works well in high-dimensional (even infinite-dimensional) spaces.
  • in addition, the support vector machine solves a convex optimization problem, so a local optimum is also the global optimum, which helps prevent over-fitting; this property is beyond the reach of many learning algorithms such as neural networks.
  • in the present application, a support vector machine with a radial basis function (RBF) kernel is used for classification.
  • the test is performed in the CK+ and PIE databases, and the test results are shown in Table 2 and Table 3.
  • Table 2 compares the recognition rate of the present application with that of the commonly used LBP feature extraction scheme at different deflection angles of the Multi-PIE database.
  • Table 3 compares the average recognition rate of the present application with that of the commonly used LBP feature extraction scheme under the 43 illumination conditions of the PIE database. The results show that the present application can indeed improve the expression recognition rate.
  • to address the problem that related-art facial expression recognition systems perform poorly for side faces, complex illumination, and occlusion, the present application makes improvements, and tests were carried out in real scenes.
  • the results show that the present application can accurately identify in real time for various complex scenes.
  • FIG. 3 shows the recognition results of a real-time test under flash-light interference;
  • FIG. 4 shows the recognition results of a real-time test under non-frontal conditions;
  • FIG. 5 shows the recognition results under occlusion conditions in a real-time test. It can be seen that the present application achieves accurate real-time recognition in a variety of complex scenes.
  • Embodiments of the present invention further provide a computer readable storage medium storing computer executable instructions which, when executed, implement the above method.
  • an embodiment of the present invention further provides an expression recognition apparatus, including:
  • the positioning unit 61 is configured to locate each key expression point of the face, and the coverage position of the key expression point includes an eyebrow, an eye, a nose, a mouth and a cheek;
  • the extracting unit 62 is configured to respectively extract expression features on the face subspace centered on each of the key emoticons;
  • the recognition unit 63 is configured to recognize the facial expression according to the extracted expression feature.
  • in the expression recognition device, the positioning unit 61 can locate the key expression points of the face, the extraction unit 62 can extract expression features separately from the facial subspaces centered on each key expression point, and the recognition unit 63 can recognize the facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.
  • the positioning unit 61 is configured to locate each key expression point of the face by the CLM feature point detection method.
  • the extracting unit 62 includes: an establishing module, configured to establish a facial subspace for each of the key expression points, centered on that point; and an extracting module, configured to, by dynamically capturing facial expressions, extract expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  • the extracting unit 62 may include:
  • a ratio establishing module is configured to establish a proportional face subspace for each of the key expression points under the different scaling ratios of the same expression image centering on each of the key expression points;
  • the ratio extraction module is configured to separately extract expression features in the proportional face subspace.
  • the proportional extraction module is configured to extract the facial expression features in the proportioned facial subspaces in the captured multi-frame images by dynamically capturing the facial expressions.
  • the identifying unit 63 is configured to classify the extracted facial expression features by the classifier to identify facial expressions.
  • the term computer storage medium includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information, such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and can include any information delivery media.
  • the expression recognition method and device provided by the embodiments of the present invention can locate the key expression points of a face, extract expression features separately from facial subspaces centered on each key expression point, and recognize the facial expression according to the extracted expression features.
  • in this way, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an expression recognition method and device. The method comprises: locating key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks; extracting expression features separately from facial subspaces centered on each of the key expression points; and recognizing a facial expression according to the extracted expression features.

Description

Expression recognition method and device
Technical Field
The present application relates to, but is not limited to, the field of communication technologies, and in particular to an expression recognition method and device.
Background
In human communication, facial expressions account for roughly 55% of the conveyed effect, and facial expressions can be recognized from facial images; it follows that facial images contain a considerable amount of information.
Expression recognition technology has developed vigorously in fields such as computer pattern recognition. In human-computer interaction and affective computing, a computer without an automatic expression and emotion recognition system remains cold and unable to understand the user's emotions, so automatic facial expression recognition systems have been attracting increasing attention.
Summary
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of protection of the claims.
The present application provides an expression recognition method and device for solving the problem in the related art that facial expressions cannot be monitored quickly and accurately in real time.
In one aspect, an embodiment of the present invention provides an expression recognition method, including: locating key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks; extracting expression features separately from facial subspaces centered on each of the key expression points; and recognizing a facial expression according to the extracted expression features.
Optionally, locating the key expression points of the face includes: locating the key expression points of the face by a CLM (Constrained Local Model) feature point detection method.
Optionally, extracting expression features separately from the facial subspaces centered on each of the key expression points includes: establishing a facial subspace for each key expression point, centered on that point; and, by dynamically capturing facial expressions, extracting expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
Optionally, extracting expression features separately from the facial subspaces centered on each of the key expression points includes: centering on each of the key expression points and taking a preset length as the side length, establishing a proportional facial subspace for each key expression point at different zoom scales of the same expression image; and extracting expression features separately from the proportional facial subspaces.
Optionally, extracting expression features separately from the proportional facial subspaces includes: by dynamically capturing facial expressions, extracting expression features separately from the proportional facial subspaces in the captured multi-frame images.
Optionally, recognizing the facial expression according to the extracted expression features includes: classifying the extracted expression features with a classifier so as to recognize the facial expression.
The present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the above method.
Correspondingly, an expression recognition device includes: a positioning unit configured to locate key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks; an extraction unit configured to extract expression features separately from facial subspaces centered on each of the key expression points; and a recognition unit configured to recognize a facial expression according to the extracted expression features.
Optionally, the positioning unit is configured to locate the key expression points of the face by the CLM feature point detection method.
Optionally, the extraction unit includes: an establishment module configured to establish a facial subspace for each key expression point, centered on that point; and an extraction module configured to, by dynamically capturing facial expressions, extract expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
Optionally, the extraction unit includes: a proportion establishment module configured to, centering on each of the key expression points and taking a preset length as the side length, establish a proportional facial subspace for each key expression point at different zoom scales of the same expression image; and a proportion extraction module configured to extract expression features separately from the proportional facial subspaces.
Optionally, the proportion extraction module is configured to, by dynamically capturing facial expressions, extract expression features separately from the proportional facial subspaces in the captured multi-frame images.
Optionally, the recognition unit is configured to classify the extracted expression features with a classifier so as to recognize the facial expression.
The expression recognition method and device provided by the embodiments of the present invention can locate the key expression points of a face, extract expression features separately from facial subspaces centered on each key expression point, and recognize a facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a flowchart of an expression recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the selection effect of facial subspaces of different sizes in an embodiment of the present invention;
FIG. 3 is a schematic diagram of recognition results of a real-time test under flash-light interference for the expression recognition method provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of recognition results of a real-time test under non-frontal conditions for the expression recognition method provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of recognition results of a real-time test under occlusion conditions for the expression recognition method provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an expression recognition device according to an embodiment of the present invention.
Detailed Description
The present application is described in detail below with reference to the drawings. It should be understood that the specific embodiments described here merely serve to explain the present application and do not limit it.
Most automatic facial expression recognition systems in the related art still cannot monitor facial expressions quickly and accurately in real time.
As shown in FIG. 1, an embodiment of the present invention provides an expression recognition method, including:
S11: locating key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
S12: extracting expression features separately from facial subspaces centered on each of the key expression points;
S13: recognizing a facial expression according to the extracted expression features.
The expression recognition method provided by this embodiment of the present invention can locate the key expression points of a face, extract expression features separately from facial subspaces centered on each key expression point, and recognize a facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.
Optionally, in step S11, the positions of the points in the face that are closely related to expression can be located by various algorithms, i.e., the key expression points of the face are located. The features of these key expression points should vary with a person's different expressions. For example, in one embodiment of the present invention, CLM feature point detection is used to detect these key expression points and determine their coordinates. The key expression points may cover positions such as the eyebrows, eyes, nose, mouth, and cheeks. Optionally, in this embodiment, 68 facial feature points are detected by the CLM feature point detection method; considering that the 17 points on the facial contour contribute essentially nothing to the expression, while the regions where feature points are dense are exactly where expression motion mostly occurs, the 17 points on the facial contour are ignored during feature extraction and the remaining 51 points are used.
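For illustration only, the landmark selection step can be sketched as follows. The snippet uses dlib's 68-point landmark predictor as a stand-in for the CLM detector described in this embodiment (the model file name and image path are placeholders), and it assumes the standard 68-point layout in which indices 0-16 are the facial contour, so dropping them leaves the 51 points used for feature extraction.

```python
# Minimal sketch of step S11 (assumptions: dlib's 68-point predictor stands in
# for the CLM detector; the .dat path and image path are placeholders; indices
# 0-16 are the facial contour in the standard 68-point layout).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # placeholder model path

def locate_key_points(gray):
    faces = detector(gray, 1)
    if len(faces) == 0:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()])  # 68 landmarks, shape (68, 2)
    return pts[17:]                                       # drop the 17 contour points, keep 51

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)  # "face.jpg" is a placeholder
key_pts = locate_key_points(gray)
```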
Of course, in other embodiments of the present invention, other methods may also be used to detect the key expression points, for example AAM (Active Appearance Model), ASM (Active Shape Model), or ESR (Explicit Shape Regression); the embodiments of the present invention are not limited in this respect.
After the key expression points have been detected, in step S12 expression features can be extracted separately from the facial subspaces centered on each of the key expression points.
Specifically, a facial subspace can first be established for each key expression point, centered on that point; then, by dynamically capturing facial expressions, expression features are extracted separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
Optionally, still taking the 51 key expression points detected in the above embodiment as an example, rectangular boxes can be established around these 51 points and LBP features extracted within them. The only variable factor in this feature extraction scheme is the size of the rectangular box around each feature point. To ensure that the scheme remains robust across multiple scales in real-time scenes, a relative scale is used when delimiting the region around each key point. Specifically, the vertical coordinate difference D between the 28th and 31st key expression points obtained by feature point detection (i.e., the point where the line between the two eyeballs crosses the nose bridge, and the nose tip) can be used as a normalization scale; within the rectangular region extending s*D above, below, to the left of, and to the right of each feature point (where s determines the relative size of the subspace), subspaces are set, LBP features are extracted in these subspaces, and the LBP features of the subspaces are then cascaded.
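A rough sketch of this extraction step is given below, assuming the 51 key points returned by the previous sketch. With 0-based indexing, the patent's 28th and 31st points of the 68-point set correspond to indices 10 and 13 of the 51 kept points; that mapping, and the use of scikit-image's nri_uniform LBP (which yields 59 bins for 8 neighbors), are illustrative assumptions rather than details fixed by the patent.

```python
# Sketch of the relative-scale subspace LBP extraction and cascading.
import numpy as np
from skimage.feature import local_binary_pattern

def extract_cascaded_lbp(gray, key_pts, s=0.3, P=8, R=1):
    # Normalization scale D: vertical distance between the nose-bridge point
    # and the nose-tip point (indices 10 and 13 of the 51 points, assumed).
    D = abs(int(key_pts[13][1]) - int(key_pts[10][1]))
    half = max(1, int(round(s * D)))
    feats = []
    for (x, y) in key_pts:
        patch = gray[max(0, y - half):y + half, max(0, x - half):x + half]
        lbp = local_binary_pattern(patch, P, R, method="nri_uniform")  # 59 uniform patterns for P=8
        hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
        feats.append(hist)
    return np.concatenate(feats)  # 51 subspaces * 59 bins = 3009 dimensions
```

The default s = 0.3 here follows the value reported in Table 1 below as giving the highest recognition rate.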
It should be noted that different values of s correspond to different subspace sizes. For example, as shown in Table 1, in one embodiment of the present invention, extensive database and real-time tests showed that the recognition rate and robustness are highest at s = 0.3; the effect is illustrated in FIG. 2, which shows, from left to right and top to bottom, the selection effect of facial subspaces of different sizes (s = 0.1, 0.2, 0.3, ..., 1.0). Of course, in other embodiments of the present invention, another value may give a better recognition rate and robustness; the embodiments of the present invention are not limited in this respect.
Table 1
s 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
Average recognition rate (%) 65.5 76.5 80.8 78.5 79.5 80.0 75.5 74.0 67.8 62.5
It should be noted that, in this embodiment, since the facial expressions are captured dynamically, a series of images of the user's facial expressions or movements can be obtained. Thus, after expression features are extracted separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images, the resulting expression features have temporal continuity and causality and carry more effective information, so they can be used for expression recognition more accurately.
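Continuing the previous sketch, the dynamic (multi-frame) variant can be expressed by extracting the cascaded descriptor once per captured frame and stacking the results in temporal order; the frame-handling details here are illustrative assumptions, not part of the patent text.

```python
# Sketch of the multi-frame (dynamic) feature: one cascaded LBP descriptor per
# frame, concatenated in capture order so temporal continuity is preserved.
import numpy as np

def extract_sequence_features(gray_frames, key_pts_per_frame, s=0.3):
    per_frame = [extract_cascaded_lbp(f, p, s=s)          # from the previous sketch
                 for f, p in zip(gray_frames, key_pts_per_frame)]
    return np.concatenate(per_frame)
```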
Optionally, besides extracting expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the multi-frame images, more expression feature information can also be obtained from a single frame in other ways. For example, in one embodiment of the present invention, extracting expression features separately from the facial subspaces centered on each of the key expression points may include:
centering on each of the key expression points and taking a preset length as the side length, establishing a proportional facial subspace for each key expression point at different zoom scales of the same expression image;
extracting expression features separately from the proportional facial subspaces.
That is to say, during feature extraction the same frame is rescaled at different ratios, so that the subspaces obtained around a key expression point with the same preset side length are related yet different in size; by exploiting this relatedness and difference, additional expression features of the facial subspaces can be obtained, giving the expression recognition higher accuracy and better robustness.
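A sketch of these proportional (multi-scale) subspaces is shown below. The zoom factors and the fixed patch side length are illustrative choices, not values prescribed by the patent: the same frame is resized, a square patch of the preset side length is taken around each rescaled key point, and the per-scale histograms are concatenated.

```python
# Sketch of the proportional facial subspaces: same frame, several zoom scales,
# fixed-side-length patches around each (rescaled) key point.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def extract_multiscale_lbp(gray, key_pts, scales=(0.8, 1.0, 1.2), side=24):
    half = side // 2
    feats = []
    for sc in scales:
        resized = cv2.resize(gray, None, fx=sc, fy=sc)
        for (x, y) in key_pts:
            xs, ys = int(x * sc), int(y * sc)
            patch = resized[max(0, ys - half):ys + half, max(0, xs - half):xs + half]
            lbp = local_binary_pattern(patch, 8, 1, method="nri_uniform")
            hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
            feats.append(hist)
    return np.concatenate(feats)
```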
Optionally, in this embodiment, the time factor can also be combined when extracting expression features from the proportional facial subspaces, so as to obtain more expression feature information. For example, by dynamically capturing facial expressions, expression features can be extracted separately from the proportional facial subspaces in the captured multi-frame images.
After the expression features have been extracted, in step S13 the facial expression can be recognized according to the extracted expression features.
To simplify the computation during expression recognition, optionally, when computing the LBP features, uniform patterns can be used to reduce the statistical feature vector in each subspace from 256 dimensions to 59 dimensions. Optionally, after the extracted LBP features are cascaded, the feature dimension per face is 59*51 = 3009; since this is rather large, further dimensionality reduction can be applied to reduce the amount of computation.
Optionally, the feature dimensionality reduction method in this embodiment is the PCA algorithm. During reduction, 90% to 95% of the effective information may be retained, which ensures that the reduction does not cause excessive information loss or redundancy. The dimension after reduction varies with the size of the feature vectors and the number of training samples; for example, with 500 CK+ face images as training samples, the feature dimension after PCA reduction (retaining 90% of the effective information) is about 400.
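The PCA step can be sketched with scikit-learn, which selects enough components to reach a requested fraction of explained variance; the 90% setting matches the CK+ example above, while the feature-matrix variable names are placeholders.

```python
# Sketch of PCA dimensionality reduction retaining ~90% of the variance.
from sklearn.decomposition import PCA

pca = PCA(n_components=0.90)                        # keep 90% of the explained variance
train_reduced = pca.fit_transform(train_features)   # train_features: (n_samples, 3009), placeholder
test_reduced = pca.transform(test_features)         # test_features: placeholder
```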
In this way, after the facial expression features have been obtained through dimensionality reduction, the expressions can be classified for expression recognition. Optionally, a variety of classifiers can be used to classify these facial expressions, for example a decision tree, KNN (K-Nearest Neighbour), or a support vector machine (SVM).
By way of example, in one embodiment of the present invention SVM is used for classification. SVM is based on Vapnik's structural risk minimization principle, maintains a good balance between classifier capacity and training error, and has a high capacity for learning and generalization. In other words, it not only handles small-sample problems but also works well in high-dimensional (even infinite-dimensional) spaces. At the same time, the support vector machine solves a convex optimization problem, so a local optimum is also the global optimum, which helps prevent over-fitting; this property is beyond the reach of many learning algorithms such as neural networks. In the present application a support vector machine with a radial basis function (RBF) kernel is used for classification.
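The classification step can be sketched with scikit-learn's RBF-kernel SVM; the hyperparameters and label variables below are placeholders rather than values given by the patent.

```python
# Sketch of expression classification with an RBF-kernel support vector machine.
from sklearn.svm import SVC

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(train_reduced, train_labels)    # train_labels: one expression class per sample, placeholder
predicted = clf.predict(test_reduced)
```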
To verify that the expression recognition method provided by the embodiment of the present invention achieves a higher recognition rate than the commonly used LBP feature extraction method, tests were carried out on the CK+ and PIE databases; the results are shown in Table 2 and Table 3. Table 2 compares the recognition rate of the present application with that of the commonly used LBP feature extraction scheme at different deflection angles of the Multi-PIE database, and Table 3 compares the average recognition rate of the present application with that of the commonly used LBP feature extraction scheme under the 43 illumination conditions of the PIE database. The results show that the present application can indeed improve the expression recognition rate.
Table 2
(Table 2 is provided as an image in the original publication: Figure PCTCN2017079376-appb-000001.)
Table 3
(Table 3 is provided as an image in the original publication: Figure PCTCN2017079376-appb-000002.)
To address the problem that related-art facial expression recognition systems perform poorly for side faces, complex illumination, and occlusion, the present application makes improvements, and tests were carried out in real-world scenes, as shown in FIGS. 3, 4, and 5. FIG. 3 shows the recognition results of a real-time test under flash-light interference, FIG. 4 shows the recognition results of a real-time test under non-frontal conditions, and FIG. 5 shows the recognition results under occlusion in a real-time test. It can be seen that the present application achieves accurate real-time recognition in a variety of complex scenes.
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the above method.
Correspondingly, as shown in FIG. 6, an embodiment of the present invention further provides an expression recognition device, including:
a positioning unit 61 configured to locate key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
an extraction unit 62 configured to extract expression features separately from facial subspaces centered on each of the key expression points;
a recognition unit 63 configured to recognize a facial expression according to the extracted expression features.
In the expression recognition device provided by this embodiment of the present invention, the positioning unit 61 can locate the key expression points of a face, the extraction unit 62 can extract expression features separately from facial subspaces centered on each key expression point, and the recognition unit 63 can recognize a facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.
Optionally, the positioning unit 61 is configured to locate the key expression points of the face by the CLM feature point detection method.
Optionally, the extraction unit 62 includes: an establishment module configured to establish a facial subspace for each key expression point, centered on that point; and an extraction module configured to, by dynamically capturing facial expressions, extract expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
Optionally, the extraction unit 62 may include:
a proportion establishment module configured to, centering on each of the key expression points and taking a preset length as the side length, establish a proportional facial subspace for each key expression point at different zoom scales of the same expression image;
a proportion extraction module configured to extract expression features separately from the proportional facial subspaces.
Optionally, the proportion extraction module is configured to, by dynamically capturing facial expressions, extract expression features separately from the proportional facial subspaces in the captured multi-frame images.
Optionally, the recognition unit 63 is configured to classify the extracted expression features with a classifier so as to recognize the facial expression.
Although optional embodiments of the present invention have been disclosed for purposes of illustration, those skilled in the art will recognize that various improvements, additions, and substitutions are also possible; therefore, the scope of the present application should not be limited to the above embodiments.
Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and devices, may be implemented as software, firmware, hardware, or appropriate combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical units; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Industrial Applicability
The expression recognition method and device provided by the embodiments of the present invention can locate the key expression points of a face, extract expression features separately from facial subspaces centered on each key expression point, and recognize a facial expression according to the extracted expression features. In this way, by extracting features at key expression points in multiple facial regions, the features of the expression-relevant parts of the face under different expressions can be obtained accurately, so that the corresponding expression can be accurately determined from the feature changes at these key expression points, enabling fast and accurate real-time monitoring of facial expressions.

Claims (12)

  1. An expression recognition method, comprising:
    locating key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
    extracting expression features separately from facial subspaces centered on each of the key expression points;
    recognizing a facial expression according to the extracted expression features.
  2. The method according to claim 1, wherein the step of locating the key expression points of the face comprises:
    locating the key expression points of the face by a Constrained Local Model (CLM) feature point detection method.
  3. The method according to claim 1, wherein the step of extracting expression features separately from the facial subspaces centered on each of the key expression points comprises:
    establishing a facial subspace for each of the key expression points, centered on that key expression point;
    by dynamically capturing facial expressions, extracting expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  4. The method according to claim 1, wherein the step of extracting expression features separately from the facial subspaces centered on each of the key expression points comprises:
    centering on each of the key expression points and taking a preset length as the side length, establishing a proportional facial subspace for each of the key expression points at different zoom scales of the same expression image;
    extracting expression features separately from the proportional facial subspaces.
  5. The method according to claim 4, wherein the step of extracting expression features separately from the proportional facial subspaces comprises:
    by dynamically capturing facial expressions, extracting expression features separately from the proportional facial subspaces in the captured multi-frame images.
  6. The method according to claim 1, wherein the step of recognizing the facial expression according to the extracted expression features comprises:
    classifying the extracted expression features with a classifier so as to recognize the facial expression.
  7. An expression recognition device, comprising:
    a positioning unit configured to locate key expression points of a face, the positions covered by the key expression points including the eyebrows, eyes, nose, mouth, and cheeks;
    an extraction unit configured to extract expression features separately from facial subspaces centered on each of the key expression points;
    a recognition unit configured to recognize a facial expression according to the extracted expression features.
  8. The device according to claim 7, wherein the positioning unit is configured to locate the key expression points of the face by a Constrained Local Model (CLM) feature point detection method.
  9. The device according to claim 7, wherein the extraction unit comprises:
    an establishment module configured to establish a facial subspace for each of the key expression points, centered on that key expression point;
    an extraction module configured to, by dynamically capturing facial expressions, extract expression features separately from the rectangular facial subspaces corresponding to each of the key expression points in the captured multi-frame images.
  10. The device according to claim 7, wherein the extraction unit comprises:
    a proportion establishment module configured to, centering on each of the key expression points and taking a preset length as the side length, establish a proportional facial subspace for each of the key expression points at different zoom scales of the same expression image;
    a proportion extraction module configured to extract expression features separately from the proportional facial subspaces.
  11. The device according to claim 10, wherein the proportion extraction module is configured to, by dynamically capturing facial expressions, extract expression features separately from the proportional facial subspaces in the captured multi-frame images.
  12. The device according to claim 7, wherein the recognition unit is configured to classify the extracted expression features with a classifier so as to recognize the facial expression.
PCT/CN2017/079376 2016-04-01 2017-04-01 Expression recognition method and device WO2017167313A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610201370.0 2016-04-01
CN201610201370.0A CN107292218A (zh) 2016-04-01 2016-04-01 Expression recognition method and device

Publications (1)

Publication Number Publication Date
WO2017167313A1 true WO2017167313A1 (zh) 2017-10-05

Family

ID=59963544

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/079376 WO2017167313A1 (zh) 2016-04-01 2017-04-01 一种表情识别方法及装置

Country Status (2)

Country Link
CN (1) CN107292218A (zh)
WO (1) WO2017167313A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364787A (zh) * 2020-11-13 2021-02-12 江苏汉德天坤数字技术有限公司 Facial micro-expression recognition method
CN113762136A (zh) * 2021-09-02 2021-12-07 北京格灵深瞳信息技术股份有限公司 Face image occlusion determination method and apparatus, electronic device, and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909020A (zh) * 2017-11-09 2018-04-13 东南大学 Optical flow vector micro-expression onset phase detection method based on filter design
CN108216254B (zh) * 2018-01-10 2020-03-10 山东大学 Road rage emotion recognition method based on fusion of facial images and pulse information
CN108734570A (zh) * 2018-05-22 2018-11-02 深圳壹账通智能科技有限公司 Risk prediction method, storage medium and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964064A (zh) * 2010-07-27 2011-02-02 上海摩比源软件技术有限公司 Face comparison method
US20120321140A1 (en) * 2009-12-31 2012-12-20 Nokia Corporation Method and apparatus for local binary pattern based facial feature localization
CN103310204A (zh) * 2013-06-28 2013-09-18 中国科学院自动化研究所 Face tracking method with mutual matching of features and models based on incremental principal component analysis
CN103514441A (zh) * 2013-09-21 2014-01-15 南京信息工程大学 Facial feature point localization and tracking method based on a mobile platform
CN105117703A (zh) * 2015-08-24 2015-12-02 复旦大学 Fast action unit recognition method based on matrix multiplication

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095827B (zh) * 2014-04-18 2019-05-17 汉王科技股份有限公司 Facial expression recognition apparatus and method
CN103984919A (zh) * 2014-04-24 2014-08-13 上海优思通信科技有限公司 Facial expression recognition method based on rough sets and hybrid features
CN104951743A (zh) * 2015-03-04 2015-09-30 苏州大学 Method for analyzing facial expressions based on the active shape model algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JENI, L.A. ET AL.: "Continuous AU Intensity Estimation Using Localized, Sparse Facial Feature Space", 2013 10TH IEEE INTERNATIONAL CONFERENCE AND WORKSHOPS ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG), 26 April 2013 (2013-04-26), XP055427228 *

Also Published As

Publication number Publication date
CN107292218A (zh) 2017-10-24

Similar Documents

Publication Publication Date Title
WO2017167313A1 (zh) Expression recognition method and device
JP7078803B2 (ja) 顔写真に基づくリスク認識方法、装置、コンピュータ設備および記憶媒体
US9691007B2 (en) Identification apparatus and method for controlling identification apparatus
US10242249B2 (en) Method and apparatus for extracting facial feature, and method and apparatus for facial recognition
CN108629336B (zh) 基于人脸特征点识别的颜值计算方法
US20150242678A1 (en) Method and apparatus of recognizing facial expression using adaptive decision tree based on local feature extraction
CN110069989B (zh) 人脸图像处理方法及装置、计算机可读存储介质
US11250246B2 (en) Expression recognition device
US20220237943A1 (en) Method and apparatus for adjusting cabin environment
JP6071002B2 (ja) 信頼度取得装置、信頼度取得方法および信頼度取得プログラム
JP6351243B2 (ja) 画像処理装置、画像処理方法
CN110751069A (zh) 一种人脸活体检测方法及装置
US20140056490A1 (en) Image recognition apparatus, an image recognition method, and a non-transitory computer readable medium thereof
US11875603B2 (en) Facial action unit detection
JP2010108494A (ja) 画像内の顔の特性を判断する方法及びシステム
Divya et al. Facial expression recognition by calculating euclidian distance for eigen faces using PCA
EP3869450A1 (en) Information processing device, information processing method, and program
JP5879188B2 (ja) 顔表情解析装置および顔表情解析プログラム
EP4459575A1 (en) Liveness detection method, device and apparatus, and storage medium
EP2998928B1 (en) Apparatus and method for extracting high watermark image from continuously photographed images
Bekhouche et al. Automatic age estimation and gender classification in the wild
EP3699865B1 (en) Three-dimensional face shape derivation device, three-dimensional face shape deriving method, and non-transitory computer readable medium
Oliveira et al. A comparison between end-to-end approaches and feature extraction based approaches for sign language recognition
Cirne et al. Gender recognition from face images using a geometric descriptor
JPWO2017029758A1 (ja) 学習装置および学習識別システム

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17773342

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17773342

Country of ref document: EP

Kind code of ref document: A1