CN109145754A - Emotion recognition method fusing three-dimensional features of facial expressions and body movements - Google Patents

Emotion recognition method fusing three-dimensional features of facial expressions and body movements

Info

Publication number
CN109145754A
Authority
CN
China
Prior art keywords
facial expression
limb action
feature
space
emotion identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810816740.0A
Other languages
Chinese (zh)
Inventor
邵洁
汪伟鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Electric Power
University of Shanghai for Science and Technology
Original Assignee
Shanghai University of Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Electric Power
Priority to CN201810816740.0A
Publication of CN109145754A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

An emotion recognition method fusing three-dimensional features of facial expressions and body movements, relating to the field of artificial intelligence and aimed at improving the accuracy of human emotion recognition. The method inputs feature vectors that fuse facial expression and body movement features into an SVM model for training, obtaining an SVM emotion recognition model. When recognizing emotion in a target video, a fused facial-expression and body-movement feature vector is extracted from the target video and input into the trained SVM emotion recognition model, and the SVM classifier performs emotion recognition on the extracted feature vector. Because the method fuses facial expression and body movement features, its recognition accuracy is higher.

Description

Emotion recognition method fusing three-dimensional features of facial expressions and body movements
Technical field
The present invention relates to artificial intelligence technology, and more particularly to an emotion recognition method that fuses three-dimensional features of facial expressions and body movements.
Background art
With advances in computer vision and multimedia technology, intelligent emotion recognition has become one of the most active research areas in computer vision. Its purpose is to detect, track, and identify humans in image sequences and to interpret human behavior more scientifically. Emotion recognition can be applied in many areas of life: game makers can intelligently analyze a player's mood and interact with the player according to different expressions, improving the gaming experience; camera makers can use the technology to capture human expressions, for example automatically completing the shot the moment the subject smiles or looks angry; governments or sociologists can install cameras in public places and analyze the expressions and body movements of groups to understand the pressures of people's daily lives; shopping malls can conduct market research on products from videos of customers' movements and expressions while shopping.
In practice, emotion recognition research based purely on facial expressions has hit a bottleneck. On the one hand, frontal facial expression recognition under laboratory conditions has reached high accuracy, but the same algorithms perform much worse when applied to natural, unconstrained facial expressions. On the other hand, body movements are also an important cue by which people perceive social signals and emotions, and in many applications they can effectively complement emotion recognition based on facial expressions. Research on emotion recognition that fuses facial expressions and body movements is therefore of significant value to the future development of intelligent emotion recognition applications.
Summary of the invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is to provide an emotion recognition method with high recognition accuracy that fuses three-dimensional features of facial expressions and body movements.
To solve this technical problem, the present invention provides an emotion recognition method fusing three-dimensional features of facial expressions and body movements, characterized in that:
A camera is used to capture sample videos containing human facial expressions and body movements; feature vectors fusing facial expression and body movement features are then extracted from the sample videos and input into an SVM model for training, obtaining an SVM emotion recognition model;
When recognizing emotion in a target video, a fused facial-expression and body-movement feature vector is extracted from the target video and input into the trained SVM emotion recognition model, and the SVM classifier performs emotion recognition on the extracted feature vector;
The fused facial-expression and body-movement feature vector is extracted from a video as follows:
1) using a feature detection method based on Gabor filtering, extract facial three-dimensional texture features from the video;
2) set an interest-point primary threshold condition and an interest-point lower-limit value; use a non-maximum suppression algorithm to search the facial three-dimensional texture features in the video for local maxima satisfying the primary threshold condition; from the local maxima found, select as final spatio-temporal interest points those at which all facial three-dimensional texture features exceed the lower-limit value; and construct a facial expression spatio-temporal feature matrix from the facial three-dimensional texture features of the spatio-temporal interest points;
3) extract human body-movement three-dimensional texture features from the video using the local binary patterns from three orthogonal planes (LBP-TOP) operator and construct a body-movement spatio-temporal feature matrix;
4) apply the PCA algorithm to reduce the dimensionality of the facial three-dimensional texture features in the facial expression spatio-temporal feature matrix and of the body-movement three-dimensional texture features in the body-movement spatio-temporal feature matrix, obtaining the principal-component feature matrix of each;
5) use a fusion algorithm based on canonical correlation analysis to fuse the principal-component feature matrix of the facial expression spatio-temporal feature matrix with the principal-component feature matrix of the body-movement spatio-temporal feature matrix, obtaining the fused feature vector.
The emotion recognition method fusing three-dimensional features of facial expressions and body movements provided by the present invention extracts the spatio-temporal features of human facial expressions and body movements from video, characterizes emotion with them, fuses the two kinds of features with a fusion algorithm based on canonical correlation analysis, and then performs emotion recognition with a support vector machine classifier. Because the recognition result fuses facial expressions and body movements, its accuracy is higher.
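For illustration, the overall training and recognition pipeline described above can be sketched as follows. This is a minimal sketch: the helper extract_fused_features() is a hypothetical stand-in for steps 1) to 5), and scikit-learn's SVC with an RBF kernel is an assumption, since the patent does not specify an implementation:

```python
# Minimal sketch of the train/predict pipeline (helper names are hypothetical).
import numpy as np
from sklearn.svm import SVC

def extract_fused_features(video):
    """Hypothetical stand-in for steps 1)-5): returns one fused feature vector."""
    raise NotImplementedError

def train_emotion_model(sample_videos, emotion_labels):
    # Fuse facial-expression and body-movement features, then train the SVM.
    X = np.stack([extract_fused_features(v) for v in sample_videos])
    model = SVC(kernel="rbf")
    model.fit(X, emotion_labels)
    return model

def recognize_emotion(model, target_video):
    # Extract the fused feature vector and classify it with the trained SVM.
    x = extract_fused_features(target_video)
    return model.predict(x.reshape(1, -1))[0]
```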
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to a specific embodiment, but the embodiment does not limit the present invention; all similar structures and similar variations employing the present invention fall within its protection scope. In the original Chinese text, the enumeration comma (、) denotes "and", and the English letters are case sensitive.
An emotion recognition method fusing three-dimensional features of facial expressions and body movements provided by an embodiment of the present invention is characterized in that:
A camera is used to capture sample videos containing human facial expressions and body movements; feature vectors fusing facial expression and body movement features are then extracted from the sample videos and input into an SVM model for training, obtaining an SVM emotion recognition model;
When recognizing emotion in a target video, a fused facial-expression and body-movement feature vector is extracted from the target video and input into the trained SVM emotion recognition model, and the SVM classifier performs emotion recognition on the extracted feature vector;
The fused facial-expression and body-movement feature vector is extracted from a video as follows:
1) using a feature detection method based on Gabor filtering, extract facial three-dimensional texture features from the video;
Extracting facial three-dimensional texture features from images with a Gabor-filter-based feature detection method is prior art. This method filters each frame with one-dimensional Gabor filters along the (x, t) and (y, t) time axes of the video and, as needed, smooths each frame with a Gaussian filter in the (x, y) spatial plane. The Gabor filter response function R is given by formula 1;
Formula 1: $R = (I(x, y, t) * g(x, y, \sigma) * h_{ev})^2 + (I(x, y, t) * g(x, y, \sigma) * h_{od})^2$
In formula 1, $I(x, y, t)$ is the video data, $g(x, y, \sigma)$ is a two-dimensional Gaussian smoothing kernel acting on the spatial dimensions (x, y), and $h_{ev}$ and $h_{od}$ are a pair of orthogonal one-dimensional Gabor filters acting on the time domain, defined as formulas 2 and 3;
Formula 2: $h_{ev}(t; \tau, \omega) = -\cos(2\pi t \omega)\, e^{-t^2/\tau^2}$
Formula 3: $h_{od}(t; \tau, \omega) = -\sin(2\pi t \omega)\, e^{-t^2/\tau^2}$
In formulas 2 and 3, $\omega = 4/\tau$.
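For illustration, formulas 1 to 3 can be realized as the following minimal sketch; the parameter values (sigma, tau), the filter support of 3τ, and the SciPy-based filtering are assumptions of the sketch, not the patented configuration:

```python
# Sketch of the response function R of formula 1 with the quadrature-pair
# temporal Gabor filters of formulas 2 and 3.
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter

def gabor_response(video, sigma=2.0, tau=1.5):
    """video: float array (T, H, W); returns the response volume R."""
    omega = 4.0 / tau                      # formulas 2-3: omega = 4 / tau
    half = int(np.ceil(3 * tau))
    t = np.arange(-half, half + 1, dtype=float)
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t**2 / tau**2)
    # g(x, y, sigma): Gaussian smoothing in the (x, y) plane of every frame
    smoothed = gaussian_filter(video, sigma=(0.0, sigma, sigma))
    # one-dimensional temporal Gabor filtering along the t axis
    even = convolve1d(smoothed, h_ev, axis=0, mode="nearest")
    odd = convolve1d(smoothed, h_od, axis=0, mode="nearest")
    return even**2 + odd**2                # formula 1
```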
2) A video can be regarded as the natural extension of single-frame images along the time axis. Places where a video changes greatly in both the spatial and the temporal dimension usually accompany the occurrence of spatio-temporal events; on this principle, spatio-temporal interest points representing such events can be extracted from the video and used to characterize emotion;
In general, spatio-temporal interest points are the places where the local response in the video is greatest. To extract them, first set an interest-point primary threshold condition and an interest-point lower-limit value; then use the non-maximum suppression algorithm (NMS for short; this method is prior art) to search the facial three-dimensional texture features in the video for local maxima satisfying the primary threshold condition; from the local maxima found, select as final spatio-temporal interest points those at which all facial three-dimensional texture features exceed the lower-limit value; and construct a facial expression spatio-temporal feature matrix from the facial three-dimensional texture features of the spatio-temporal interest points;
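A minimal sketch of this interest-point selection follows, assuming a cubic neighbourhood for the non-maximum suppression and applying both the primary threshold and the lower-limit value directly to the response volume R of formula 1:

```python
# Sketch of spatio-temporal interest-point selection by 3D non-maximum
# suppression; the neighbourhood size is an assumption.
import numpy as np
from scipy.ndimage import maximum_filter

def select_interest_points(R, primary_thresh, lower_limit, nbhd=5):
    """R: response volume (T, H, W); returns (t, y, x) interest-point coords."""
    # keep voxels that are the maximum of their local spatio-temporal block
    is_local_max = R == maximum_filter(R, size=nbhd)
    candidates = is_local_max & (R > primary_thresh)   # primary threshold
    final = candidates & (R > lower_limit)             # lower-limit screening
    return np.argwhere(final)
```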
3) Extract human body-movement three-dimensional texture features from the video using the local binary patterns from three orthogonal planes (LBP-TOP) operator and construct a body-movement spatio-temporal feature matrix;
Extracting human body-movement three-dimensional texture features from video with the LBP-TOP operator is prior art;
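For illustration, a simplified sketch of the LBP-TOP idea follows; it computes LBP histograms on single middle XY, XT, and YT slices for brevity, whereas full LBP-TOP accumulates codes over all slices of the volume:

```python
# Simplified sketch of LBP-TOP: LBP histograms on three orthogonal planes.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top_features(volume, n_points=8, radius=1):
    """volume: grayscale clip (T, H, W); returns concatenated plane histograms."""
    T, H, W = volume.shape
    planes = [volume[T // 2],          # XY plane (middle frame)
              volume[:, H // 2, :],    # XT plane (middle row over time)
              volume[:, :, W // 2]]    # YT plane (middle column over time)
    hists = []
    for plane in planes:
        codes = local_binary_pattern(plane, n_points, radius, method="uniform")
        h, _ = np.histogram(codes, bins=n_points + 2,
                            range=(0, n_points + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)       # one body-movement texture descriptor
```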
4) Apply the PCA algorithm to reduce the dimensionality of the facial three-dimensional texture features in the facial expression spatio-temporal feature matrix and of the body-movement three-dimensional texture features in the body-movement spatio-temporal feature matrix, obtaining the principal-component feature matrix of each;
Dimensionality reduction with the PCA algorithm is prior art;
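A minimal sketch of this step; the retained-variance ratio is an assumption, since the text does not specify the reduced dimensionality:

```python
# Sketch of the PCA dimensionality-reduction step.
from sklearn.decomposition import PCA

def principal_component_matrix(feature_matrix, variance_kept=0.95):
    """feature_matrix: (n_samples, n_features); returns the PCA projection."""
    return PCA(n_components=variance_kept).fit_transform(feature_matrix)
```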
5) Use a fusion algorithm based on canonical correlation analysis (CCA) to fuse the principal-component feature matrix of the facial expression spatio-temporal feature matrix with the principal-component feature matrix of the body-movement spatio-temporal feature matrix, obtaining the fused feature vector;
Fusing two matrices with a CCA-based fusion algorithm is prior art. The purpose of canonical correlation analysis is to identify and quantify the association between two sets of feature variables: it finds linear combinations of the two sets of variables, represents the original variables by them, and reflects the correlation of the original variables through the correlation between the combinations. The specific fusion method is as follows:
Let the principal-component feature matrix of the facial expression spatio-temporal feature matrix be X and the principal-component feature matrix of the body-movement spatio-temporal feature matrix be Y, where X and Y are p-dimensional and q-dimensional matrices respectively, expressed as formula 4;
Formula 4: $X = (x_1, x_2, \ldots, x_p)^T, \quad Y = (y_1, y_2, \ldots, y_q)^T$
To find the linear combinations of X and Y with maximum correlation, define $Z_x$ as the linear combination coefficients of X and $Z_y$ as the linear combination coefficients of Y, with correlation function $\rho(Z_x, Z_y)$ given by formula 5;
Formula 5: $\rho(Z_x, Z_y) = \dfrac{Z_x^T S_{XY} Z_y}{\sqrt{Z_x^T S_{XX} Z_x}\,\sqrt{Z_y^T S_{YY} Z_y}}$
In formula 5, $S_{XX}$ is the variance matrix of X, $S_{YY}$ is the variance matrix of Y, and $S_{XY}$ is the covariance matrix of X and Y; by the method of Lagrange multipliers, maximizing $\rho(Z_x, Z_y)$ can be reduced to formula 6;
Formula 6: $\max_{Z_x, Z_y}\; Z_x^T S_{XY} Z_y \quad \text{subject to} \quad Z_x^T S_{XX} Z_x = Z_y^T S_{YY} Z_y = 1$
Define a matrix R as in formula 7; formula 6 is then solved by applying singular value decomposition to R;
Formula 7: $R = S_{XX}^{-1/2} S_{XY} S_{YY}^{-1/2} = U D V^T = \sum_{i=1}^{r} \lambda_i u_i v_i^T$
In formula 7, r denotes the rank of R, $\lambda_i$ (i = 1, ..., r) are the singular values of R (the square roots of the eigenvalues of $R^T R$ or $R R^T$), and $D = \mathrm{diag}(\lambda_i)$ (i = 1, ..., r). The solution approximates the p × q correlation matrix by rank-1 terms, using the first d singular values to approach R, i.e. $R \approx \sum_{i=1}^{d} \lambda_i u_i v_i^T$; formula 6 can thus be converted into the form of formula 8;
Formula 8: $\max_{u, v}\; u^T R v \quad \text{subject to} \quad \|u\| = \|v\| = 1, \quad \text{where } u = S_{XX}^{1/2} Z_x, \; v = S_{YY}^{1/2} Z_y$
Therefore, the final projection vectors of the CCA-based fusion algorithm are obtained from formula 9;
Formula 9: $\alpha_i = S_{XX}^{-1/2} u_i, \quad \beta_i = S_{YY}^{-1/2} v_i, \quad i = 1, \ldots, d$
The above algorithm yields d pairs of feature projections, denoted $Z_X = (\alpha_1, \ldots, \alpha_d)$ and $Z_Y = (\beta_1, \ldots, \beta_d)$; the feature vectors of X and Y after projection are then given by formula 10;
Formula 10: $X' = Z_X^T X, \quad Y' = Z_Y^T Y$
Serially fusing X' and Y' yields the new feature vector Fusion, as shown in formula 11;
Formula 11: $\mathrm{Fusion} = \begin{pmatrix} X' \\ Y' \end{pmatrix}$
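For illustration, formulas 5 to 11 can be sketched as follows, assuming zero-mean feature matrices with samples in columns and a small regularizer added for numerical stability:

```python
# Sketch of CCA fusion following formulas 5-11.
import numpy as np
from scipy.linalg import fractional_matrix_power

def cca_fuse(X, Y, d):
    """X: (p, n), Y: (q, n); returns the serially fused (2d, n) features."""
    n = X.shape[1]
    Sxx = X @ X.T / n + 1e-6 * np.eye(X.shape[0])   # variance matrix of X
    Syy = Y @ Y.T / n + 1e-6 * np.eye(Y.shape[0])   # variance matrix of Y
    Sxy = X @ Y.T / n                               # covariance of X and Y
    Sxx_ih = fractional_matrix_power(Sxx, -0.5)
    Syy_ih = fractional_matrix_power(Syy, -0.5)
    R = Sxx_ih @ Sxy @ Syy_ih                       # formula 7
    U, s, Vt = np.linalg.svd(R)
    Zx = Sxx_ih @ U[:, :d]                          # formula 9: alpha_1..alpha_d
    Zy = Syy_ih @ Vt.T[:, :d]                       # formula 9: beta_1..beta_d
    Xp, Yp = Zx.T @ X, Zy.T @ Y                     # formula 10
    return np.vstack([Xp, Yp])                      # formula 11: serial fusion
```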
In the embodiment of the present invention, training the SVM model on feature vectors and performing emotion recognition with the SVM classifier are prior art. The SVM, which is grounded in statistical learning theory, is a highly effective recognition method: it first maps feature vectors into a high-dimensional feature space and then finds a maximum-margin linear separating hyperplane for the data in that space. Given a set of emotion-labeled training videos $\{(x_i, y_i), i = 1, \ldots, l\}$, where $x_i \in R^n$ and $y_i \in \{1, -1\}$, a test sample $x$ is classified by the function of formula 12;
Formula 12: $f(x) = \operatorname{sgn}\left(\sum_{i=1}^{l} \alpha_i y_i K(x_i, x) + b\right)$
In formula 12, $\alpha_i$ are the Lagrange multipliers of the quadratic optimization problem, which describe the separating hyperplane; $K(x_i, x_j)$ is the kernel function of the nonlinear mapping; and b is the hyperplane threshold parameter;
When $\alpha_i > 0$, the training sample $x_i$ is a support vector. The support vector machine finds the hyperplane with maximum distance to the support vectors. Given a nonlinear mapping $\Phi$, the kernel function takes the form $K(x_i, x_j) = \langle \Phi(x_i) \cdot \Phi(x_j) \rangle$, whose effect is to transform the input data into a higher-dimensional space.
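For illustration, the two-class decision function of formula 12 can be sketched as follows; the multipliers α_i, the support vectors, and the threshold b are assumed to come from solving the dual optimization problem, and the RBF kernel is one possible choice of K:

```python
# Sketch of the SVM decision function of formula 12.
import numpy as np

def rbf_kernel(xi, xj, gamma=0.5):
    # one possible kernel K(xi, xj); gamma is an assumed parameter
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

def svm_classify(x, support_vectors, alphas, labels, b, gamma=0.5):
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, a, y in zip(support_vectors, alphas, labels))
    return int(np.sign(s + b))   # formula 12: returns +1 or -1
```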

Claims (1)

1. An emotion recognition method fusing three-dimensional features of facial expressions and body movements, characterized in that:
A camera is used to capture sample videos containing human facial expressions and body movements; feature vectors fusing facial expression and body movement features are then extracted from the sample videos and input into an SVM model for training, obtaining an SVM emotion recognition model;
When recognizing emotion in a target video, a fused facial-expression and body-movement feature vector is extracted from the target video and input into the trained SVM emotion recognition model, and the SVM classifier performs emotion recognition on the extracted feature vector;
The fused facial-expression and body-movement feature vector is extracted from a video as follows:
1) using a feature detection method based on Gabor filtering, extract facial three-dimensional texture features from the video;
2) set an interest-point primary threshold condition and an interest-point lower-limit value; use a non-maximum suppression algorithm to search the facial three-dimensional texture features in the video for local maxima satisfying the primary threshold condition; from the local maxima found, select as final spatio-temporal interest points those at which all facial three-dimensional texture features exceed the lower-limit value; and construct a facial expression spatio-temporal feature matrix from the facial three-dimensional texture features of the spatio-temporal interest points;
3) extract human body-movement three-dimensional texture features from the video using the local binary patterns from three orthogonal planes (LBP-TOP) operator and construct a body-movement spatio-temporal feature matrix;
4) apply the PCA algorithm to reduce the dimensionality of the facial three-dimensional texture features in the facial expression spatio-temporal feature matrix and of the body-movement three-dimensional texture features in the body-movement spatio-temporal feature matrix, obtaining the principal-component feature matrix of each;
5) use a fusion algorithm based on canonical correlation analysis to fuse the principal-component feature matrix of the facial expression spatio-temporal feature matrix with the principal-component feature matrix of the body-movement spatio-temporal feature matrix, obtaining the fused feature vector.
CN201810816740.0A 2018-07-23 2018-07-23 Emotion recognition method fusing three-dimensional features of facial expressions and body movements Pending CN109145754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810816740.0A CN109145754A (en) 2018-07-23 2018-07-23 Emotion recognition method fusing three-dimensional features of facial expressions and body movements

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810816740.0A CN109145754A (en) 2018-07-23 2018-07-23 Emotion recognition method fusing three-dimensional features of facial expressions and body movements

Publications (1)

Publication Number Publication Date
CN109145754A true CN109145754A (en) 2019-01-04

Family

ID=64799031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810816740.0A Pending CN109145754A (en) Emotion recognition method fusing three-dimensional features of facial expressions and body movements

Country Status (1)

Country Link
CN (1) CN109145754A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091169A (en) * 2013-12-12 2014-10-08 华南理工大学 Behavior identification method based on multi feature fusion
CN103971137A (en) * 2014-05-07 2014-08-06 上海电力学院 Three-dimensional dynamic facial expression recognition method based on structural sparse feature study
CN105139004A (en) * 2015-09-23 2015-12-09 河北工业大学 Face expression identification method based on video sequences
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ARROW: "SVM学习总结 (SVM Learning Summary)", CSDN *
JYT1129: "机器学习-SVM算法原理(1) (Machine Learning: Principles of the SVM Algorithm (1))", CSDN *
周孟然 et al.: "煤矿突水水源的激光光谱检测技术研究 (Research on Laser Spectroscopy Detection of Water-Inrush Sources in Coal Mines)", 31 March 2017, Hefei University of Technology Press *
汪伟鸣 et al.: "融合面部表情和肢体动作特征的情绪识别 (Emotion Recognition Fusing Facial Expression and Body Movement Features)", Video Engineering *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934156A (en) * 2019-03-11 2019-06-25 重庆科技学院 A kind of user experience evaluation method and system based on ELMAN neural network
CN110147729A (en) * 2019-04-16 2019-08-20 深圳壹账通智能科技有限公司 User emotion recognition methods, device, computer equipment and storage medium
CN110287912A (en) * 2019-06-28 2019-09-27 广东工业大学 Method, apparatus and medium are determined based on the target object affective state of deep learning
CN110728194A (en) * 2019-09-16 2020-01-24 中国平安人寿保险股份有限公司 Intelligent training method and device based on micro-expression and action recognition and storage medium
CN111353439A (en) * 2020-03-02 2020-06-30 北京文香信息技术有限公司 Method, device, system and equipment for analyzing teaching behaviors
CN111401184A (en) * 2020-03-10 2020-07-10 珠海格力智能装备有限公司 Machine vision processing method and device, storage medium and electronic equipment
CN111680550A (en) * 2020-04-28 2020-09-18 平安科技(深圳)有限公司 Emotion information identification method and device, storage medium and computer equipment
WO2021217973A1 (en) * 2020-04-28 2021-11-04 平安科技(深圳)有限公司 Emotion information recognition method and apparatus, and storage medium and computer device
CN115857595A (en) * 2023-03-02 2023-03-28 安徽星辰智跃科技有限责任公司 Functional environment adjusting method, system and device based on user mood

Similar Documents

Publication Publication Date Title
CN109145754A (en) Emotion recognition method fusing three-dimensional features of facial expressions and body movements
Wang et al. Large-scale isolated gesture recognition using convolutional neural networks
Jiang et al. Recognizing human actions by learning and matching shape-motion prototype trees
Wong et al. Extracting spatiotemporal interest points using global information
Kim et al. Canonical correlation analysis of video volume tensors for action categorization and detection
Wu et al. A detection system for human abnormal behavior
Jones et al. Relevance feedback for real-world human action retrieval
Foggia et al. Recognizing human actions by a bag of visual words
Gosavi et al. Facial expression recognition using principal component analysis
Zhu et al. Discriminative feature adaptation for cross-domain facial expression recognition
Farajzadeh et al. Study on the performance of moments as invariant descriptors for practical face recognition systems
Liu et al. Action recognition by multiple features and hyper-sphere multi-class svm
Estrela et al. Sign language recognition using partial least squares and RGB-D information
Prabhu et al. Facial Expression Recognition Using Enhanced Convolution Neural Network with Attention Mechanism.
Engoor et al. Occlusion-aware dynamic human emotion recognition using landmark detection
Saabni Facial expression recognition using multi Radial Bases Function Networks and 2-D Gabor filters
Jayasimha et al. A facial expression recognition model using hybrid feature selection and support vector machines
Sidhu et al. Content based image retrieval a review
Tan et al. A Motion Deviation Image-based Phase Feature for Recognition of Thermal Infrared Human Activities.
Li et al. Human action recognition using spatio-temoporal descriptor
Chen et al. Human pose estimation using structural support vector machines
Vo et al. Facial expression recognition by re-ranking with global and local generic features
Henriques Circulant structures in computer vision
Benhamida et al. Human Action Recognition and Coding based on Skeleton Data for Visually Impaired and Blind People Aid System
Lassoued et al. Video action classification: A new approach combining spatio-temporal krawtchouk moments and laplacian eigenmaps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2019-01-04