TW201028934A - Facial expression recognition method and system thereof - Google Patents


Info

Publication number
TW201028934A
TW201028934A (application TW98102249A)
Authority
TW
Taiwan
Prior art keywords
expression
classifier
optical flow
flow vector
facial
Prior art date
Application number
TW98102249A
Other languages
Chinese (zh)
Inventor
Wen-Neng Lai
Jian-Wei Yang
Original Assignee
Univ Nat Cheng Kung
Priority date
Filing date
Publication date
Application filed by Univ Nat Cheng Kung filed Critical Univ Nat Cheng Kung
Priority to TW98102249A priority Critical patent/TW201028934A/en
Publication of TW201028934A publication Critical patent/TW201028934A/en


Abstract

A facial expression recognition method comprises an expression classifier training process and an expression determining process. The expression classifier training process takes a plurality of optical flow chain codes of each image sequence as features and adopts the AdaBoost algorithm to train a plurality of strong classifiers, each strong classifier corresponding to one of a plurality of specific expressions. The expression determining process uses the strong classifiers to perform classifier operations, and performs expression recognition according to the results of the classifier operations.

Description

VI. Description of the Invention:

[Technical Field]

The present invention relates to a facial expression recognition technology, and more particularly to a facial expression recognition method and system that uses the optical flow vectors of an image sequence as features and trains expression classifiers with the Adaptive Boosting (hereinafter AdaBoost) algorithm.

[Prior Art]

Automated analysis of human emotion can bring a new milestone to human-machine interaction. Since a considerable body of research by social psychologists shows that facial expressions are one of the most important forms of human communication, more and more researchers have in recent years devoted significant attention to this field, asking how machines can be made to understand non-verbal human communication; the development of facial expression recognition, for example, is being driven by enormous research momentum.

A large number of studies on expression recognition have accumulated over the years. Existing expression recognition methods can be classified into three main types: "feature based", "model based", and "sample based", briefly described as follows. "Feature-based" expression recognition methods rely on feature information of the face, for example, the global facial texture features or geometric features of a single image. "Sample-based" expression recognition methods, unlike "model-based" methods, do not require an expression model to be built first; instead, they adopt a learning approach, extracting useful features from a series of training samples to design a discriminative classifier, and then performing expression recognition with this classifier. The present invention is researched and developed on the basis of the "sample-based" approach.

Among studies related to "sample-based" expression recognition, the reference P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," Proc. of IEEE Int'l Conf. on Computer Vision and Pattern Recognition, Vol. 1, pp. 511-518, 2001 (hereinafter the Viola reference) uses integral images to compute comparison features quickly, selects the most suitable of the many rectangular features with the AdaBoost algorithm to form a classifier with high discriminative ability, and finally cascades the classifiers to save training and detection time. The reference Sung Uk Jung, Do Hyoung Kim, Kwang Ho An, and Myung Jin Chung, "Efficient rectangle feature extraction for real-time facial expression recognition based on Adaboost," Proc. of IEEE/RSJ Int'l Conf. on Intelligent Robots and Systems (IROS 2005), pp. 1941-1946, 2-6 Aug. 2005 (hereinafter the Chung reference) modifies the rectangular features proposed by the Viola reference, providing a wider variety of feature types for the system to choose from; the AdaBoost algorithm selects the five most suitable feature types for each expression, and AdaBoost is then applied again to find the image positions best suited to these features, which are combined into a highly discriminative classifier for expression recognition.

The reference Hongbo Deng, Jianke Zhu, M. R. Lyu, and I. King, "Two-stage Multi-class AdaBoost for Facial Expression Recognition," Proc. of IEEE Int'l Conf. on Neural Networks, pp. 3005-3010, Orlando, Florida, USA, August 12-17, 2007 (hereinafter the Deng reference) uses Gabor filters to extract wavelet coefficient features, selects the most suitable features with the AdaBoost algorithm, and likewise combines these features into a multi-class recognition system for expression recognition.

Summarizing the above references, the Viola reference has been the model for applying the AdaBoost algorithm to expression recognition systems over the past decade, and its method is often cited and improved upon; for instance, the Chung reference increases the variety of rectangular features, thereby enlarging the dimensionality of the feature space and gaining a considerable improvement in recognition performance. The Deng reference, by using wavelet coefficients as features, can analyze the content of expression samples in greater detail and also provides a basis for the high discriminative ability of its classifiers. In view of the fact that an expression is usually composed of a sequence of facial motion changes, and a single static image often cannot fully convey the meaning it carries, the present invention uses the optical flow vectors of an image sequence as features and designs discriminative classifiers with the AdaBoost algorithm for facial expression recognition.

SUMMARY OF THE INVENTION

Therefore, an object of the present invention is to provide a facial expression recognition method.

Accordingly, the facial expression recognition method of the present invention comprises an expression classifier training program and an expression judgment program.

The expression classifier training program trains a plurality of strong classifiers according to a plurality of training image sequence samples, each strong classifier corresponding to one of a plurality of specific expressions, and each training image sequence sample having been marked with a specific expression label. The expression classifier training program includes the following steps: (a) obtaining a plurality of optical flow vector sequences from each training image sequence sample, wherein each optical flow vector sequence corresponds to a facial pixel position; (b) obtaining, for each optical flow vector sequence, a corresponding optical flow vector chain code; (c) obtaining, from the optical flow vector chain codes belonging to the same specific expression and the same facial pixel position, an optical flow vector core corresponding to that specific expression and that facial pixel position; (d) training a weak classifier for each optical flow vector core; (e) performing iterative operations with the Adaptive Boosting algorithm to select, from the weak classifiers, a plurality of best weak classifiers for each specific expression; and (f) linearly combining the best weak classifiers corresponding to the same specific expression to form a strong classifier for that expression.

The expression judgment program uses the strong classifiers to perform expression recognition, and includes the following steps: (g) performing classifier operations on an image sequence under test with the strong classifiers to obtain a combined confidence value corresponding to each strong classifier; and (h) obtaining a plurality of corresponding confidence evaluation values from the combined confidence values, so as to judge which specific expression the image sequence under test belongs to.
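Steps (d) to (f) follow the usual AdaBoost recipe: in each round, pick the weak classifier with the lowest weighted error, give it the weight α = ½ ln((1−ε)/ε), and re-weight the training samples. The patent text gives no pseudo-code, so the following is only a schematic sketch under that standard recipe; the function names, data layout, and the fact that a weak classifier may be re-selected are all assumptions.

```python
import math

def adaboost_select(weak_outputs, labels, rounds):
    """Pick `rounds` weak classifiers for one expression, AdaBoost-style.

    weak_outputs: {position: [h(x) in {0, 1} for each training sample]},
                  the precomputed decisions of every candidate weak classifier.
    labels:       [1 if the sample shows this expression, else 0].
    Returns [(position, alpha), ...], the weighted selection that a
    strong classifier linearly combines.
    """
    n = len(labels)
    weights = [1.0 / n] * n          # uniform sample weights to start
    selected = []
    for _ in range(rounds):
        # pick the weak classifier with the lowest weighted error
        best_pos, best_err = None, None
        for pos, outs in weak_outputs.items():
            err = sum(w for w, h, y in zip(weights, outs, labels) if h != y)
            if best_err is None or err < best_err:
                best_pos, best_err = pos, err
        eps = min(max(best_err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - eps) / eps)
        selected.append((best_pos, alpha))
        # increase the weight of samples this weak classifier got wrong
        outs = weak_outputs[best_pos]
        weights = [w * math.exp(alpha if h != y else -alpha)
                   for w, h, y in zip(weights, outs, labels)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return selected
```

This sketch allows the same position to be re-selected; the experiments reported later compare exactly this "repeating" variant against a "not repeating" one.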

Another object of the present invention is to provide a facial expression recognition system.

Accordingly, the facial expression recognition system of the present invention comprises a feature calculation unit, a classifier establishing unit, and an expression recognition unit.

The feature calculation unit obtains a plurality of optical flow vector sequences from an image sequence sample, and then obtains, for each optical flow vector sequence, a corresponding optical flow vector chain code. The classifier establishing unit trains a plurality of strong classifiers, each corresponding to one of a plurality of specific expressions, and includes a weak classifier training module, a best weak classifier selection module, and a strong classifier combination module. The weak classifier training module obtains, from the optical flow vector chain codes belonging to the same specific expression and the same facial pixel position, an optical flow vector core corresponding to that expression and pixel position, and then trains a weak classifier for each optical flow vector core. The best weak classifier selection module performs iterative operations with the Adaptive Boosting algorithm to select, from the weak classifiers, a plurality of best weak classifiers for each specific expression. The strong classifier combination module linearly combines the best weak classifiers corresponding to the same specific expression into a strong classifier for that expression. The expression recognition unit includes the strong classifiers trained by the classifier establishing unit, a confidence evaluation module, and an expression judgment module. The strong classifiers perform classifier operations on an image sequence under test to obtain a combined confidence value corresponding to each strong classifier; the confidence evaluation module obtains a plurality of corresponding confidence evaluation values from the combined confidence values; and the expression judgment module judges, from the confidence evaluation values, which specific expression the image sequence under test belongs to.

[Embodiments]

The above and other technical contents, features, and effects of the present invention will be clearly presented in the following detailed description of a preferred embodiment with reference to the drawings.

Referring to Fig. 1, the preferred embodiment of the facial expression recognition system 1 of the present invention comprises an image pre-processing unit 11, a feature calculation unit 12, a classifier establishing unit 13, and an expression recognition unit 14. The classifier establishing unit 13 includes a weak classifier training module 131, a best weak classifier selection module 132, and a strong classifier combination module 133. The expression recognition unit 14 includes a classifier operation module 141, a confidence evaluation module 142, and an expression judgment module 143.

The image pre-processing unit 11 pre-processes the individual images in an image sequence (a training image sequence sample or an image sequence under test). The feature calculation unit 12 obtains a plurality of optical flow vector sequences from the image sequence, and then obtains, for each optical flow vector sequence, a corresponding optical flow vector chain code. The classifier establishing unit 13 trains a plurality of strong classifiers H_j (j = 1, 2, ..., 7), each strong classifier H_j corresponding to one of a plurality of specific expressions. The expression recognition unit 14 judges the specific expression to which an image sequence belongs, and its classifier operation module 141 holds the strong classifiers trained by the classifier establishing unit 13. There are seven specific expressions in total: a happy expression, an angry expression, a sad expression, a disgusted expression, a fearful expression, a surprised expression, and a neutral expression. Those of ordinary skill in the art will appreciate that expression recognition is not limited to the listed implementation. In the preferred embodiment, the facial expression recognition system 1 of the present invention is implemented in software.

Referring to Figs. 1 and 2, the preferred embodiment of the facial expression recognition method of the present invention, carried out with the above facial expression recognition system, comprises an expression classifier training program and an expression judgment program. The dashed arrows in Fig. 1 correspond to the data flow of the expression classifier training program, and the solid arrows in Fig. 1 correspond to the data flow of the expression judgment program. The expression classifier training program includes the following steps.

As shown in step 21, the image pre-processing step is performed because the face size and head orientation (for example, the direction of the facial line of symmetry) are not necessarily the same in every image of a training image sequence sample. Therefore, to improve the accuracy of subsequent processing, the image pre-processing unit 11 crops the face portion of each image in the training image sequence sample, performs angular rotation correction on the cropped face portion (i.e., so that the facial line of symmetry is vertical) to align the facial features, and then resamples the rotation-corrected images to the same size (for example, 256 x 256 pixels).

As shown in the feature calculation step 22, the feature calculation unit 12 first obtains the optical flow vector sequences from the training image sequence sample, and then obtains, for each optical flow vector sequence, the corresponding optical flow vector chain code. Each image sequence sample includes a plurality of facial pixel positions, and each optical flow vector sequence (and its corresponding optical flow vector chain code) corresponds to a facial pixel position.

In the preferred embodiment, the feature calculation unit 12 adopts the pyramidal structure described in the reference J.-Y. Bouguet, "Pyramidal Implementation of the Lucas Kanade Feature Tracker: Description of the Algorithm," Intel Corporation, Microprocessor Research Labs, to obtain the optical flow vector sequence of each facial pixel position in each image of a training image sequence sample, wherein each optical flow vector sequence has a plurality of pieces of optical flow vector information (v_x, v_y), and each piece of optical flow vector information (v_x, v_y) is represented by a tangent angle θ = arctan(v_y / v_x). In the training image sequence samples, the transition from a neutral expression to a specific expression with significant change usually requires several images, and different people's expressions change at different speeds. In order to normalize the optical flow vector chain codes for subsequent processing to the same length for ease of recognition, the feature calculation unit 12 uses a quadratic curve method to resample the tangent angles of each optical flow vector sequence into a specific number m of resampled tangent angles (θ_resample_1, θ_resample_2, ..., θ_resample_m); each of these resampled tangent angles is then quantized (in the preferred embodiment, into one of 8 tangent angle quantization values, labeled 1, 2, 3, 4, 5, 6, 7, or 8) to form the optical flow vector chain code.

As shown in step 23, in the weak classifier training step, the weak classifier training module 131 of the classifier establishing unit 13 obtains, from the optical flow vector chain codes belonging to the same specific expression (each image sequence sample has been marked with a specific expression label) and the same facial pixel position, an optical flow vector core corresponding to the specific expression at each facial pixel position, and then trains, with these optical flow vector cores, a plurality of weak classifiers each corresponding to a facial pixel position.

In the preferred embodiment, the weak classifier training module 131 averages the optical flow vector chain codes of the same facial pixel position belonging to the same specific expression to obtain the optical flow vector core corresponding to that specific expression at each facial pixel position.
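The chain-code construction of step 22 can be sketched as follows. This is only an illustration, not the patent's implementation: plain linear interpolation stands in for the quadratic curve method, and the chain-code length m and all function names are assumptions; only the 8 direction codes labeled 1-8 follow the text.

```python
import math

def resample(angles, m):
    """Resample an angle sequence to a fixed length m.

    The patent resamples with a quadratic-curve method; plain linear
    interpolation is used here as a stand-in (an assumption).  Note that
    interpolating raw angles ignores wrap-around at ±pi; this sketch
    keeps it simple.
    """
    n = len(angles)
    if m == 1 or n == 1:
        return [angles[0]] * m
    out = []
    for i in range(m):
        t = i * (n - 1) / (m - 1)        # position in the original sequence
        lo = min(int(t), n - 2)
        frac = t - lo
        out.append(angles[lo] * (1 - frac) + angles[lo + 1] * frac)
    return out

def quantize(angle):
    """Map an angle in radians to one of 8 direction codes, labeled 1..8."""
    two_pi = 2 * math.pi
    a = angle % two_pi
    # each code covers a 45-degree sector centred on a principal direction
    return int(((a + math.pi / 8) % two_pi) // (math.pi / 4)) + 1

def chain_code(flow_vectors, m=10):
    """Turn one pixel position's optical-flow sequence of (vx, vy) pairs
    into a length-m chain code of quantized flow directions."""
    angles = [math.atan2(vy, vx) for vx, vy in flow_vectors]
    return [quantize(a) for a in resample(angles, m)]
```

For instance, a pixel whose flow points steadily to the right yields a chain code of all 1s, while upward flow yields code 3 under this labeling.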

Around each optical flow vector core, classification by clustering is then judged with a corresponding threshold value; that is, any optical flow vector chain code whose Euclidean distance to one of the optical flow vector cores is within that core's threshold value is judged to be a positive sample of the corresponding specific expression. The threshold value is obtained by sorting, in ascending order, the distances from each optical flow vector chain code corresponding to the same facial pixel position in each specific expression to its corresponding optical flow vector core, and taking the distance that covers a specific proportion (for example, 80%) of the chain codes as the threshold; the threshold value therefore depends on the facial pixel position and the specific expression.

As shown in steps 24-25, in the best weak classifier selection step, for each specific expression, the best weak classifier selection module 132 performs T iterations of the AdaBoost algorithm to select, from a weak classifier pool formed by the weak classifiers (the pool includes the weak classifiers corresponding to all possible facial pixel positions), a plurality of (i.e., T) best weak classifiers h_{j,t} and a corresponding plurality of weight values α_{j,t}, where t = 1, 2, ..., T, and each best weak classifier in fact corresponds to a particular facial pixel position of the image. Since there are seven expressions in total, j = 1, 2, ..., 7.

The strong classifier combination step uses the strong classifier combination module 133 to linearly combine the best weak classifiers into a strong classifier for each specific expression, which can be expressed by the following formula (1):

H_j(x) = Σ_{t=1}^{T} α_{j,t} · h_{j,t}(x) ............ (1)

As expressed in formula (1), the recognition of a specific expression j relies on a strong classifier H_j, composed of the weak classifiers h_{j,t} and the weights α_{j,t}, to judge a sample x (the set of optical flow vector chain codes at the relevant facial pixel positions). H_j(x) may also be called the combined confidence value of the sample for the specific expression j; the larger the combined confidence value for a specific expression, the closer the association between the sample and that specific expression.
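A rough sketch of the weak classifiers and of the linear combination in formula (1). The chain-code averaging, the distance-coverage threshold, and the weighted sum follow the description above; the function names and toy data are assumptions.

```python
import math

def euclid(a, b):
    """Euclidean distance between two equal-length chain codes."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def make_core(chain_codes):
    """Core = element-wise average of the chain codes that share one
    expression and one facial pixel position."""
    n = len(chain_codes)
    return [sum(c[i] for c in chain_codes) / n
            for i in range(len(chain_codes[0]))]

def make_threshold(chain_codes, core, coverage=0.8):
    """Threshold chosen so that a given fraction (e.g. 80%) of the
    training chain codes fall within it, per the ascending-sort rule."""
    dists = sorted(euclid(c, core) for c in chain_codes)
    return dists[max(0, math.ceil(coverage * len(dists)) - 1)]

def weak_classify(code, core, threshold):
    """1 if the chain code lies within the core's threshold, else 0."""
    return 1 if euclid(code, core) <= threshold else 0

def strong_confidence(sample_codes, selected):
    """Formula (1): H(x) = sum_t alpha_t * h_t(x).  `selected` holds
    (pixel_position, core, threshold, alpha) tuples for the T best
    weak classifiers of one expression; `sample_codes` maps pixel
    positions to the test sample's chain codes."""
    return sum(alpha * weak_classify(sample_codes[pos], core, thr)
               for pos, core, thr, alpha in selected)
```
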
It is worth noting that the AdaBoost algorithm used in steps 24-25 belongs to the prior art, so its further details are not elaborated here.

Referring to Figs. 1 and 3, the expression judgment program includes the following steps.

As shown in step 31, similarly to step 21, the image pre-processing unit 11 performs the cropping, angular rotation correction, and resampling pre-processing on each image in the image sequence under test.

As shown in step 32, the feature calculation unit 12 obtains, from the image sequence under test, the optical flow vector chain codes of the facial pixel positions corresponding to each best weak classifier.

As shown in step 33, the classifier operation module 141 performs classifier operations with the optical flow vector chain codes obtained in step 32 and the trained strong classifiers, obtaining a combined confidence value corresponding to each strong classifier.

As shown in step 34, the confidence evaluation module 142 obtains a plurality of corresponding confidence evaluation values from the combined confidence values. In the preferred embodiment, the combined confidence values are normalized by the following formula (2) to obtain the confidence evaluation values:

Confidence evaluation value C_j = H_j(x) / Σ_{t=1}^{T} α_{j,t} ............ (2)

As shown in step 35, the expression judgment module 143 judges, from the confidence evaluation values, which specific expression the image sequence under test belongs to; that is, the largest confidence evaluation value C_{j*} is selected, and the specific expression of its corresponding strong classifier H_{j*} is judged to be the specific expression to which the image sequence under test belongs, as expressed by the following formula (3):

j* = arg max_j (confidence evaluation value C_j) ............ (3)

The experiments of the present invention used the Cohn-Kanade AU-Coded facial expression image sequence database for expression recognition, from which 317 image sequences were selected as the training image sequence sample set and 158 as the test sample set. Cross-validation was adopted in the experiments to make better use of the number of database samples.

In the experiments of the present invention, the number T of weak classifiers included in each strong classifier was first tentatively set to 20, and 20 best weak classifiers were individually selected at facial pixel positions for each of the seven specific expressions to be recognized. Therefore, on each test sample, the optical flow vector chain codes of a total of 140 facial pixel positions (possibly with repetitions) are used as features to judge the expression attribute of the test sample.

By varying the value of T in the experiments, the various average recognition rates shown in Fig. 4 can be obtained, where the horizontal axis represents T and the vertical axis represents the average recognition rate obtained through cross-validation. Fig. 4 also shows the two cases of "repeating (dashed line)" and "not repeating (solid line)" already-selected weak classifiers during the weak classifier selection process. It can be observed that, in the "not repeating" case, the average recognition rate can reach more than 72% when T rises to about 80 or above.

In summary, the present invention uses the optical flow vector chain codes of an image sequence as features and, with the AdaBoost algorithm, designs discriminative strong classifiers for the expression judgment and recognition of image sequences, so the objects of the present invention can indeed be achieved.

The foregoing is merely a description of the preferred embodiment of the present invention and should not be used to limit the scope of implementation of the present invention; that is, all simple equivalent changes and modifications made in accordance with the claims and the description of the present invention remain within the scope covered by the patent of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram illustrating the preferred embodiment of the facial expression recognition system of the present invention;
Fig. 2 is a flow chart illustrating the expression classifier training program of the preferred embodiment of the facial expression recognition method of the present invention;
Fig. 3 is a flow chart illustrating the expression judgment program of the preferred embodiment of the facial expression recognition method of the present invention; and
Fig. 4 is a graph illustrating the various average recognition rates corresponding to different numbers of weak classifiers and selection modes.
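Formulas (2) and (3) amount to normalizing each strong classifier's output by the sum of its weights and then taking the arg max over the seven expressions. A small sketch under that reading; the dictionary layout and names are assumptions.

```python
def confidence_evaluations(combined, alphas):
    """Formula (2): normalize each strong classifier's combined
    confidence H_j(x) by the sum of its weights alpha_{j,t}, so the
    expressions can be compared on a common scale.

    combined: {expression: H_j(x)}
    alphas:   {expression: [alpha_{j,1}, ..., alpha_{j,T}]}
    """
    return {j: combined[j] / sum(alphas[j]) for j in combined}

def judge_expression(evaluations):
    """Formula (3): the expression whose confidence evaluation value is
    largest is taken as the expression of the test sequence."""
    return max(evaluations, key=evaluations.get)
```
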

[Description of Reference Numerals]

1 ......... facial expression recognition system
11 ........ image pre-processing unit
12 ........ feature calculation unit
13 ........ classifier establishing unit
131 ....... weak classifier training module
132 ....... best weak classifier selection module
133 ....... strong classifier combination module
14 ........ expression recognition unit
141 ....... classifier operation module
142 ....... confidence evaluation module
143 ....... expression judgment module
21-25 ..... steps
31-35 ..... steps

Claims (1)

VII. Patent Claims:

1. A facial expression recognition method, comprising an expression classifier training program and an expression judgment program;
wherein the expression classifier training program trains a plurality of strong classifiers according to a plurality of training image sequence samples, each strong classifier corresponding to one of a plurality of specific expressions, each training image sequence sample having been marked with a specific expression label, the expression classifier training program including the following steps:
(a) obtaining a plurality of optical flow vector sequences from each training image sequence sample, wherein each optical flow vector sequence corresponds to a facial pixel position;
(b) obtaining, for each optical flow vector sequence, a corresponding optical flow vector chain code;
(c) obtaining, from the optical flow vector chain codes belonging to the same specific expression and the same facial pixel position, an optical flow vector core corresponding to said specific expression and said facial pixel position;
(d) training a weak classifier for each optical flow vector core;
(e) performing iterative operations with an Adaptive Boosting algorithm to select, from the weak classifiers, a plurality of best weak classifiers corresponding to each specific expression; and
(f) forming the best weak classifiers, in a linear combination, into a strong classifier corresponding to each specific expression;
and wherein the expression judgment program uses the strong classifiers to perform expression recognition, the expression judgment program including the following steps:
(g) performing classifier operations on an image sequence under test with the strong classifiers to obtain a combined confidence value corresponding to each strong classifier; and
(h) obtaining a plurality of corresponding confidence evaluation values from the combined confidence values, so as to judge which specific expression the image sequence under test belongs to.

2. The facial expression recognition method according to claim 1, wherein in step (a), each optical flow vector sequence has a plurality of pieces of optical flow vector information, each piece of optical flow vector information being represented by a tangent angle.

3. The facial expression recognition method according to claim 2, wherein in step (b), the tangent angles of each optical flow vector sequence are resampled into a specific number of resampled tangent angles by a quadratic curve method, and each of the resampled tangent angles is then quantized into one of a plurality of tangent angle quantization values to form the optical flow vector chain code.

4. The facial expression recognition method according to claim 1, wherein step (h) has the following sub-steps:
(h-1) normalizing the combined confidence values to obtain the confidence evaluation values; and
(h-2) judging the specific expression of the strong classifier whose confidence evaluation value is the largest to be the specific expression to which the image sequence under test belongs.

5. The facial expression recognition method according to claim 1, wherein the specific expressions are a happy expression, an angry expression, a sad expression, a disgusted expression, a fearful expression, a surprised expression, and a neutral expression.

6. A facial expression recognition system, comprising:
a feature calculation unit for obtaining a plurality of optical flow vector sequences from a training image sequence sample, and then obtaining, for each optical flow vector sequence, a corresponding optical flow vector chain code;
a classifier establishing unit for training a plurality of strong classifiers, each strong classifier corresponding to one of a plurality of specific expressions, wherein the classifier establishing unit includes a weak classifier training module, a best weak classifier selection module, and a strong classifier combination module; the weak classifier training module obtains, from the optical flow vector chain codes belonging to the same specific expression and the same facial pixel position, an optical flow vector core corresponding to said specific expression and said facial pixel position, and then trains a weak classifier for each optical flow vector core; the best weak classifier selection module performs iterative operations with an Adaptive Boosting algorithm to select, from the weak classifiers, a plurality of best weak classifiers for each specific expression; the strong classifier combination module forms the best weak classifiers corresponding to the same specific expression, in a linear combination, into a strong classifier corresponding to each specific expression; and
an expression recognition unit including the strong classifiers trained by the classifier establishing unit, a confidence evaluation module, and an expression judgment module; the strong classifiers perform classifier operations on an image sequence under test to obtain a combined confidence value corresponding to each strong classifier; the confidence evaluation module obtains a plurality of corresponding confidence evaluation values from the combined confidence values; and the expression judgment module judges, from the confidence evaluation values, which specific expression the image sequence under test belongs to.

7. The facial expression recognition system according to claim 6, wherein each optical flow vector sequence has a plurality of pieces of optical flow vector information, each piece of optical flow vector information being represented by a tangent angle.

8. The facial expression recognition system according to claim 7, wherein the feature calculation unit resamples the tangent angles of each optical flow vector sequence into a specific number of resampled tangent angles by a quadratic curve method, and each of the resampled tangent angles is then quantized into one of a plurality of tangent angle quantization values to form the optical flow vector chain code.

9. The facial expression recognition system according to claim 6, wherein the confidence evaluation module normalizes the combined confidence values to obtain the confidence evaluation values.

10. The facial expression recognition system according to claim 6, wherein the expression judgment module judges the specific expression of the strong classifier whose confidence evaluation value is the largest to be the specific expression to which the image sequence under test belongs.

11. The facial expression recognition system according to claim 6, wherein the specific expressions are a happy expression, an angry expression, a sad expression, a disgusted expression, a fearful expression, a surprised expression, and a neutral expression.
The facial expression recognition method according to the scope of the patent application, wherein the specific expressions are a happy expression, an angry expression, a sad expression, a disgusting expression, a fear expression, a surprise expression, and a Expressionless. 17 201028934 6. A facial expression recognition system, comprising: a feature-ten calculation unit for obtaining a plurality of optical flow vector sequences according to a training image sequence sample, and corresponding to each optical flow vector sequence One optical flow vector chain code; a classifier building unit, using w, station. 丨 early to train a plurality of strong classifiers, each strong classifier corresponding to a plurality of specific expressions - wherein the classifier The establishment unit includes a weak classifier training module, an optimal weak classifier selection module, and a strong classifier combination module, and the weak component _ training module is used according to the same specific expression and the same facial pixel & The optical flow vector key code of the device is used to obtain an optical flow vector core corresponding to the specific expression and the position of the aforementioned facial element, and then a weak classifier is trained for each optical flow vector core, the best The weak classifier selection module is configured to perform a loop operation by using an adaptability improvement algorithm to select a plurality of best weak classifiers for each specific expression from the weak classifiers. 
The module is configured to form the strong classifiers corresponding to each specific expression in a linear combination manner with the best weak classifiers corresponding to the same specific expression; and an expression recognition unit, including training by the classifier building unit The strong classifier, a confidence evaluation module, and an expression judgment module are used to perform a classifier operation on a sample sequence to be tested, and obtain a corresponding classifier for each strong classifier. a combination confidence value, the heart. The "evaluation module" is used to obtain a corresponding plurality of evaluation values based on the combined confidence values, and the expression judgment module is configured to perform evaluation according to the mentality The value is used to determine which specific expression the image sequence to be tested belongs to. 18 201028934 7. According to the facial expression recognition described in item 6 of the patent application scope, each optical flow vector sequence has a plurality of optical flow directions, unified information, and each optical flow vector information is represented by a tangent angle. . 8. The facial expression recognition system according to claim 7 of the claim, wherein the feature calculation unit uses the quadratic curve method to calculate the tangent angle of the ^^^ mother-optical flow vector sequence Resampling into a number of resampled = 1; each of the resampled tangent angles is then quantized to a complex (four) tangent angle quantized value where - to form the optical flow vector chain code. 9. The facial expression recognition system according to claim 6, wherein the confidence evaluation module normalizes the combined confidence values to obtain the confidence evaluation values. Η). 
The facial expression recognition system according to claim 6 of the patent application scope, wherein the expression determination module determines that the median value of the confidence evaluation value is the largest one of the corresponding strong classifiers It is the specific expression to which the image sequence to be tested belongs. '❿η.: According to the facial expression recognition system described in claim 6, the specific expressions are - happy expression, angry expression, sadness, a disgusted expression, a fear expression, a surprised expression , expressions. ..., 19
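Claims 3 and 8 describe turning each per-pixel optical flow vector sequence into a chain code: each vector is represented by its tangent angle, the angle sequence is resampled to a fixed length, and each resampled angle is quantized into one of a fixed set of direction values. The sketch below illustrates that feature step only in outline; it uses plain linear interpolation as a stand-in for the patent's quadratic curve resampling, and the function name, sample count, and bin count are chosen here for illustration, not taken from the patent.

```python
import math

def optical_flow_chain_code(flow_sequence, n_samples=10, n_bins=8):
    """Convert one facial pixel position's optical flow vector sequence
    into a chain code of quantized direction symbols.

    flow_sequence: list of (dx, dy) optical flow vectors for that pixel
    across consecutive frame pairs.  Linear interpolation stands in for
    the quadratic-curve resampling named in the patent.
    """
    # Step 1: represent each flow vector by its tangent angle in [0, 2*pi).
    angles = [math.atan2(dy, dx) % (2 * math.pi) for dx, dy in flow_sequence]

    # Step 2: resample the angle sequence to a fixed length n_samples.
    resampled = []
    for i in range(n_samples):
        t = i * (len(angles) - 1) / (n_samples - 1)
        lo = int(math.floor(t))
        hi = min(lo + 1, len(angles) - 1)
        frac = t - lo
        resampled.append((1 - frac) * angles[lo] + frac * angles[hi])

    # Step 3: quantize each resampled angle into one of n_bins direction
    # values; the resulting symbol list is the chain code.
    bin_width = 2 * math.pi / n_bins
    return [int(a // bin_width) % n_bins for a in resampled]
```

With four flow vectors sweeping counter-clockwise from 0° to 135° and four output samples, the code yields one symbol per 45° bin, e.g. `optical_flow_chain_code([(1, 0), (1, 1), (0, 1), (-1, 1)], n_samples=4, n_bins=8)` gives `[0, 1, 2, 3]`.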
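Claim 1, steps (d)-(h), outline the classifier pipeline: AdaBoost iteratively selects the best weak classifiers for each specific expression, they are linearly combined into a per-expression strong classifier, and the determination step normalizes the combined confidence values and picks the expression with the largest evaluation value. The following is a minimal sketch of that pipeline using a generic discrete AdaBoost with toy weak classifiers; all names and the normalization formula are illustrative assumptions, not the patent's actual implementation.

```python
import math

def adaboost_select(weak_classifiers, samples, labels, n_rounds=3):
    """Steps (e)-(f): select the best weak classifiers for one target
    expression and linearly combine them into a strong classifier.

    weak_classifiers: callables sample -> +1/-1
    labels: +1 if the sample shows the target expression, else -1
    Returns a callable sample -> combined confidence value (its sign is
    the class, its magnitude the confidence).
    """
    n = len(samples)
    w = [1.0 / n] * n                    # uniform sample weights
    chosen = []                          # (alpha, weak classifier) pairs
    for _ in range(n_rounds):
        # pick the weak classifier with the lowest weighted error
        best, best_err = None, float("inf")
        for h in weak_classifiers:
            err = sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        best_err = min(max(best_err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        chosen.append((alpha, best))
        # re-weight: emphasize the samples the chosen classifier got wrong
        w = [wi * math.exp(-alpha * y * best(x))
             for wi, x, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: sum(a * h(x) for a, h in chosen)

def determine_expression(strong_classifiers, sequence):
    """Steps (g)-(h): run every per-expression strong classifier on the
    sequence under test, normalize the combined confidence values into
    evaluation values, and return the expression with the largest one."""
    scores = {expr: f(sequence) for expr, f in strong_classifiers.items()}
    total = sum(abs(v) for v in scores.values()) or 1.0
    evaluations = {expr: v / total for expr, v in scores.items()}
    return max(evaluations, key=evaluations.get)
```

As a toy usage, training on samples `[0, 1, 2, 3]` labeled `[+1, +1, -1, -1]` with a threshold weak classifier `x < 2` produces a strong classifier whose confidence is positive on the first two samples and negative on the last two; `determine_expression` then returns the expression whose strong classifier scores highest.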
TW98102249A 2009-01-21 2009-01-21 Facial expression recognition method and system thereof TW201028934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98102249A TW201028934A (en) 2009-01-21 2009-01-21 Facial expression recognition method and system thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98102249A TW201028934A (en) 2009-01-21 2009-01-21 Facial expression recognition method and system thereof

Publications (1)

Publication Number Publication Date
TW201028934A true TW201028934A (en) 2010-08-01

Family

ID=44853859

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98102249A TW201028934A (en) 2009-01-21 2009-01-21 Facial expression recognition method and system thereof

Country Status (1)

Country Link
TW (1) TW201028934A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102385691A (en) * 2010-08-31 2012-03-21 财团法人资讯工业策进会 Facial expression identification system, identification device and identification method
US9036018B2 (en) 2010-06-17 2015-05-19 Institute For Information Industry Facial expression recognition systems and methods and computer program products thereof
CN105404878A (en) * 2015-12-11 2016-03-16 广东欧珀移动通信有限公司 Photo classification method and apparatus
US9330483B2 (en) 2011-04-11 2016-05-03 Intel Corporation Avatar facial expression techniques
CN105847734A (en) * 2016-03-30 2016-08-10 宁波三博电子科技有限公司 Face recognition-based video communication method and system


Similar Documents

Publication Publication Date Title
Agrawal et al. Facial expression detection techniques: based on Viola and Jones algorithm and principal component analysis
Munder et al. An experimental study on pedestrian classification
Jiang Asymmetric principal component and discriminant analyses for pattern classification
Ma et al. Robust head pose estimation using LGBP
Islam et al. Performance of SVM, CNN, and ANN with BoW, HOG, and image pixels in face recognition
Chen et al. Facial expression recognition using geometric and appearance features
JP2014232533A (en) System and method for ocr output verification
Li et al. Automatic 4D facial expression recognition using dynamic geometrical image network
CN106250811B (en) Unconstrained face identification method based on HOG feature rarefaction representation
CN109815920A (en) Gesture identification method based on convolutional neural networks and confrontation convolutional neural networks
Wang et al. A new facial expression recognition method based on geometric alignment and lbp features
TW201028934A (en) Facial expression recognition method and system thereof
Sisodia et al. ISVM for face recognition
Kumar et al. Artificial Emotional Intelligence: Conventional and deep learning approach
Bilinski et al. Can a smile reveal your gender?
Chen et al. A multi-scale fusion convolutional neural network for face detection
Xu et al. An ordered-patch-based image classification approach on the image grassmannian manifold
Lin et al. Integrating a mixed-feature model and multiclass support vector machine for facial expression recognition
Lu et al. Automatic gender recognition based on pixel-pattern-based texture feature
Ma et al. A smile detection method based on improved LeNet-5 and support vector machine
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
Jasim et al. A real-time computer vision-based static and dynamic hand gesture recognition system
Yuan et al. Holistic learning-based high-order feature descriptor for smoke recognition
Wang et al. Facial image-based gender classification using local circular patterns
Sadeghzadeh et al. Triplet loss-based convolutional neural network for static sign language recognition