TWI303792B - Identification technique of human-face countenance for countenance conversion and comparison - Google Patents

Info

Publication number
TWI303792B
TWI303792B
Authority
TW
Taiwan
Prior art keywords
expression
feature
facial
conversion
image
Prior art date
Application number
TW95101438A
Other languages
Chinese (zh)
Other versions
TW200727201A (en)
Inventor
Zhen-Chang Lian
Yang-Kai Zhang
Zhong-Ling Huang
Original Assignee
Univ Nat Chiao Tung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Chiao Tung filed Critical Univ Nat Chiao Tung
Priority to TW095101438A priority Critical patent/TW200727201A/en
Publication of TW200727201A publication Critical patent/TW200727201A/en
Application granted granted Critical
Publication of TWI303792B publication Critical patent/TWI303792B/zh

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Description

1303792 IX. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to an image recognition method, and in particular to a method for recognizing facial expressions.

[Prior Art]

Expression recognition systems are commonly used in real-time monitoring systems and human-machine interfaces, and can even be applied in hospitals to the care of critically ill or bedridden patients. Before expression recognition can start, the system must detect and track objects moving in front of the camera, automatically control the lens to obtain a high-resolution image, perform face detection on that image, and normalize the detected face region to obtain a more accurate face image; only then can facial expression recognition begin. Every one of these steps consumes a certain amount of system resources and computation time. To make the system usable in real-time detection devices, the current goal is to reduce the computational load of the expression recognition method while preserving the correctness of the recognition result.

Common expression recognition techniques include Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Although simple, they do not perform well on face images with large variation. Methods such as Neural Networks, Hidden Markov Models, or the Gabor Wavelet Transformation combined with a Support Vector Machine can reach good recognition rates, but their complex computations consume too much system performance and execution time, so real-time operation is lost.

In view of these shortcomings of the prior art, the present invention improves on the computational complexity and proposes a facial expression recognition method whose accuracy and computation load can be adjusted to meet actual needs.

[Summary of the Invention]

The main object of the present invention is to provide a real-time facial expression recognition method that uses feature transformation matrices as the conversion between expressions, so that the computational load is low and recognition is faster.

Another object of the present invention is to provide a facial expression recognition method in which neither building the system nor performing recognition requires the user to touch or wear any device, so that operation is simple.

A further object of the present invention is to provide a facial expression recognition method whose computation time can be adjusted according to actual needs: by adjusting the dimensionality of the feature transformation matrices, the accuracy and execution time of the system can be controlled.

To achieve these objects, the facial expression recognition method of the present invention first reads in training images of various facial expressions, extracts a feature value from each training image, and computes the feature transformation matrices between the expressions. Once the feature transformation matrix database has been built, expression recognition can begin: a known expression image of the user is converted through the feature transformation matrices to build the user's various expression models, the captured face image of the user is then compared against these expression models, and the recognition result is output.

The objects, technical content, features, and effects of the present invention will be more easily understood from the detailed description of specific embodiments given below together with the accompanying drawings.

[Embodiments]

The present invention is a facial expression recognition method that uses pre-computed feature transformation matrices to convert expression images directly, quickly building models of various facial expressions such as Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral, and then uses these expression models for expression comparison. The first figure shows the flow of computing the feature transformation matrices. First, as shown in step S10, a large number of training images of the angry, disgusted, afraid, happy, sad, surprised, and neutral expressions are read in. Then, in step S12 (see also the second figure), each training image is cut into a number of blocks of size p*q, and the average pixel value within each block is computed according to formula (1):

x_i = (1 / (p·q)) · Σ_{(s,t) ∈ X_i} v_{s,t},   1 ≤ i ≤ N    (1)

where v_{s,t} is the pixel value at point (s,t) of block X_i. The whole image can then be represented by a feature vector [x_1, x_2, ..., x_N] containing the value of each block, and the conversion between different expressions is carried out as a conversion between their corresponding feature vectors. The transformation matrix between two feature vectors, i.e. the feature transformation matrix, can be obtained by Direct Mapping or by Singular Value Decomposition.
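As a concrete illustration of step S12 and formula (1), the following is a minimal numpy sketch of the block-averaging feature extraction; the function name, the cropping to an even multiple of the block size, and the use of a single-channel grayscale array are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def block_features(image: np.ndarray, p: int, q: int) -> np.ndarray:
    """Cut a grayscale image into p x q blocks and return the mean pixel
    value of every block as a 1-D feature vector, as in formula (1)."""
    h, w = image.shape
    h, w = h - h % p, w - w % q          # crop so the blocks tile evenly (assumption)
    blocks = image[:h, :w].reshape(h // p, p, w // q, q)
    # x_i = (1 / pq) * sum of the pixel values v_(s,t) inside block X_i
    return blocks.mean(axis=(1, 3)).ravel()
```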

As shown in the third figure, the direct mapping of step S14 establishes a direct correspondence between any two feature values and obtains the feature transformation matrix through matrix operations. The singular value decomposition of step S16, shown in the fourth figure, places all of the training images into a single matrix and decomposes that matrix according to formula (2), yielding a set of feature attribute vectors V whose dimensionality can be chosen freely:

A = [a^(1) a^(2) ... a^(k)] = U Σ V^T    (2)

where the columns a^(1), ..., a^(k) are the feature vectors of the training images.
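The patent describes the computation of the feature transformation matrix only at this level of detail, so the following numpy sketch is one plausible reading rather than the patented procedure: it assumes paired feature vectors are available for a source and a target expression, treats direct mapping as a least-squares fit, and takes the truncated left singular vectors of the stacked training matrix as the feature attribute vectors. Every function name is an assumption made for this example.

```python
import numpy as np

def direct_mapping_matrix(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """One reading of step S14: find M such that M @ src ~= dst, where src
    and dst hold paired feature vectors (as columns) of two expressions."""
    return dst @ np.linalg.pinv(src)          # least-squares via pseudo-inverse

def feature_attribute_vectors(all_feats: np.ndarray, k: int) -> np.ndarray:
    """One reading of step S16 / formula (2): stack every training feature
    vector as a column of A, decompose A = U S V^T, and keep the first k
    left singular vectors; k is the freely chosen dimensionality."""
    u, _, _ = np.linalg.svd(all_feats, full_matrices=False)
    return u[:, :k]

def reduced_mapping_matrix(basis: np.ndarray, src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Transformation matrix computed in the k-dimensional basis; a larger k
    gives a more accurate but slower conversion, as the text notes."""
    return direct_mapping_matrix(basis.T @ src, basis.T @ dst)
```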

Then the feature transformation matrices [P_1 ... P_h] between the feature attribute vectors and each expression image are obtained. Because the dimensionality of the feature attribute vectors determines the dimensionality of the feature transformation matrix, it also determines the accuracy of the expression conversion and the computation time: the larger the dimensionality of the feature transformation matrix, the higher the accuracy of the expression conversion, but the longer the computation time. By adjusting the dimensionality of the feature attribute vectors, the accuracy and execution time of the system can therefore be controlled to match actual requirements.

After the feature transformation matrix that converts the feature value representing one facial expression into the feature value representing another facial expression has been obtained, a further step of storing it in a database may be included, completing the construction of the feature transformation matrix database; expression recognition can then begin.

The fifth figure is the flowchart of facial expression recognition according to the present invention. Step S20 captures the user's face image, and step S22 determines whether that image is a known expression image of the user. If it is, step S24 extracts the feature value of the expression image with the same method used when building the feature transformation matrices, converts it through the feature transformation matrices into the feature values of the other expressions, and maps those back onto the original image, obtaining expression models of the various expressions, which are stored in an expression model database, as shown in step S26. If the image is not a known one but an expression image to be recognized, the method proceeds to step S28 and uses the two-dimensional correlation matching of formula (3) to compare the input face image with every expression model in the expression model database, obtaining a correlation coefficient γ(x,y) for each:

γ(x,y) = Σ_s Σ_t [f(s,t) − f̄][w(x+s, y+t) − w̄] / { Σ_s Σ_t [f(s,t) − f̄]² · Σ_s Σ_t [w(x+s, y+t) − w̄]² }^(1/2)    (3)

where f(x,y) is the input face image, w(x,y) is an expression model, and f̄ and w̄ denote their respective mean pixel values.
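The comparison of step S28 can be illustrated with the following minimal numpy sketch of formula (3); for simplicity it assumes the input face image and the stored expression models are already the same size and aligned, so the shift (x, y) is taken as zero, and the function names and dictionary-of-models interface are assumptions made only for this example.

```python
import numpy as np

def correlation_coefficient(face: np.ndarray, model: np.ndarray) -> float:
    """Simplified formula (3): normalized 2-D correlation between the input
    face image f and an expression model w (zero shift, equal sizes)."""
    f = face - face.mean()
    w = model - model.mean()
    return float((f * w).sum() / np.sqrt((f ** 2).sum() * (w ** 2).sum()))

def recognize_expression(face: np.ndarray, models: dict) -> str:
    """Steps S28 and S30: compare the face against every stored expression
    model and return the label whose correlation coefficient is largest."""
    return max(models, key=lambda label: correlation_coefficient(face, models[label]))
```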

Finally, in step S30, the expression model with the largest correlation coefficient is output as the recognition result.

The present invention thus provides a real-time facial expression recognition method that uses feature transformation matrices as the conversion between expression models. By converting one facial expression into another through a feature transformation matrix, the computational load of the system is kept low and recognition becomes faster, and neither building the feature transformation matrix database nor performing expression recognition requires the user to touch or wear any device. The operation is simple and overcomes the time-consuming and labor-intensive shortcomings of the prior art.

The embodiments described above are intended only to enable those skilled in the art to understand and practice the present invention, not to limit the scope of the patent; equivalent modifications or changes made without departing from the spirit disclosed by the present invention shall still fall within the scope of the claims set out below.

[Brief Description of the Drawings]

The first figure is a flowchart of computing the feature transformation matrices according to the present invention.
The second figure is a schematic diagram of the feature value extraction step of the present invention.
The third figure is a schematic diagram of obtaining the feature transformation matrix by direct mapping according to the present invention.
The fourth figure is a schematic diagram of obtaining the feature transformation matrix by singular value decomposition according to the present invention.
The fifth figure is a flowchart of facial expression recognition according to the present invention.

[Description of Main Element Symbols]

Claims (1)

1303792 X. Claims:

1. A method for establishing facial expression feature transformation matrices, comprising the steps of: reading in training images of various facial expressions; extracting a feature value from each of the training images; and computing, from the feature values, a feature transformation matrix between the facial expressions, wherein the feature transformation matrix converts a first feature value representing one of the facial expressions into a second feature value representing another of the facial expressions, thereby converting the one facial expression into the other facial expression.

2. The method for establishing facial expression feature transformation matrices of claim 1, wherein the facial expressions comprise angry, disgust, fear, happy, sad, surprise, and neutral.

3. The method for establishing facial expression feature transformation matrices of claim 1, wherein the feature value is the pixel values of the training image.

4. The method for establishing facial expression feature transformation matrices of claim 3, wherein the feature value is extracted by cutting the training image into a plurality of blocks and computing the average of the pixel values within each block.

5. The method for establishing facial expression feature transformation matrices of claim 1, wherein the feature transformation matrix is computed by a direct mapping between any two of the feature values or by singular value decomposition.

6. The method for establishing facial expression feature transformation matrices of claim 5, wherein the singular value decomposition places all of the training images into a single matrix, decomposes that matrix into a set of feature attribute vectors whose dimensionality can be chosen freely, and then obtains the feature transformation matrix between the feature attribute vectors and each of the training images.

7. The method for establishing facial expression feature transformation matrices of claim 6, wherein the dimensionality of the feature attribute vectors determines the dimensionality of the feature transformation matrix.

8. The method for establishing facial expression feature transformation matrices of claim 1, wherein the dimensionality of the feature transformation matrix determines the accuracy of the expression conversion and the computation time required.

9. The method for establishing facial expression feature transformation matrices of claim 8, wherein the larger the dimensionality of the feature transformation matrix, the higher the accuracy of the expression conversion and the longer the computation time.

10. The method for establishing facial expression feature transformation matrices of claim 1, wherein the feature transformation matrix, once obtained, may be stored in a database.

11. A facial expression recognition method, comprising the steps of: reading in training images of various facial expressions; extracting a feature value from each of the training images; computing, from the feature values, feature transformation matrices between the facial expressions, wherein a feature transformation matrix converts a first feature value representing one of the facial expressions into a second feature value representing another of the facial expressions, thereby converting the one facial expression into the other facial expression; capturing a face image of a user; and determining whether the face image is a known expression image: if it is, converting the face image through the feature transformation matrices to establish expression models of the various expressions; and if it is not, comparing the face image with the various expression models and outputting a recognition result.

12. The facial expression recognition method of claim 11, wherein the facial expressions comprise angry, disgust, fear, happy, sad, surprise, and neutral.

13. The facial expression recognition method of claim 11, wherein the feature value is the pixel values of the training image.

14. The facial expression recognition method of claim 13, wherein the feature value is extracted by cutting the training image into a plurality of blocks and computing the average of the pixel values within each block.

15. The facial expression recognition method of claim 11, wherein the feature transformation matrix is computed by a direct mapping between any two of the feature values or by singular value decomposition.

16. The facial expression recognition method of claim 15, wherein the singular value decomposition places all of the training images into a single matrix, decomposes that matrix into a set of feature attribute vectors whose dimensionality can be chosen freely, and then obtains the feature transformation matrix between the feature attribute vectors and each of the training images.

17. The facial expression recognition method of claim 16, wherein the dimensionality of the feature attribute vectors determines the dimensionality of the feature transformation matrix.

18. The facial expression recognition method of claim 11, wherein the dimensionality of the feature transformation matrix determines the accuracy of the expression conversion and the computation time required.

19. The facial expression recognition method of claim 18, wherein the larger the dimensionality of the feature transformation matrix, the higher the accuracy of the expression conversion and the longer the computation time.

20. The facial expression recognition method of claim 11, wherein the feature transformation matrix, once obtained, may be stored in a database.

21. The facial expression recognition method of claim 11, wherein each of the expression models is established by extracting the feature value of the known expression image, converting it through the feature transformation matrices into the feature values of the other expressions, and mapping those back onto the corresponding images.

22. The facial expression recognition method of claim 11, wherein the expression models, once established, may be stored in an expression model database.

23. The facial expression recognition method of claim 11, wherein the comparing step compares the face image with all of the expression models to obtain a correlation coefficient for each, and the expression model with the largest correlation coefficient is the recognition result.

24. The facial expression recognition method of claim 23, wherein the correlation coefficient is the output of a two-dimensional correlation matching between the face image and the expression model.
TW095101438A 2006-01-13 2006-01-13 Identification technique of human-face countenance for countenance conversion and comparison TW200727201A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW095101438A TW200727201A (en) 2006-01-13 2006-01-13 Identification technique of human-face countenance for countenance conversion and comparison

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW095101438A TW200727201A (en) 2006-01-13 2006-01-13 Identification technique of human-face countenance for countenance conversion and comparison

Publications (2)

Publication Number Publication Date
TW200727201A TW200727201A (en) 2007-07-16
TWI303792B true TWI303792B (en) 2008-12-01

Family

ID=45070798

Family Applications (1)

Application Number Title Priority Date Filing Date
TW095101438A TW200727201A (en) 2006-01-13 2006-01-13 Identification technique of human-face countenance for countenance conversion and comparison

Country Status (1)

Country Link
TW (1) TW200727201A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8374422B2 (en) * 2008-04-14 2013-02-12 Xid Technologies Pte Ltd. Face expressions identification

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI470562B (en) * 2011-04-11 2015-01-21 Intel Corp Method, apparatus and computer-readable non-transitory storage medium for tracking and recognition of faces using selected region classification
US9489567B2 (en) 2011-04-11 2016-11-08 Intel Corporation Tracking and recognition of faces using selected region classification

Also Published As

Publication number Publication date
TW200727201A (en) 2007-07-16

Similar Documents

Publication Publication Date Title
US11998364B2 (en) Image-based system and method for predicting physiological parameters
WO2020038136A1 (en) Facial recognition method and apparatus, electronic device and computer-readable medium
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
US10321747B2 (en) Makeup assistance device, makeup assistance system, makeup assistance method, and makeup assistance program
CN101964064B (en) Human face comparison method
KR101901591B1 (en) Face recognition apparatus and control method for the same
Li et al. Overview of principal component analysis algorithm
Zhang et al. Computer models for facial beauty analysis
CN107358648A (en) Real-time full-automatic high quality three-dimensional facial reconstruction method based on individual facial image
JP4951498B2 (en) Face image recognition device, face image recognition method, face image recognition program, and recording medium recording the program
CN108776983A (en) Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
JP2007065766A (en) Image processor and method, and program
CN103984922B (en) Face identification method based on sparse representation and shape restriction
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
US20230230305A1 (en) Online streamer avatar generation method and apparatus
KR20060098730A (en) Apparatus and method for caricature function in mobile terminal using basis of detection feature-point
KR101558547B1 (en) Age Cognition Method that is powerful to change of Face Pose and System thereof
CN114596619A (en) Emotion analysis method, device and equipment based on video stream and storage medium
CN104091173A (en) Gender recognition method and device based on network camera
CN102495999A (en) Face recognition method
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
TWI303792B (en)
CN109087240B (en) Image processing method, image processing apparatus, and storage medium
TWI357022B (en) Recognizing apparatus and method for facial expres