JPH04342078A - Device for recognizing facial expression - Google Patents

Device for recognizing facial expression

Info

Publication number
JPH04342078A
JPH04342078A
Authority
JP
Japan
Prior art keywords
facial expression
expression
feature
facial
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP3114600A
Other languages
Japanese (ja)
Other versions
JP3098276B2 (en)
Inventor
Kenji Mase
健二 間瀬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP03114600A priority Critical patent/JP3098276B2/en
Publication of JPH04342078A publication Critical patent/JPH04342078A/en
Application granted granted Critical
Publication of JP3098276B2 publication Critical patent/JP3098276B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 — Facial expression recognition
    • G06V40/176 — Dynamic expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To provide an accurate facial expression recognition device that measures the minute movements of muscles over the entire face, converts their temporal changes into patterns, and recognizes emotion-expressing facial expressions on the basis of those patterns. CONSTITUTION: The device uses an optical flow calculator as the module 1 that measures skin movement, converts the temporal change pattern of the skin movement into first and second moments with a converter 2, and determines a feature vector from standard expression images for use in recognition mode.

Description

[Detailed Description of the Invention]

[0001]

[Field of Industrial Application] The present invention relates to a facial expression recognition device that measures facial expressions by computer, on the basis of time-series images of the expressions, and performs machine recognition of those expressions.

[0002]

[Description of the Prior Art] In the field of psychology, several methods have been proposed for analyzing human facial expressions by human observation, as described, for example, in "Introduction to Facial Expression Analysis" (translated and edited by Riki Kudo, Seishin Shobo). Recently, for transmitting human facial expressions and movements in systems such as videophones and videoconferencing, the trend is to communicate not image signals but symbolic data such as "smiling". In "Facial expression analysis of face images in analysis-synthesis image coding" (Uchiyama et al., Proceedings of the 1988 IEICE Spring National Conference, D-97), a method of expression analysis is proposed that specifies facial feature points in advance and recognizes expressions from their changes. However, accurate automatic detection of these feature points is difficult, and the locations that can serve as feature points are so limited that subtle muscle movements cannot always be captured. A further problem is that such data correlate poorly with the movements of the muscles that produce expressions, so the analysis is not physiological.

[0003] The inventor's earlier patent application, Japanese Patent Application No. 2-50246, "Facial Expression Recognition Device" (filed March 1, 1990), also discloses a device that measures minute muscle movements and recognizes expressions from patterns of their temporal change. That invention used the output of the "Facial Expression Measurement Device" (Japanese Patent Application No. 2-50247), which detects the movements of the expression muscles at locations assumed to correspond to the positions of those muscles. As a result, when the measured muscle locations do not provide the necessary and sufficient information, expressions are recognized using superfluous information, and the recognition rate can be insufficient.

[0004]

[Problems to be Solved by the Invention] Furthermore, these expression recognition methods and devices aim at describing an expression as a combination of symbols; they stop short of expressing and recognizing the expression in emotional terms such as "laughter", "anger", or "surprise".

[0005] Meanwhile, many methods for recognition from image and audio patterns have been reported, but how well the object-dependent features are fed into the recognition method is crucial, and it strongly affects the recognition rate. Reports on devices that recognize emotional expressions, however, are very few, and it was entirely unclear which specific features should be used when applying previously proposed pattern recognition methods.

[0006] The object of the present invention is to provide an accurate facial expression recognition device that measures the minute movements of muscles over the entire face, converts their temporal changes into patterns, and recognizes emotion-expressing facial expressions on the basis of those patterns.

[0007]

[Means for Solving the Problems] The present invention provides a system that recognizes efficiently by learning from skin-deformation data of various expressions and performing recognition using the locations carrying the most information. FIG. 1 shows the basic configuration of the present invention.

[0008] In the present invention, (1) an optical flow calculator is used as the module 1 that measures skin movement; (2) a converter 2 transforms the temporal change pattern of the skin movement into first and second moments, which serve as the recognition pattern; (3) in the processing unit 3, a discrimination-effectiveness criterion for the feature quantities of standard expression images is determined by statistical analysis, thereby fixing a low-dimensional feature vector in advance; and (4) this feature vector is used in recognition mode.

[0009]

[Operation] This makes it possible to select the feature vector efficiently from limited standard expression data; moreover, by increasing the number of standard expression images, the learning effect on expressions can be raised, so a facial expression recognition device with a high recognition rate can be built. The invention thus provides a means of conveying facial expressions in words within an image communication system, and, at the interface with a computer, a means of conveying human intentions to the machine.

[0010]

[Embodiments] The operation of the device of the present invention is explained below with reference to the drawings. FIG. 2 illustrates the overall configuration of the facial expression recognition device of the present invention; 101 is a video input device consisting of an imaging element such as a television camera, an A/D conversion circuit, and the like. The facial expression recognition device 102 of the present invention consists of a skin deformation measurement unit 103, a facial expression feature extraction unit 105, a facial expression feature calculation unit 106, a facial expression feature comparison unit 108, a standard facial expression feature storage unit 109, and interlocked switches A 104 and B 107 for switching between recognition and learning.

[0011] The operation begins in the standard expression learning mode: the interlocked switch A 104 and switch B 107 in the figure are thrown to the lower terminals, and learning is performed. First, standard expression motion images for learning are captured, and the time-series images are input to the skin deformation measurement unit 103, which measures the skin movements associated with expression generation over the entire face; the facial expression feature extraction unit 105 then learns the patterns and determines the space of expression feature vectors. Furthermore, the value of the expression feature vector is computed for each standard expression motion image and stored in the standard facial expression feature storage unit 109.

[0012] Next, the operation in the expression recognition mode is explained. For an expression image to be recognized, the interlocked switches 104 and 107 are thrown to the upper terminals and recognition is performed. In recognition mode, expression motion images are first captured, and the time-series images are input to the skin deformation measurement unit 103 to measure the skin movements associated with expression generation. Since the positions on the face whose skin is used to build the feature vector are known from learning time, as described later, either the movement of the entire face may be measured and sent to the facial expression feature calculation unit 106, or the position information may be obtained from the facial expression feature calculation unit 106 and the skin deformation measured only at those locations.

[0013] The following describes the case in which the movement of the entire face is measured and the data transferred. The facial expression feature calculation unit 106, on receiving the skin deformation data, computes the value of the expression feature vector for the input image on the basis of the expression feature vector space determined earlier in the learning mode, and sends it to the facial expression feature comparison unit 108. The facial expression feature comparison unit 108 compares the expression feature vector of the image to be recognized with the standard expression feature vectors stored in the standard facial expression feature storage unit 109 during the learning mode, searches for similar standard expression feature vectors, and outputs a recognition result stating what expression the image shows.

[0014] The operation of each unit is explained below. The skin deformation measurement unit 103 is a circuit that measures the skin movements involved in facial expressions; it implements the function of computing, for each point in the image, a motion vector known as optical flow. Various methods for computing optical flow have already been proposed; they are explained in detail, for example, in Chapter 12 of "Robot Vision" by Horn (MIT Press, 1985). Alternatively, a motion vector detection mechanism such as those used in image coding devices can be implemented and used.
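The paragraph above leaves the choice of flow algorithm open. As a minimal modern sketch of such a measurement module, dense flow between two consecutive frames can be computed with OpenCV's Farnebäck method; this is a stand-in illustration, not the circuit the patent describes, and the parameter values are illustrative:

```python
import cv2
import numpy as np

def skin_motion_field(frame_t: np.ndarray, frame_t1: np.ndarray) -> np.ndarray:
    """Approximate the role of the skin deformation measurement unit 103:
    a dense motion vector (u, v) for every pixel between two frames."""
    g0 = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow; returns an H x W x 2 array where
    # flow[y, x] = (u_t(x, y), v_t(x, y)).
    flow = cv2.calcOpticalFlowFarneback(
        g0, g1, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return flow
```

As the paragraph notes, any dense flow method, or a block-matching motion estimator of the kind used in image coding, could be substituted.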

[0015] Here, for explanation, the motion vector at each position (x, y), obtained from two consecutive images at times t and t+1, is output as the skin deformation amount, as the pair of vertical and horizontal components $(u_t(x, y), v_t(x, y))$. In learning mode, the facial expression feature extraction unit 105 receives, for each standard expression image, the data describing how this skin deformation amount changes over time, and operates as follows to determine the feature vector space and the standard expression feature vectors.

[0016] First, when the image measures N x M pixels, it is divided into n x m square regions of r x r pixels each. Then, for each region R(i, j) in row i (0 < i <= n) and column j (0 < j <= m), the following first- and second-moment statistics, which express the characteristics of the skin deformation amounts most accurately and simply, are computed: the means $\mu_{u,i,j}$ and $\mu_{v,i,j}$ and the variances $\sigma_{uu,i,j}$, $\sigma_{uv,i,j}$, and $\sigma_{vv,i,j}$.

[0017]

$$\mu_{u,i,j} = \frac{1}{r^2}\sum_{(x,y)\in R(i,j)} u_t(x,y) \qquad (1)$$

[0018]

$$\mu_{v,i,j} = \frac{1}{r^2}\sum_{(x,y)\in R(i,j)} v_t(x,y) \qquad (2)$$

[0019]

$$\sigma_{uu,i,j} = \frac{1}{r^2}\sum_{(x,y)\in R(i,j)} \bigl(u_t(x,y)-\mu_{u,i,j}\bigr)^2 \qquad (3)$$

[0020]

$$\sigma_{uv,i,j} = \frac{1}{r^2}\sum_{(x,y)\in R(i,j)} \bigl(u_t(x,y)-\mu_{u,i,j}\bigr)\bigl(v_t(x,y)-\mu_{v,i,j}\bigr) \qquad (4)$$

[0021]

$$\sigma_{vv,i,j} = \frac{1}{r^2}\sum_{(x,y)\in R(i,j)} \bigl(v_t(x,y)-\mu_{v,i,j}\bigr)^2 \qquad (5)$$
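A minimal sketch of the per-region moment computation of equations (1) to (5), assuming, as in the reconstruction above, that the sums run over the r x r pixels of each region; the function name is illustrative:

```python
import numpy as np

def region_moments(flow: np.ndarray, r: int) -> np.ndarray:
    """For each r x r region R(i, j) of an N x M x 2 flow field, compute
    the five statistics of equations (1)-(5): mu_u, mu_v, sigma_uu,
    sigma_uv, sigma_vv. N and M are assumed to be multiples of r.
    Returns an n x m x 5 array with n = N // r, m = M // r."""
    N, M, _ = flow.shape
    n, m = N // r, M // r
    # Regroup the field into an (n, m, r*r, 2) stack of region samples.
    blocks = (flow.reshape(n, r, m, r, 2)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(n, m, r * r, 2))
    u, v = blocks[..., 0], blocks[..., 1]
    mu_u, mu_v = u.mean(axis=2), v.mean(axis=2)   # equations (1), (2)
    du = u - mu_u[..., None]
    dv = v - mu_v[..., None]
    s_uu = (du * du).mean(axis=2)                 # equation (3)
    s_uv = (du * dv).mean(axis=2)                 # equation (4)
    s_vv = (dv * dv).mean(axis=2)                 # equation (5)
    return np.stack([mu_u, mu_v, s_uu, s_uv, s_vv], axis=-1)
```

Flattening the result row by row (`region_moments(flow, r).reshape(-1)`) yields the K = 5mn-dimensional vector F of equation (6) below; with illustrative numbers, a 256 x 256 flow field and r = 16 give n = m = 16 and K = 5 * 16 * 16 = 1280.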

[0022] Arranging these results gives the K = 5mn-dimensional vector

$$F = \{\mu_{u,1,1},\ \mu_{v,1,1},\ \sigma_{uu,1,1},\ \sigma_{uv,1,1},\ \sigma_{vv,1,1},\ \ldots,\ \mu_{u,i,j},\ \mu_{v,i,j},\ \sigma_{uu,i,j},\ \sigma_{uv,i,j},\ \sigma_{vv,i,j},\ \ldots,\ \mu_{u,n,m},\ \mu_{v,n,m},\ \sigma_{uu,n,m},\ \sigma_{uv,n,m},\ \sigma_{vv,n,m}\} \qquad (6)$$

but for recognition the number of dimensions must be reduced. Therefore, using the following discrimination-effectiveness criterion (evaluation function), the vector elements with large evaluation values are selected as recognition feature parameters to form the desired feature vector.

[0023]

$$J(k) = \frac{\mathrm{var}_B(k)}{\mathrm{var}_W(k)} \qquad (7)$$

[0024] where $\mathrm{var}_W(k)$ and $\mathrm{var}_B(k)$ are the within-class and between-class variances, respectively, of the k-th vector element $f_k$, computed by the following equations.

[0025]

$$\mathrm{var}_W(k) = \frac{1}{c}\sum_{i=1}^{c}\frac{1}{|\theta_i|}\sum_{f_k\in\theta_i}\bigl(f_k-\bar{f}_k^{(i)}\bigr)^2 \qquad (8)$$

[0026]

$$\mathrm{var}_B(k) = \frac{1}{c}\sum_{i=1}^{c}\bigl(\bar{f}_k^{(i)}-\bar{f}_k\bigr)^2 \qquad (9)$$

[0027] Here c is the number of classes to be recognized (for example, c = 4 when recognizing "laughter, sadness, anger, surprise"), and $\theta_i$ denotes the set of vector-element values computed from the training samples of the i-th class. Also,

[0028]

$$\bar{f}_k^{(i)} = \frac{1}{|\theta_i|}\sum_{f_k\in\theta_i} f_k \qquad (10)$$

[0029] is the mean value of the vector element $f_k$ over the training samples of class i, and

[0030]

$$\bar{f}_k = \frac{1}{L}\sum_{\text{all } L \text{ training samples}} f_k \qquad (11)$$

[0031] is the mean over all training samples. Conventionally this formula was used to derive pattern discrimination functions; here it is used to reduce the dimensionality of the feature vector from a small number of training samples. To this end, the values J(k) computed above are sorted in descending order, and the top

[0032]

$$K' \quad (K' \le K) \qquad (12)$$

[0033] elements are selected, and the feature vector

[0034]

$$F' = \{f_{k_1}, f_{k_2}, \ldots, f_{k_{K'}}\} \qquad (13)$$

[0035] is determined. That is, a list of which statistic in which region each selected element is, is compiled and sent to the facial expression feature calculation unit 106. Furthermore, for each training sample image, the selected feature parameters are arranged in order to form a feature vector, which is transferred to the standard facial expression feature storage unit 109. FIG. 3 shows an example configuration of the feature parameter position/type list 201 sent to the facial expression feature calculation unit 106. Which of equations (1) to (5) above each selected feature vector element corresponds to is indicated by pairing a type (designating one of equations (1) to (5)) with a position (the i and j in equations (1) to (5)).
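A compact sketch of this selection step, under the reconstruction above (within-class variance var_W, between-class variance var_B, J = var_B / var_W); the function and variable names are illustrative:

```python
import numpy as np

def select_features(F: np.ndarray, labels: np.ndarray, k_prime: int) -> np.ndarray:
    """Rank the K vector elements by the discrimination-effectiveness
    criterion J(k) = var_B(k) / var_W(k) and keep the top k_prime.
    F is (L, K): one K-dimensional moment vector per training image;
    labels is (L,): the class index of each image."""
    classes = np.unique(labels)
    grand_mean = F.mean(axis=0)                   # mean over all training samples
    var_w = np.zeros(F.shape[1])
    var_b = np.zeros(F.shape[1])
    for c in classes:
        Fc = F[labels == c]
        class_mean = Fc.mean(axis=0)              # per-class mean of each element
        var_w += Fc.var(axis=0)                   # within-class spread
        var_b += (class_mean - grand_mean) ** 2   # between-class spread
    var_w /= len(classes)
    var_b /= len(classes)
    J = var_b / (var_w + 1e-12)                   # guard against division by zero
    return np.argsort(J)[::-1][:k_prime]          # indices of the top-J elements
```

The returned indices play the role of the position/type list 201: each index maps back to one region R(i, j) and one of the five statistics of equations (1) to (5).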

[0036] For example, the feature vector element at the head of the figure is the one given by equation (1) with i = 4 and j = 5. The contents of the list are only an example and will in practice vary with the training data. FIG. 4 shows an example of the feature vector data 301 sent to and stored in the standard facial expression feature storage unit 109: for each training sample image, the expression type (designated by a person) is associated with its feature vector.
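One plausible in-memory form of the list 201 and the table 301 is sketched below; the field names, and every entry except the equation-(1), i = 4, j = 5 element mentioned above, are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class FeatureParam:
    """One entry of the position/type list 201: which statistic
    (one of equations (1)-(5)) taken at which region R(i, j)."""
    kind: int  # 1..5, designating one of equations (1)-(5)
    i: int     # region row
    j: int     # region column

# Example list 201; the first entry is the element of equation (1)
# with i = 4, j = 5 described in the text.
param_list = [FeatureParam(kind=1, i=4, j=5),
              FeatureParam(kind=3, i=2, j=7)]  # second entry illustrative

# Table 301: one (expression label, feature vector) pair per training image.
feature_table = [
    ("laughter", [0.81, 0.10, 0.32]),  # values illustrative
    ("anger",    [0.12, 0.88, 0.40]),
]
```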

[0037] Next, in the recognition-mode stage for a target expression image, the device operates as follows. As in the learning mode, the skin movements computed by the skin deformation measurement unit 103 are input to the facial expression feature calculation unit 106. From the input data, this unit computes the statistics needed to build the feature vector according to the feature parameter position/type list 201. That is, following the list 201, for example

[0038]

$$\mu_{u,5,6} = \frac{1}{r^2}\sum_{(x,y)\in R(5,6)} u_t(x,y) \qquad (14)$$

[0039] is computed. After the facial expression feature calculation unit 106 has computed all

[0040]

$$K'$$

[0041] such elements (the number selected in (12)), it assembles the feature vector and outputs it. The output of the facial expression feature calculation unit 106 is input to the facial expression feature comparison unit 108. When the feature vector of the recognition data arrives, the facial expression feature comparison unit 108 reads the feature vectors of the training data one by one from the standard facial expression feature storage unit 109 and computes the distance between the two feature vectors. Once the distances to all the training data have been computed, the expressions corresponding to the k training samples at minimum distance are looked up in the table of feature vector data 301, and the expression label held by the majority is output as the result. The table of feature vector data 301 is an example for the case of L training samples. For example, if computing the distances between the feature vector of the unknown recognition data and the feature vector data 301 of the training data yields data number 1 (laughter), number 3 (anger), and number 4 (laughter) as the three at shortest distance, the majority vote makes "laughter" the recognition result. The facial expression feature comparison unit 108 thus implements what is called the k-nearest-neighbor method; values of k from about 1 to 5 are normally used.
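A sketch of the comparison unit's k-nearest-neighbor vote, assuming Euclidean distance between feature vectors (the text says only "distance"); names are illustrative:

```python
import numpy as np
from collections import Counter

def recognize(feature: np.ndarray,
              stored: list[tuple[str, np.ndarray]],
              k: int = 3) -> str:
    """k-nearest-neighbor vote of the comparison unit 108: compare the
    feature vector of the unknown image against every stored
    (label, vector) pair and return the majority label among the k
    closest; k is typically 1 to 5."""
    dists = [(np.linalg.norm(feature - vec), label) for label, vec in stored]
    nearest = sorted(dists, key=lambda d: d[0])[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

With the stored table of the worked example above, a query whose three nearest samples are number 1 (laughter), number 3 (anger), and number 4 (laughter) returns "laughter".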

[0042]

[Effects of the Invention] As explained above, according to the present invention, facial expressions can be recognized without contact, based on the deformation of the facial skin. The device can therefore serve as a facial expression analysis module in image communication, and in the fields of psychology and medicine it offers the advantage of automatic, non-contact measurement of subtle muscle movements. It can also provide, as a computer interface, a function for conveying human emotions to machines.

[Brief Description of the Drawings]

[Fig. 1] A basic configuration diagram of the present invention.

[Fig. 2] A diagram illustrating the overall configuration of the facial expression recognition device of the present invention.

[Fig. 3] An example of the feature parameter position/type list.

[Fig. 4] An example of the feature vector data stored in the standard facial expression feature storage unit.

[Explanation of Reference Numerals]

1  Module for measuring skin movement
2  Converter
3  Processing unit
101  Video input device
102  Facial expression recognition device
103  Skin deformation measurement unit
104  Switch A
105  Facial expression feature extraction unit
106  Facial expression feature calculation unit
107  Switch B
108  Facial expression feature comparison unit
109  Standard facial expression feature storage unit

Claims (1)

[Claims]

[Claim 1] A facial expression recognition device that recognizes a facial expression on the basis of time-series images obtained by photographing, with a video input device, a face presenting the expression, the device comprising, for said time-series images, a skin deformation measurement unit, a facial expression feature extraction unit, a facial expression feature calculation unit, a facial expression feature comparison unit, and a standard facial expression feature storage unit, wherein said time-series images are input to the skin deformation measurement unit to extract, as optical flow data, the skin movements associated with expression generation; at learning time, a discrimination-effectiveness criterion is computed from the first- and second-order statistical patterns of the optical flow data of prepared standard expression time-series images to determine a feature vector that discriminates expressions, and the feature vector of each standard expression is computed and stored; and at recognition time, an equivalent feature vector is computed from the given time-series images of the expression to be recognized, a standard expression close to the recognition target is selected on the basis of the inter-vector distances between said stored standard expression feature vectors and the feature vector of the recognition target, and from that result a recognition result indicating what expression the time-series images of the recognition target show is output.
JP03114600A 1991-05-20 1991-05-20 Facial expression recognition device Expired - Fee Related JP3098276B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP03114600A JP3098276B2 (en) 1991-05-20 1991-05-20 Facial expression recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP03114600A JP3098276B2 (en) 1991-05-20 1991-05-20 Facial expression recognition device

Publications (2)

Publication Number Publication Date
JPH04342078A (en) 1992-11-27
JP3098276B2 JP3098276B2 (en) 2000-10-16

Family

ID=14641918

Family Applications (1)

Application Number Title Priority Date Filing Date
JP03114600A Expired - Fee Related JP3098276B2 (en) 1991-05-20 1991-05-20 Facial expression recognition device

Country Status (1)

Country Link
JP (1) JP3098276B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004178593A (en) * 2002-11-25 2004-06-24 Eastman Kodak Co Imaging method and system
JP2007200126A (en) * 2006-01-27 2007-08-09 Advanced Telecommunication Research Institute International Feeling informing reporting device
JP2009014415A (en) * 2007-07-02 2009-01-22 National Institute Of Advanced Industrial & Technology Object recognition device and method
JP2010075571A (en) * 2008-09-28 2010-04-08 Waseda Univ Automatic body part discrimination system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4891802B2 (en) * 2007-02-20 2012-03-07 日本電信電話株式会社 Content search / recommendation method, content search / recommendation device, and content search / recommendation program

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004178593A (en) * 2002-11-25 2004-06-24 Eastman Kodak Co Imaging method and system
JP2007200126A (en) * 2006-01-27 2007-08-09 Advanced Telecommunication Research Institute International Feeling informing reporting device
JP4701365B2 (en) * 2006-01-27 2011-06-15 株式会社国際電気通信基礎技術研究所 Emotion information notification device
JP2009014415A (en) * 2007-07-02 2009-01-22 National Institute Of Advanced Industrial & Technology Object recognition device and method
JP2010075571A (en) * 2008-09-28 2010-04-08 Waseda Univ Automatic body part discrimination system

Also Published As

Publication number Publication date
JP3098276B2 (en) 2000-10-16

Similar Documents

Publication Publication Date Title
Song et al. Recognizing spontaneous micro-expression using a three-stream convolutional neural network
US4975960A (en) Electronic facial tracking and detection system and method and apparatus for automated speech recognition
CN113408508B (en) Transformer-based non-contact heart rate measurement method
US20090097711A1 (en) Detecting apparatus of human component and method thereof
KR101912569B1 (en) The object tracking system of video images
US20220218218A1 (en) Video-based method and system for accurately estimating human body heart rate and facial blood volume distribution
Yue et al. Deep super-resolution network for rPPG information recovery and noncontact heart rate estimation
Liu et al. Gaze-assisted multi-stream deep neural network for action recognition
CN107411700A (en) A kind of hand-held vision inspection system and method
CN113642526A (en) Picture processing system and method based on computer control
CN113688741A (en) Motion training evaluation system and method based on cooperation of event camera and visual camera
CN115546899A (en) Examination room abnormal behavior analysis method, system and terminal based on deep learning
JPH04342078A (en) Device for recognizing facial expression
CN114783611A (en) Neural recovered action detecting system based on artificial intelligence
Liang et al. Real time hand movement trajectory tracking for enhancing dementia screening in ageing deaf signers of British sign language
JP2886932B2 (en) Facial expression recognition device
CN113297883A (en) Information processing method, analysis model obtaining device and electronic equipment
CN116343302A (en) Micro-expression classification and identification system based on machine vision
JP2573126B2 (en) Expression coding and emotion discrimination device
CN115690895A (en) Human skeleton point detection-based multi-person motion detection method and device
CN113408435B (en) Security monitoring method, device, equipment and storage medium
CN115410261A (en) Face recognition heterogeneous data association analysis system
CN114639168A (en) Method and system for running posture recognition
Zhang et al. An approach of region of interest detection based on visual attention and gaze tracking
CN112613399A (en) Pet emotion management system

Legal Events

Date Code Title Description
FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20070811

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080811

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090811

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100811

Year of fee payment: 10

LAPS Cancellation because of no payment of annual fees