JPH08287216A - In-face position recognizing method - Google Patents

In-face position recognizing method

Info

Publication number
JPH08287216A
Authority
JP
Japan
Prior art keywords
area
face
template
image
search
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP9247195A
Other languages
Japanese (ja)
Inventor
Hiroaki Yoshida
Masato Takami
Atsuo Saijo
Kazuo Matsumoto
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanyo Electric Co Ltd
Original Assignee
Sanyo Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co Ltd
Priority to JP9247195A
Publication of JPH08287216A
Legal status: Pending

Abstract

PURPOSE: To provide a non-contact, markerless method of recognizing positions within the face, capable of recognizing the characteristic facial positions of a subject without making the subject specially aware of the measurement.

CONSTITUTION: A person area is determined (S7) from a person candidate area obtained by analyzing a captured infrared image and a skin color area obtained by analyzing a color image captured at almost the same time; a face area is extracted from the person area by horizontal and vertical projection operations (S10); and template matching is applied to this face area to recognize positions within the face (S11). A template is registered from the first extracted face area, and when each position in a newly captured face area image is matched against the template, a plurality of search areas of different sizes are set in the image, and the area size is enlarged stepwise from the smallest to the largest search area while positions matching the template are recognized, which is preferable in terms of processing speed.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to image processing techniques for human subjects, and more particularly to a method of recognizing facial parts such as the eyes and the mouth.

[0002]

2. Description of the Related Art

In conventional image processing, when recognizing a person's face, markers such as pieces of tape were attached to the face to create feature points; by recognizing these markers as feature points, each facial part was recognized and the movement of the face was tracked.

[0003]

In this case, there was the problem that the subject suffered the annoyance and discomfort of having markers attached and became conscious of being measured.

[0004]

SUMMARY OF THE INVENTION

The present invention has been made in view of the above problems of the prior art, and an object of the present invention is to provide a non-contact, markerless method of recognizing parts within the face.

[0005]

It is another object of the present invention to provide a facial part recognition method that can recognize characteristic facial parts without making the subject aware of being measured, and that can further shorten the time required for image processing.

[0006]

MEANS FOR SOLVING THE PROBLEMS

The present invention is a method in which a person area is determined from a person candidate area obtained by analyzing a captured infrared image and a skin color area obtained by analyzing a color image captured at approximately the same time, a face area is extracted from the person area by horizontal and vertical projection operations, and parts within the face are recognized by applying template matching to the face area.

[0007]

In this case, it is preferable in terms of processing speed to register a template from the first extracted face area and, when matching the registered template against each part in a newly captured face area image, to set a plurality of search areas of different sizes within the image and to recognize the parts matching the template while enlarging the area size stepwise from the smallest search area to the largest search area.

[0008]

In particular, the smallest search area is preferably slightly larger than the registered template, and the largest slightly larger than the face area.

[0009]

OPERATION

An image containing a person against a background is captured as both an infrared image and a color image, and a person area is determined from the person candidate area obtained by analyzing the infrared image and the skin color area obtained by analyzing the color image.

[0010]

Next, a face area is extracted by horizontal and vertical projection operations on the determined person area. Template matching is then performed on the extracted face area while stepwise enlarging a plurality of search areas of different sizes, and the parts within the face area image are recognized.

[0011]

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

An embodiment of the facial part recognition method of the present invention will be described in detail below with reference to the drawings.

[0012]

FIG. 1 shows a flowchart of the main operation. First, in step S1, an infrared image is captured by an infrared imaging device (not shown) such as an infrared camera, and in step S2, a color image is captured by a color imaging device (not shown) such as a visible-light camera.

[0013]

Next, in step S3, an environmental temperature threshold for extracting a person candidate area from the captured infrared image (see FIG. 2(a)) is determined, and in step S4, this threshold is used to extract the person candidate area from the infrared image.

[0014]

Meanwhile, in step S5, a skin temperature region for determining the skin color area is extracted from the captured color image (see FIG. 2(b)). When extracting this skin temperature region, the temperature information from the infrared image is also referenced. Then, in step S6, the skin color area within the color image is determined from the obtained skin temperature region information.

[0015]

In step S7, the person area within the image (see FIG. 2(c)) is determined from the person candidate area and the skin color area obtained by the sensor fusion technique using the infrared camera and the visible-light camera as described above.

[0016]

Next, in order to extract the face area from the obtained person area, projection addition of the luminance values in the horizontal and vertical directions is performed on the person area image in steps S8 and S9.

[0017]

FIG. 3 illustrates the projection addition of steps S8 and S9: the distributions of the sums of the luminance values in the horizontal and vertical directions appear as curves. For the horizontal direction, a threshold of about 10% of the peak value is adopted; for the vertical direction, a value of a few percent of the peak value is adopted at the first rising edge, and since a large dip appears at the chin, this dip is used as the threshold.

[0018]

Thresholding is applied to the horizontal and vertical luminance characteristic curves obtained in this way, the region between the threshold points in each direction is judged to be the face area in step S10, and the face area is thus extracted from the person image.

[0019]

Then, in step S11, the characteristic parts within the face are recognized by grayscale template matching. In this embodiment, a template matching image processing board manufactured by NED was used, and the normalized correlation method was used as the template matching technique.

[0020]

FIG. 4 shows a flowchart of the template matching process. First, when the initial settings are completed in step S21, the templates on which the template matching is based are registered in step S22.

[0021]

At this time, taking the horizontal direction in FIG. 5 as the x coordinate and the vertical direction as the y coordinate:

(1) the template with the smallest x coordinate is recognized and registered as the right eyebrow (x0, y0);
(2) the template with the second smallest x coordinate as the mouth (x2, y2);
(3) the template with the largest x coordinate as the left eyebrow (x1, y1).

[0022]

FIG. 6 shows the templates of the parts (both eyebrows and the mouth) registered for the face area image obtained in step S10. Using these registered templates, the facial parts in subsequently captured images are recognized by matching.

[0023]

The recognition procedure is as follows. In step S23, the center coordinates of the search area in the image to be recognized are set. In the following, the method of performing the correlation operation on every pixel in the search area is called the full search, and the method of performing the correlation operation only every few pixels, rather than on every pixel in the search area, is called the coarse search.

[0024]

Next, in step S24, a coarse search by template matching is performed using the smallest search area A (see FIG. 7). In step S25, it is judged whether a match was found; if not, the process proceeds to step S26, where a coarse search is likewise performed using search area B (see FIG. 7), which is one size larger.

[0025]

In step S27, it is again judged whether a match was found; if not, the process proceeds to step S28, and a coarse search is performed using the still larger search area C (see FIG. 7). In this embodiment, three search areas A to C were prepared, so when it is judged in step S29 that no match was found, ERROR is displayed in step S30.

[0026]

When it is judged in any of steps S25, S27, or S29 that a match was found, it is judged in step S31 whether the positional relationship between the eyebrows and the mouth is correct. The method of judging the positional relationship between both eyebrows and the mouth is as follows.

(4) First, the reference point for judging the positional relationship is the center of gravity of the right eyebrow (see FIG. 6).

(5) The coarse search is performed over the entire search area (the entire face area).

(6) At initial registration, an image processing board (hardware) is assigned to each part. Since it is known which board is in charge of which part, combinations based on the positional relationships of the parts are formed with the board in charge of the right eyebrow as the reference.

[0027]

That is, when an arbitrary combination is taken as the i-th right eyebrow candidate (x0i, y0i), the i-th left eyebrow candidate (x1i, y1i), and the i-th mouth candidate (x2i, y2i), the combination must satisfy the limiting condition of Equation 1:

[0028]

[Equation 1]

[0029]

(7) The full search is performed on the search areas of these combinations to narrow down the candidates for each facial part.

(8) The positional relationship is judged again for the combinations remaining after the full search.

[0030]

That is, in the X direction, combinations are selected that satisfy a limiting condition such as Equation 2 on the positional relationship between the two eyebrows.

[0031]

[Equation 2]

[0032]

On the other hand, in the Y direction, combinations are selected that satisfy limiting conditions such as Equation 3 on the positional relationships between the eyebrows and the mouth and between the right eyebrow and the left eyebrow.

[0033]

[Equation 3]
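Equations 1 to 3 appear only as images in the original, so the inequalities below are purely illustrative assumptions about what such limiting conditions could look like (mouth between the eyebrows in X; mouth below both eyebrows, with the eyebrows at similar heights, in Y).

```python
def plausible_combination(right_brow, left_brow, mouth, y_tol=10):
    """Hypothetical stand-in for Equations 1-3; the actual limiting
    conditions are not reproduced in this text."""
    (x0, y0), (x1, y1), (x2, y2) = right_brow, left_brow, mouth
    in_x = x0 < x2 < x1                      # right brow, mouth, left brow in X
    in_y = y2 > max(y0, y1) and abs(y0 - y1) <= y_tol  # mouth lowest in Y
    return in_x and in_y
```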

[0034]

If it is judged in step S31 that the positional relationship between the two eyebrows and the mouth is correct, the process proceeds to step S32, where the center coordinates of the search areas are reset, and the processing from step S24 is repeated. If it is judged to be incorrect, the process returns to step S30 and ERROR is displayed.

[0035]

FIG. 8 conceptually illustrates the operation of the flowchart of FIG. 4. Moreover, as shown in FIG. 9, by processing according to the above flowchart and taking into account the positional relationship between the two eyebrows and the mouth, it is clear that changes in the orientation of the face can also be handled.

[0036]

As is also clear from FIG. 8, the coarse search for the part to be recognized is performed while the search area is enlarged; naturally, once the part is found, the search area is reduced and the full search is performed.
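This coarse-then-full refinement can be sketched by shrinking the search area around the coarse hit and re-running the per-pixel search; the sketch reuses match_template above, and the margin value is an assumption.

```python
def refine(image, template, coarse_xy, margin=8):
    """Sketch of [0036]: after a coarse hit, reduce the search area
    around it and perform the full search there."""
    x, y = coarse_xy
    th, tw = template.shape
    y0, x0 = max(0, y - margin), max(0, x - margin)
    sub = image[y0:y + th + margin, x0:x + tw + margin]
    (bx, by), score = match_template(sub, template)  # full per-pixel search
    return (x0 + bx, y0 + by), score
```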

[0037]

EFFECTS OF THE INVENTION

As described above, the present invention can provide a non-contact, markerless facial part recognition method.

[0038]

It can also provide a facial part recognition method that recognizes characteristic facial parts without making the subject aware of being measured and further shortens the time required for image processing.

[0039]

Moreover, by taking the positional relationships of the parts into account, changes in the orientation of the face can also be followed.

Brief Description of the Drawings

FIG. 1 is the main flowchart of the facial part recognition method of the present invention.

FIG. 2 is a conceptual diagram of the sensor fusion technique.

FIG. 3 is a conceptual diagram of the method of extracting the face area by projection addition.

FIG. 4 is a flowchart of the template matching process.

FIG. 5 is a conceptual diagram of the registered templates.

FIG. 6 is a conceptual diagram showing the reference point within the face and the initial information that is set.

FIG. 7 is a diagram showing the search areas used for template matching.

FIG. 8 is a conceptual diagram of the template matching process.

FIG. 9 is a conceptual diagram showing changes in face orientation and the positional relationship between the two eyebrows and the mouth.

(Continuation of front page) (72) Inventor: Kazuo Matsumoto, 2-5-5 Keihan Hondori, Moriguchi City, Osaka, c/o Sanyo Electric Co., Ltd.

Claims (3)

[Claims]

[Claim 1] A facial part recognition method in which a person area is determined from a person candidate area obtained by analyzing a captured infrared image and a skin color area obtained by analyzing a color image captured at approximately the same time, a face area is extracted by horizontal and vertical projection operations on the person area, and parts within the face are recognized by applying template matching to the face area.
[Claim 2] The facial part recognition method according to claim 1, wherein a template is registered in the first extracted face area, and, when the registered template is matched against each part in a newly captured face area image, a plurality of search areas of different sizes are set within the image and the parts matching the template are recognized while the area size is enlarged stepwise from the smallest search area to the largest search area.
[Claim 3] The facial part recognition method according to claim 2, wherein the smallest search area is slightly larger than the registered template and the largest search area is slightly larger than the face area.
JP9247195A 1995-04-18 1995-04-18 In-face position recognizing method Pending JPH08287216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP9247195A JPH08287216A (en) 1995-04-18 1995-04-18 In-face position recognizing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP9247195A JPH08287216A (en) 1995-04-18 1995-04-18 In-face position recognizing method

Publications (1)

Publication Number Publication Date
JPH08287216A (en) 1996-11-01

Family

ID=14055252

Family Applications (1)

Application Number Title Priority Date Filing Date
JP9247195A Pending JPH08287216A (en) 1995-04-18 1995-04-18 In-face position recognizing method

Country Status (1)

Country Link
JP (1) JPH08287216A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002501234A (en) * 1998-01-08 2002-01-15 シャープ株式会社 Human face tracking system
US6757422B1 (en) 1998-11-12 2004-06-29 Canon Kabushiki Kaisha Viewpoint position detection apparatus and method, and stereoscopic image display system
KR100311952B1 (en) * 1999-01-11 2001-11-02 구자홍 Method of face territory extraction using the templates matching with scope condition
WO2002007096A1 (en) * 2000-07-17 2002-01-24 Mitsubishi Denki Kabushiki Kaisha Device for tracking feature point on face
KR20030012193A (en) * 2001-07-31 2003-02-12 주식회사 드림미르 Method of eye position detection for the image with various background
KR20030040680A (en) * 2001-11-15 2003-05-23 삼성에스디에스 주식회사 Method for detecting face from video data and apparatus thereof
KR100447268B1 (en) * 2001-12-18 2004-09-07 한국전자통신연구원 Method for eye detection from face images by searching for an optimal binarization threshold
US7662047B2 (en) 2002-06-27 2010-02-16 Ssd Company Limited Information processor having input system using stroboscope
CN100382751C (en) * 2005-05-08 2008-04-23 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
KR100695159B1 (en) * 2005-08-04 2007-03-14 삼성전자주식회사 Apparatus and method for generating RGB map for skin color model and apparatus and method for detecting skin color employing the same
US8139854B2 (en) 2005-08-05 2012-03-20 Samsung Electronics Co., Ltd. Method and apparatus for performing conversion of skin color into preference color by applying face detection and skin area detection
US9443154B2 (en) 2006-05-31 2016-09-13 Mobileye Vision Technologies Ltd. Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
US7786898B2 (en) 2006-05-31 2010-08-31 Mobileye Technologies Ltd. Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
US9323992B2 (en) 2006-05-31 2016-04-26 Mobileye Vision Technologies Ltd. Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
JP2008299834A (en) * 2007-05-02 2008-12-11 Nikon Corp Photographic subject tracking program and photographic subject tracking device
US8416303B2 (en) 2008-12-05 2013-04-09 Sony Corporation Imaging apparatus and imaging method
JP4702441B2 (en) * 2008-12-05 2011-06-15 ソニー株式会社 Imaging apparatus and imaging method
JP2010136223A (en) * 2008-12-05 2010-06-17 Sony Corp Imaging device and imaging method
JP2011198270A (en) * 2010-03-23 2011-10-06 Denso It Laboratory Inc Object recognition device and controller using the same, and object recognition method
US8402050B2 (en) 2010-08-13 2013-03-19 Pantech Co., Ltd. Apparatus and method for recognizing objects using filter information
US9405986B2 (en) 2010-08-13 2016-08-02 Pantech Co., Ltd. Apparatus and method for recognizing objects using filter information
JP2019118548A (en) * 2017-12-28 2019-07-22 株式会社Jvcケンウッド Cornea reflection position detection device, visual line detection device, and cornea reflection position detection method

Similar Documents

Publication Publication Date Title
Shreve et al. Macro-and micro-expression spotting in long videos using spatio-temporal strain
US6445810B2 (en) Method and apparatus for personnel detection and tracking
JP4653606B2 (en) Image recognition apparatus, method and program
JPH08287216A (en) In-face position recognizing method
EP1650711B1 (en) Image processing device, imaging device, image processing method
JP5077956B2 (en) Information terminal equipment
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
KR101510798B1 (en) Portable Facial Expression Training System and Methods thereof
JP2006350578A (en) Image analysis device
JP2000082147A (en) Method for detecting human face and device therefor and observer tracking display
JP2004094491A (en) Face orientation estimation device and method and its program
JP2000251078A (en) Method and device for estimating three-dimensional posture of person, and method and device for estimating position of elbow of person
JP3454726B2 (en) Face orientation detection method and apparatus
JP2021503139A (en) Image processing equipment, image processing method and image processing program
JP4729188B2 (en) Gaze detection device
JP3970573B2 (en) Facial image recognition apparatus and method
JP3861421B2 (en) Personal identification device
JP3577908B2 (en) Face image recognition system
JP2004062393A (en) Method and device for determining attention
JP2005092451A (en) Head detector and head detecting method and head detecting program
Li et al. Detecting and tracking human faces in videos
JP2000268161A (en) Real time expression detector
JP5092093B2 (en) Image processing device
JP5951966B2 (en) Image processing apparatus, image processing system, image processing method, and program
JP5688514B2 (en) Gaze measurement system, method and program