JPH01314385A - Method and device for detecting face image - Google Patents

Method and device for detecting face image

Info

Publication number
JPH01314385A
Authority
JP
Japan
Prior art keywords
face
area
storage means
mouth
candidate group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP63147256A
Other languages
Japanese (ja)
Other versions
JP2767814B2 (en)
Inventor
Hajime Kawakami
肇 川上
Yukio Miyatake
行夫 宮武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP63147256A priority Critical patent/JP2767814B2/en
Publication of JPH01314385A publication Critical patent/JPH01314385A/en
Application granted granted Critical
Publication of JP2767814B2 publication Critical patent/JP2767814B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Landscapes

  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

PURPOSE: To detect the eye, eyebrow, and mouth areas irrespective of the orientation of the face in the image by detecting, when detecting features of a face image, non-skin-color areas surrounded by a skin-color area as area candidates corresponding to facial features.

CONSTITUTION: The device is provided with a face structure collating means 12 for collating face candidate groups, formed by combining areas stored in an eye candidate group storage means 11 with areas stored in a mouth candidate group storage means 17, against the face structure stored in a face structure storage means 15. A facial feature candidate is detected as a non-skin-color area within a skin-color area. For instance, the mouth area, one example of a facial feature, is rarely hidden by hair or the like and is therefore stably contained in the skin-color area of a face image; and since the color of the lips often changes with lipstick and the like, it is difficult to specify a lip color, so a mouth candidate is instead detected as a non-skin-color area surrounded by the skin-color area. In this way, the feature areas corresponding to the eyebrows, nose, and mouth can be set correctly.

Description

DETAILED DESCRIPTION OF THE INVENTION

(Field of Industrial Application) The present invention relates to face image detection techniques needed, for example, to automate the detection of the face image portion of a TV camera image, which is important in applications such as entrance control.

(Prior Art) As an example of a conventional method for detecting face images, the method described in the literature [Takashi Anezaki, Hiroyuki Sakemi, "Image Processing for a Portrait Robot (I): Processing Overview and Feature Extraction," 1985 National Convention of the Institute of Electronics and Communication Engineers of Japan, P5-83.1] will be explained.

In the conventional method for extracting facial features from an image input by, for example, a TV camera, the first stage identifies the region pair corresponding to the eyes from among the black regions into which the image is segmented, as shown in Fig. 9, using conditional expressions on the area, perimeter, centroid position, and so on computed for each group of black regions, together with the relative positional relationships among those regions. In the second stage, feature regions enclosing the eyebrows, nose, and mouth are constructed from the coordinates of the eye pair, and in the third stage, features are extracted from each of those regions.

(Problems to be Solved by the Invention) However, when a conventional method is used to identify, for example, an upside-down face image, it is difficult to determine the vertical orientation of the face from the eye-pair regions alone, and it was therefore difficult to correctly place the feature regions corresponding to the eyebrows, nose, and mouth.

An object of the present invention is to provide a face image detection technique that does not depend on how the face appears in the image.

(Means for Solving the Problems) The first face image detection method of the present invention is characterized in that candidates for the regions corresponding to facial features are detected as non-skin-color regions within a skin-color region.

The second face image detection method of the present invention is characterized in that sets of facial feature candidates are screened by the ratio of the area of the figure whose vertices are the candidate positions to the area of the face.

The first face image detection device of the present invention comprises: a face structure storage means that stores the structure of a face; a face image storage means that stores a face image; an eye candidate group finding means that detects candidates for the regions corresponding to the eyes from the image stored in the face image storage means; an eye candidate group storage means that stores the regions detected by the eye candidate group finding means; a mouth candidate group finding means that detects candidates for the regions corresponding to the mouth from the image stored in the face image storage means; a mouth candidate group storage means that stores the regions detected by the mouth candidate group finding means; and a face structure matching means that matches face candidate groups, formed by combining the regions stored in the eye candidate group storage means with the regions stored in the mouth candidate group storage means, against the face structure stored in the face structure storage means.
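The claimed "means" map naturally onto a small pipeline object. The following is a minimal structural sketch, not the patent's implementation: the storage means become plain attributes, the finding and matching means become injected callables, and all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class FaceImageDetector:
    face_structure: Any                     # face structure storage means (15)
    face_image: Any = None                  # face image storage means (14)
    eye_candidates: List[Any] = field(default_factory=list)    # storage means (11)
    mouth_candidates: List[Any] = field(default_factory=list)  # storage means (17)

    def detect(self,
               find_eyes: Callable,         # eye candidate group finding means (10)
               find_mouths: Callable,       # mouth candidate group finding means (16)
               match: Callable):            # face structure matching means (12)
        self.eye_candidates = find_eyes(self.face_image)
        self.mouth_candidates = find_mouths(self.face_image)
        # combine eye/mouth candidates and match against the stored structure
        return match(self.eye_candidates, self.mouth_candidates,
                     self.face_structure)
```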

The second face image detection device of the present invention is characterized in that, in addition to the first face image detection device, it has a threshold determining means that calculates the parameters defining the black region and the skin-color region of the face image.

(Operation) The first invention detects facial feature candidates as non-skin-color regions within the skin-color region.

That is, taking the mouth region as an example of a facial feature: first, it is very rarely hidden by hair or the like, so it is stably contained within the skin-color region of a face image; second, the color of the lips often changes with lipstick and the like, so it is difficult to specify a lip color. The present invention therefore detects, for example, a mouth candidate as a non-skin-color region surrounded by the skin-color region.

The second invention screens sets of facial feature candidates by the ratio of the area of the figure whose vertices are the candidate positions to the area of the face.

That is, the ratio of the area of the triangle whose vertices are, for example, the centroids of both eyes and the mouth to the area of the face is stable even when the viewing direction of the face changes somewhat; furthermore, since the face itself can be detected as, for example, the skin-color region, it can serve as a scale for measuring the size of the face. Accordingly, for an incorrect combination of eye and mouth centroids, the area ratio of the resulting triangle to the face does not fall within a preset range, so incorrect eye and mouth candidates can be screened out by examining this ratio.
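As a rough sketch of this screening rule: the triangle area follows from the cross product of two edge vectors, and the face area can be taken as the pixel count of the skin-color region. The bounds s0 and s1 below are placeholder values, not the patent's constants.

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three (x, y) centroids."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def plausible_feature_set(eye_r, eye_l, mouth, face_area, s0=0.01, s1=0.10):
    """Keep an (eyes, mouth) candidate set only if the eye-eye-mouth
    triangle occupies a face-relative area inside the preset range."""
    return s0 < triangle_area(eye_r, eye_l, mouth) / face_area < s1
```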

The third invention selects the regions corresponding to the eyes, eyebrows, and mouth from among candidate region groups obtained by processing the input image, by matching the shapes and positional relationships of the candidate region groups for the eyes, eyebrows, and mouth against a face structure model representing the eyes, eyebrows, and mouth.

That is, among the region groups making up the input image, the present invention detects as the regions corresponding to the eyes, eyebrows, and mouth those regions that satisfy a conditional expression (1-1) over the following first parameter group, concerning the relative positions of the face structure illustrated in Fig. 8(A):

L: distance between the centroids of the two eyes
L1: distance between the centroid of the mouth and the straight line passing through the centroids of both eyes
L2: distance between the centroid of the mouth and the perpendicular bisector of the segment joining the centroids of both eyes
B3: distance between the centroid of the right eyebrow and that perpendicular bisector
B4: distance between the centroid of the left eyebrow and that perpendicular bisector
B5: distance between the centroid of the right eyebrow and the line through the centroid of the right eye perpendicular to the segment joining the centroids of both eyes
B6: distance between the centroid of the left eyebrow and the line through the centroid of the left eye perpendicular to the segment joining the centroids of both eyes
PBR: position of the right eyebrow; PBL: position of the left eyebrow; PER: position of the right eye; PEL: position of the left eye; PM: position of the mouth

and a conditional expression (1-2) over the following second parameter group, concerning the shape of the face structure illustrated in Fig. 8(B) and independent of how the image is captured:

θ1: angle between the segment joining the eye centroids and the principal axis direction of the right-eye region
θ2: angle between that segment and the principal axis direction of the left-eye region
θ3: angle between that segment and the principal axis direction of the mouth region
θ4: angle between that segment and the principal axis direction of the right-eyebrow region
θ5: angle between that segment and the principal axis direction of the left-eyebrow region
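The θ parameters compare the eye-to-eye direction with each region's principal axis. One standard way to obtain a principal axis (in the spirit of the moment-based analysis the embodiment later cites from Digital Picture Processing) is from second-order central moments; this sketch assumes binary region masks and is not the patent's exact formulation.

```python
import numpy as np

def principal_axis_angle(mask):
    """Principal-axis orientation of a binary region, from its
    second-order central moments."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

def feature_angle(eye_r_centroid, eye_l_centroid, region_mask):
    """One theta of Fig. 8(B): angle between the eye-centroid segment
    and the region's principal axis."""
    (xr, yr), (xl, yl) = eye_r_centroid, eye_l_centroid
    eye_line = np.arctan2(yl - yr, xl - xr)
    return eye_line - principal_axis_angle(region_mask)
```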

The fourth invention detects the region corresponding to the skin-color region of a face image by exploiting the fact that the brightness histogram of a face image captured against a bright background has three peaks, corresponding to the black region, the skin-color region, and the background.

That is, from the brightness histogram 81 of the face image, in which maxima corresponding to black, skin color, and the background appear at B1, B4, and B6 as illustrated in Fig. 10, the present invention finds the smallest local minimum B2 of the histogram 81 whose pixel value is greater than B1, the largest local minimum B3 whose pixel value is smaller than B4, and the smallest local minimum B5 whose pixel value is greater than B4. A pixel with value B is then judged to belong to the black region if

B < B2            (2-1)

and to the skin-color region if

B3 ≤ B < B5            (2-2)

(Embodiment 1) A first embodiment of the present invention will now be described in detail with reference to the drawings.

Fig. 1 is a block diagram of the first embodiment, which uses the third invention to output the regions corresponding to the eyes, eyebrows, and mouth from among the regions making up an input image such as that illustrated in Fig. 2(A).

In Fig. 1, 14 is a face image storage means that stores a face image 39, illustrated in Fig. 2(A), input for example by a TV camera; 15 is a face structure storage means that stores the face structure models illustrated in Figs. 8(A) and 8(B) in the form of expressions (1-1) and (1-2), respectively.

Operation begins, for example, with the control means 18 activating the eye candidate group finding means 10. Denoting the face image 39 by f(x, y), the activated eye candidate group finding means 10, as a first stage, composes a black region image g(x, y), illustrated for example in Fig. 3(A), in which the pixels of value 1 represent the black region, using for example a threshold TL determined beforehand by experiment (expression (3)). As a second stage, it cuts off the protruding parts of the black regions represented by g(x, y) by the technique described in the specification of Japanese Patent Application No. 63-1902, "Figure Recognition Device," and composes the image illustrated in Fig. 3(B). As a third stage, among the regions represented by the pixels of value 1 in that image, it keeps only those whose area, for example, lies within a prespecified range, thereby composing the eye candidate group image 45 illustrated in Fig. 3(C); it then stores the eye candidate group image 45 in the eye candidate group storage means 11 and ends its processing.

When the eye candidate group finding means 10 finishes, the control means 18 activates the mouth candidate group finding means 16, which uses, for example, the first principle. Denoting the face image 39 by f(x, y), the activated mouth candidate group finding means 16, as a first stage, composes a skin-color region image h(x, y), illustrated for example in Fig. 4(A), in which the pixels of value 1 represent the skin-color region, using for example thresholds TSO and TSI determined beforehand by experiment (expression (5)). As a second stage, it composes the mouth candidate group image 49 illustrated in Fig. 4(B) by assigning the value 1 to the pixel groups corresponding to the hole regions contained in the skin-color region image h(x, y); it then stores the mouth candidate group image 49 in the mouth candidate group storage means 17 and ends its processing.

When the mouth candidate group finding means 16 finishes, the control means 18 activates the face structure matching means 12.

As a first stage, the activated face structure matching means 12 selects, for example, the regions 50 and 51 illustrated in Fig. 3(C) from the region group stored in the eye candidate group storage means 11, and associates, for example, the centroid of region 50 with the point PER of Fig. 8(A) and the centroid of region 51 with the point PEL. As a second stage, it computes, for example, the distance between the centroids of regions 50 and 51 (expression (7)) to set the value of the parameter L shown in Fig. 8(A). As a third stage, it selects from the region group stored in the mouth candidate group storage means 17 a region, for example the region 60 illustrated in Fig. 4(B), that satisfies expression (1-1) stored in the face structure storage means 15 when, for example, its centroid is associated with the point PM shown in Fig. 8(A). As a fourth stage, following for example the second principle, it computes the area St of the triangle 80 illustrated in Fig. 5, whose vertices are the centroids of regions 50, 51, and 60; with S denoting the area, holes included, of the skin-color region image h(x, y) illustrated in Fig. 4(A), the processing from the first through the fourth stage is repeated until the condition

s1 > St / S > s0            (8)

is satisfied for suitably chosen constants s0 and s1. As a fifth stage, it prepares a six-dimensional vector

M = (m1, m2, m3, m4, m5, m6)            (9)

whose first component is the region number of the right eye, second component the region number of the left eye, third component the region number of the right eyebrow, fourth component the region number of the left eyebrow, fifth component the region number of the mouth, and sixth component a score; after initializing the third, fourth, and sixth components, it assigns, for example, the region numbers 50, 51, and 60 shown in Fig. 5 to the components m1, m2, and m5 of the vector M (expression (10)). As a sixth stage, it computes for the regions 50, 51, and 60 the ratios

ζ50, ζ51, ζ60            (11)

of, for example, the standard deviations along the major-axis and minor-axis directions, obtained by the technique described in the publication ("Digital Picture Processing", Second Edition, Volume 2, Academic Press, pp. 286-290, 1982). If the condition using, for example, suitably chosen constants a1, a2, a3, and a4 is satisfied, it selects from the region group stored in the eye candidate group storage means 11 regions that satisfy expression (1-1) stored in the face structure storage means 15 when their centroids are associated with the points PBR and PBL shown in Fig. 8(A), for example the regions 53 and 54 illustrated in Fig. 4(B), and substitutes region 53 into the third component and region 54 into the fourth component of expression (9); it then computes the parameters θ1, θ2, θ3, θ4, and θ5 shown in Fig. 8(B) for the principal axis directions of the regions 50, 51, 53, 54, and 60 substituted as components of expression (9), and if these parameters do not satisfy expression (1-2) stored in the face structure storage means 15, it subtracts, for example, 1 from the sixth component of expression (9). As a seventh stage, it executes the processing of the first through sixth stages for every combination of the regions stored in the eye candidate group storage means 11, composing a plurality of vectors of the form (9); for the vectors whose sixth component is maximal, it narrows the intervals given as the conditions of expressions (1-1) and (1-2) stored in the face structure storage means 15 and performs processing like that of the sixth stage, continuing until only one vector with the maximal sixth component remains. As an eighth stage, it outputs the vector of expression (9) left by the above processing to the display means 13, activates the display means 13, and ends its processing.

The activated display means 13 judges that the first, second, third, fourth, and fifth components of the vector output by the face structure matching means 12 are the right-eye region, left-eye region, right-eyebrow region, left-eyebrow region, and mouth region, respectively. It displays, for example on a CRT, an image 90, illustrated in Fig. 6(A), of those regions stored in the eye candidate group storage means 11 whose numbers are given by the first through fourth components of the vector, and an image 91, illustrated in Fig. 6(B), of the region stored in the mouth candidate group storage means 17 whose number is given by the fifth component; this completes all processing.

In the above processing, the eye candidate group storage means 11, the face image storage means 14, the mouth candidate group storage means 17, and the face structure storage means 15 can each be constituted, for example, by memory; the display means 13 can be constituted, for example, by memory, a CRT, and current display technology; and the control means 18 can be constituted, for example, by memory and a microprocessor.

(Embodiment 2) A second embodiment of the present invention will now be described in detail with reference to the drawings. Fig. 7 is a block diagram of the second embodiment, which uses the fourth invention to output the regions corresponding to the eyes, eyebrows, and mouth from among the regions making up an input image such as that illustrated in Fig. 2(A).

In Fig. 7, 14 is a face image storage means that stores a face image 40, illustrated in Fig. 2(A), taken for example from the output of a TV camera; 15 is a face structure storage means that stores the face structure models illustrated in Figs. 8(A) and 8(B) in the form of expressions (1-1) and (1-2), respectively.

Operation begins, for example, with the control means 18 activating the threshold determining means 19, which uses, for example, the fourth principle. As a first stage, the activated threshold determining means 19 processes, for example, the face image 40 to create a histogram 41 of pixel values, illustrated in Fig. 2(B). As a second stage, it finds in the histogram 41 the pixel value 43 giving the peak closest to, for example, a pre-given black pixel value 42. As a third stage, it finds the pixel value 44 marking the local minimum of the histogram 41 that is greater than the pixel value 43 and smallest. As a fourth stage, it finds the pixel value 46 giving the peak closest to, for example, a pre-given skin-color pixel value 45. As a fifth stage, it finds the pixel value 40 marking the local minimum of the histogram 41 that is smaller than the pixel value 46 and largest. As a sixth stage, it finds the pixel value 47 marking the local minimum of the histogram 41 that is greater than the pixel value 46 and smallest. As a seventh stage, it records the pixel values 44, 40, and 47 in the threshold storage means 9 as the variables

TL, TSO, TSI            (13)

respectively, and ends its processing.

When the threshold determining means 19 finishes, the control means 18 activates the eye candidate group finding means 10. The activated eye candidate group finding means 10 uses the TL of expression (13) stored in the threshold storage means 9 as the value of the TL in expression (3), operates in the same way as the procedure described for the first embodiment, and ends its processing.

When the eye candidate group finding means 10 finishes, the control means 18 activates the mouth candidate group finding means 16. The activated mouth candidate group finding means 16 uses the TSO and TSI of expression (13) stored in the threshold storage means 9 as the values of the TSO and TSI in expression (5), operates in the same way as the procedure described for the first embodiment, and ends its processing.

After the mouth candidate group finding means 16 finishes, operation proceeds in the same way as the procedure described for the first embodiment, and all processing then ends.

In the above description, the threshold storage means 9, the eye candidate group storage means 11, the face image storage means 14, the mouth candidate group storage means 17, and the face structure storage means 15 can each be constituted, for example, by memory; the display means 13 can be constituted, for example, by memory, a CRT, and current display technology; and the control means 18 can be constituted, for example, by memory and a microprocessor.

(Effects of the Invention) The first invention described above has the effect of being able to detect the eye, eyebrow, and mouth regions regardless of the orientation of the face in the image. The second invention, in addition to the effect of the first invention, can select the regions corresponding to the eyes, eyebrows, and mouth more appropriately than the first invention for many input images, and consequently has the effect of improving the accuracy of the processing.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing the first embodiment of the present invention; Figs. 2(A) and (B), Figs. 3(A), (B), and (C), Figs. 4(A) and (B), Fig. 5, and Figs. 6(A) and (B) are diagrams for explaining the operation of the first embodiment; Fig. 7 is a block diagram showing the second embodiment; Figs. 8(A) and (B) are diagrams for explaining the present invention in detail; and Figs. 9 and 10 are diagrams for explaining the prior art.

In the figures, 10 is an eye candidate group finding means, 11 an eye candidate group storage means, 14 a face image storage means, 12 a face structure matching means, 16 a mouth candidate group finding means, 17 a mouth candidate group storage means, 19 a threshold determining means, 39 the input image used in the embodiments, and 41 the histogram of the input image used in the embodiments.

Claims (4)

[Claims]
(1) A face image detection method which, when detecting features of a face image, detects a non-skin-color region surrounded by a skin-color region as a region candidate corresponding to a facial feature.
(2) A face image detection method which, when detecting features of a face image, screens sets of facial feature candidates by the ratio of the area of the figure whose vertices are the candidate positions to the area of the face.
(3) A face image detection device for finding the regions corresponding to the eyes and mouth of a face image, comprising: a face structure storage means that stores the structure of a face; a face image storage means that stores a face image; an eye candidate group finding means that detects candidates for the regions corresponding to the eyes from the image stored in the face image storage means; an eye candidate group storage means that stores the regions detected by the eye candidate group finding means; a mouth candidate group finding means that detects candidates for the regions corresponding to the mouth from the image stored in the face image storage means; a mouth candidate group storage means that stores the regions detected by the mouth candidate group finding means; and a face structure matching means that matches face candidate groups, formed by combining the regions stored in the eye candidate group storage means with the regions stored in the mouth candidate group storage means, against the face structure stored in the face structure storage means.
(4) The face image detection device according to claim 3, further comprising a threshold determining means that calculates the parameters defining the black region and the skin-color region of the face image.
JP63147256A 1988-06-14 1988-06-14 Face image detection method and apparatus Expired - Fee Related JP2767814B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP63147256A JP2767814B2 (en) 1988-06-14 1988-06-14 Face image detection method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP63147256A JP2767814B2 (en) 1988-06-14 1988-06-14 Face image detection method and apparatus

Publications (2)

Publication Number Publication Date
JPH01314385A true JPH01314385A (en) 1989-12-19
JP2767814B2 JP2767814B2 (en) 1998-06-18

Family

ID=15426119

Family Applications (1)

Application Number Title Priority Date Filing Date
JP63147256A Expired - Fee Related JP2767814B2 (en) 1988-06-14 1988-06-14 Face image detection method and apparatus

Country Status (1)

Country Link
JP (1) JP2767814B2 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7577297B2 (en) 2002-12-16 2009-08-18 Canon Kabushiki Kaisha Pattern identification method, device thereof, and program thereof
EP3358501B1 (en) 2003-07-18 2020-01-01 Canon Kabushiki Kaisha Image processing device, imaging device, image processing method


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61100872A (en) * 1984-10-23 1986-05-19 Sony Corp Method of discriminating individual
JPS61145690A (en) * 1984-12-19 1986-07-03 Matsushita Electric Ind Co Ltd Recognizing device of characteristic part of face
JPS6322397A (en) * 1986-07-04 1988-01-29 エムケ−精工株式会社 Oil exchanger

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07311833A (en) * 1994-05-17 1995-11-28 Nec Corp Human face detecting device
US7502494B2 (en) 2003-01-30 2009-03-10 Fujtisu Limited Face orientation detection apparatus, face orientation detection method, and computer memory product
WO2007074844A1 (en) * 2005-12-28 2007-07-05 Kao Corporation Detecting method and detecting system for positions of face parts
US8131013B2 (en) 2005-12-28 2012-03-06 Kao Corporation Method and detecting system for positions of facial parts
JP2007286923A (en) * 2006-04-17 2007-11-01 Kao Corp Face part position detection method and system
JP4530173B2 (en) * 2006-04-17 2010-08-25 花王株式会社 Method and system for detecting the position of a facial part
JP2009244921A (en) * 2008-03-28 2009-10-22 Nec Infrontia Corp Face image characteristic extraction method, its device and its program
JP4725900B2 (en) * 2008-03-28 2011-07-13 Necインフロンティア株式会社 Facial image feature extraction method, apparatus and program thereof

Also Published As

Publication number Publication date
JP2767814B2 (en) 1998-06-18

Similar Documents

Publication Publication Date Title
CN109690617B (en) System and method for digital cosmetic mirror
EP3338217B1 (en) Feature detection and masking in images based on color distributions
US9245330B2 (en) Image processing device, image processing method, and computer readable medium
US7095879B2 (en) System and method for face recognition using synthesized images
CN100407221C (en) Central location of a face detecting device, method and program
WO2015182134A1 (en) Improved setting of virtual illumination environment
JP2001109907A (en) Three-dimensional model generation device, three- dimensional model generation method, and recording medium recording three-dimensional model generation program
WO2004025564A1 (en) Face direction estimation device, face direction estimation method, and face direction estimation program
CN111783511A (en) Beauty treatment method, device, terminal and storage medium
US20170193644A1 (en) Background removal
JP2007272435A (en) Face feature extraction device and face feature extraction method
JP2005276182A (en) Method and device for creating human skin and lip area mask data
CN113344836B (en) Face image processing method and device, computer readable storage medium and terminal
JP2005165984A (en) Method, system and program for detecting vertex of human face
JP2000311248A (en) Image processor
CN103997593A (en) Image creating device, image creating method and recording medium storing program
JPH01314385A (en) Method and device for detecting face image
JP2009205283A (en) Image processing apparatus, method and program
US9323981B2 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
JPH1185988A (en) Face image recognition system
KR20010084996A (en) Method for generating 3 dimension avatar using one face image and vending machine with the same
CN111553217A (en) Driver call monitoring method and system
CN113674177B (en) Automatic makeup method, device, equipment and storage medium for portrait lips
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
KR20220134472A (en) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees