JPS61208185A - Recognizing device for feature part of face - Google Patents

Recognizing device for feature part of face

Info

Publication number
JPS61208185A
Authority
JP
Japan
Prior art keywords
area
feature
face
image
specific
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP4905185A
Other languages
Japanese (ja)
Other versions
JPH0510707B2 (en)
Inventor
Takashi Anezaki
姉崎 隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Holdings Corp
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Priority to JP4905185A priority Critical patent/JPS61208185A/en
Publication of JPS61208185A publication Critical patent/JPS61208185A/en
Publication of JPH0510707B2 publication Critical patent/JPH0510707B2/ja
Granted legal-status Critical Current

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

PURPOSE: To extract feature points with a small amount of computation by comparing the feature parameter values of each region of a binary image with the standard parameter values of a specific region to identify that region, and by calculating the positions of the feature parts from the positional information of the identified region together with previously calculated relative positional relationships among the feature parts.

CONSTITUTION: A face image input from an image input part 101 is stored in an image storage part 102 and converted by a binarizing part 103 into a binary image. This binary image is labeled region by region by a specific face area detecting part 104, and the feature parameters (area, perimeter, centroid, etc.) of each label (region) are calculated; a specific region of the face (e.g., the eyes) is detected on the basis of these parameters. A feature part position calculating part 105 then calculates the range in which each feature part exists, using proportional expressions that represent the relative positional relationships among the feature parts (for example, once the positions and spacing of the eyes are known, the range in which the eyebrows exist can be calculated two-dimensionally from statistics).

Description

DETAILED DESCRIPTION OF THE INVENTION

Field of the Invention

The present invention relates to the feature point recognition portion of a face identification system used for identifying human faces.

Prior Art

Recognition of facial feature points is at once an old and a new subject. Work on human face identification to date includes a man-machine system for classifying facial photographs and studies of facial feature parameters, but none of it has gone beyond the research stage.

Two further studies have since been published. One is a method that detects the boundary line from a profile silhouette image and identifies the face from the shape of that line ("The Automatic Recognition of Human Faces from Profile Silhouettes," G. T. Kaufman et al., IEEE Trans. on SMC, Vol. SMC-6, No. 2, Feb. 1976, pp. 113-121). The other is a method that filters frontal face image data to detect edges and then detects edge-shape features to identify the individual feature points (mouth, nose, eyes, and so on) (Sakai et al., "Computer Analysis of Facial Photographs," Transactions of the Institute of Electronics and Communication Engineers of Japan, April 1973, pp. 226-233).

Problems to Be Solved by the Invention

One of the main objects of the present invention is to obtain, from face image data, the position of each feature point (eyes, nose, mouth, etc.) together with the region in which it exists. The results obtained become the principal parameters for face identification. Among the prior techniques, when a silhouette image of the face is used, the region information of each feature point (existence range, shape, etc.) is already lost at the time of image input, so the main object above cannot be met. In the case of the feature point detection method using a filtered image, a filtering operator of large mask size is needed to obtain region information, and applying it over the whole image requires an enormous amount of computation. A filtered image is, moreover, sensitive to shadows appearing on the face; consequently, for a face image of a person wearing glasses, identification of the eyes becomes difficult because of the shadows cast by the glasses and the frames of the glasses.

Means for Solving the Problems

To solve these problems, the present invention sets an appropriate threshold to generate a binary image containing regions such as the eyes, hair, and eyebrows; from the regions obtained, it selects a specific feature part that appears stably under the image input conditions; and it identifies the region of this feature part using the feature parameters of each region (area, perimeter, centroid coordinates, etc.).

Function

The method of recognizing facial feature parts according to the present invention stores face image data input from an image input device in an image memory, binarizes the stored image data to generate a binary image containing a specific region of the face, calculates the feature parameter values of each region in the generated binary image, and identifies the specific region by comparing these values with standard parameter values for that region of the face. Using the positional information of the identified region and the previously calculated relative positional relationships among the feature parts of the face, it then calculates the position where each feature part exists, thereby detecting both the position and the region shape of each feature part of the face.
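A minimal end-to-end sketch of this flow, assuming NumPy/SciPy; the threshold, the area bounds, and the 0.45 axis ratio are illustrative placeholders, not values fixed by the patent, and each step is treated in more detail in the embodiment below.

```python
import numpy as np
from scipy import ndimage

def recognize_feature_parts(gray):
    binary = gray <= 64                        # binarize (assumed threshold)
    labels, n = ndimage.label(binary)          # regions of the binary image
    feats = []
    for k in range(1, n + 1):                  # feature parameters per region
        ys, xs = np.nonzero(labels == k)
        feats.append({"area": ys.size, "centroid": (ys.mean(), xs.mean())})
    # compare with "standard" parameter values for the eyes (assumed bounds)
    eyes = [f for f in feats if 8 < f["area"] < 200]
    if len(eyes) < 2:
        return None
    (y1, x1), (y2, x2) = eyes[0]["centroid"], eyes[1]["centroid"]
    L = abs(x2 - x1)                           # inter-eye axis length
    # position of another feature part as a constant multiple of L
    eyebrow_row = (y1 + y2) / 2.0 - 0.45 * L   # placeholder constant
    return {"eyes": (eyes[0], eyes[1]), "eyebrow_row": eyebrow_row}

gray = np.full((64, 64), 220, dtype=np.uint8)  # bright background
gray[20:25, 15:25] = 30                        # synthetic "eyes"
gray[20:25, 40:50] = 30
print(recognize_feature_parts(gray))
```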

Embodiment

Fig. 1 shows the outline of an example of an apparatus for carrying out the present invention. In the figure, 101 is an image input part. A face image input through the image input part 101 is stored in an image storage part 102. The stored face image data is binarized by the binarizing part 103, and the binary image is temporarily stored. The specific face area detecting part 104 labels the binary image region by region, calculates the feature parameters (area, perimeter, centroid position, etc.) for each label (region), and on the basis of these detects the specific region of the face (the eyes). Next, the feature part position calculating part 105 calculates the range in which each feature part of the face exists, using proportional expressions that represent relative positional relationships among the feature parts obtained in advance (for example, once the positions and spacing of the eyes are known, the ranges in which the eyebrows and the mouth should exist can be calculated two-dimensionally from statistics).

In this embodiment, the specific region of the face is taken to be the eyes, and a concrete recognition method is described below.

Fig. 2 shows an example of face image data binarized with a threshold θ. As the figure shows, a simple binary image is obtained, and the shape of the eyes still remains realistic.

Fig. 3 shows the relation of the threshold θ to the histogram (the frequency distribution of gray levels). The gray level at the first valley from the left in the figure is the threshold θ. When the image is input under constant lighting conditions against a bright, single-color background, a histogram with distinct features such as that of Fig. 3 is obtained. Level range (1) in the figure corresponds to the eyes, black hair, and the like; level range (2) corresponds to the remaining partial regions of the face; and level range (3) corresponds to the background. As long as the above conditions are maintained, this tendency hardly changes even over several hundred images, and the value of θ is also nearly constant. Fig. 2 is the result of exploiting this property and binarizing the image so as to extract level range (1).
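As a concrete illustration of this thresholding step, the sketch below picks θ at the first histogram valley and extracts the dark level range (1). This is a minimal sketch assuming NumPy; the smoothing width, the fallback value, and the synthetic test image are illustrative assumptions, not part of the patent.

```python
import numpy as np

def first_valley_threshold(gray: np.ndarray, smooth: int = 5) -> int:
    """Return the gray level of the first valley from the left of the histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    # Light smoothing so pixel noise does not create spurious valleys.
    kernel = np.ones(smooth) / smooth
    hist = np.convolve(hist, kernel, mode="same")
    for g in range(1, 255):
        if hist[g] < hist[g - 1] and hist[g] <= hist[g + 1]:
            return g  # first local minimum = threshold theta
    return 128        # fallback if no clear valley is found

def binarize_dark_regions(gray: np.ndarray) -> np.ndarray:
    theta = first_valley_threshold(gray)
    return gray <= theta  # True where eyes / hair / eyebrows are expected

# Example with a synthetic image: dark blobs on a bright background.
gray = np.full((64, 64), 220, dtype=np.uint8)
gray[20:25, 15:25] = 30   # "left eye"
gray[20:25, 40:50] = 30   # "right eye"
binary = binarize_dark_regions(gray)
print(binary.sum(), "dark pixels")
```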

The data of Fig. 2 is initially a mere set of pixels and has no meaning as regions. Labeling is the operation of joining adjacent pixels to extract regions. Labeling is described in detail in the reference (P. H. Winston et al., "LISP," Baifukan, pp. 133-136).
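A minimal labeling sketch, assuming SciPy's connected-component labeling and a boundary-pixel count as the perimeter; the patent names area, perimeter, and centroid as the feature parameters but does not fix their formulas, so these choices are assumptions.

```python
import numpy as np
from scipy import ndimage

def label_regions(binary: np.ndarray):
    labels, n = ndimage.label(binary)  # 4-connectivity by default
    feats = []
    for k in range(1, n + 1):
        mask = labels == k
        area = int(mask.sum())
        # Boundary pixels: region pixels with at least one background
        # 4-neighbor (a simple perimeter approximation).
        eroded = ndimage.binary_erosion(mask)
        perimeter = int((mask & ~eroded).sum())
        ys, xs = np.nonzero(mask)
        centroid = (float(ys.mean()), float(xs.mean()))  # (row, col)
        feats.append({"label": k, "area": area,
                      "perimeter": perimeter, "centroid": centroid})
    return labels, feats

binary = np.zeros((16, 16), dtype=bool)
binary[3:6, 2:7] = True    # region 1
binary[3:6, 10:15] = True  # region 2
_, feats = label_regions(binary)
for f in feats:
    print(f)
```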

Fig. 4 shows the result of numbering each region after labeling, and Fig. 5 shows the result of calculating the feature parameters for each label. Eye region pairs are detected by first applying the single-eye conditions to these feature parameters and then applying the conditions on the two eyes as a pair.

The single-eye conditions and the pair conditions are as follows.

Single-eye conditions:
(1) S1 < area < S2 (S1: lower area bound, S2: upper area bound)
(2) area > k1 × perimeter

Pair conditions:
(3) |y1 − y2| < ΔY (limit on the vertical difference)
(4) Δx1 < |x1 − x2| < Δx2 (lower and upper bounds on the horizontal separation)

The eye regions obtained as a result are shown in Fig. 6.
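A sketch of applying conditions (1)-(4); all threshold values (S1, S2, k1, ΔY, Δx1, Δx2) are chosen here as illustrative assumptions, since the patent leaves them unspecified.

```python
from itertools import combinations

S1, S2 = 8, 200        # lower / upper area bounds (assumed)
K1 = 0.8               # area-to-perimeter factor (assumed)
DY = 4.0               # max vertical difference between the two eyes (assumed)
DX1, DX2 = 6.0, 40.0   # min / max horizontal separation (assumed)

def is_eye_candidate(f) -> bool:
    # Single-eye conditions (1) and (2).
    return S1 < f["area"] < S2 and f["area"] > K1 * f["perimeter"]

def find_eye_pair(feats):
    # Pair conditions (3) and (4) over all candidate pairs.
    cands = [f for f in feats if is_eye_candidate(f)]
    for a, b in combinations(cands, 2):
        (y1, x1), (y2, x2) = a["centroid"], b["centroid"]
        if abs(y1 - y2) < DY and DX1 < abs(x1 - x2) < DX2:
            return a, b
    return None

feats = [
    {"area": 45, "perimeter": 24, "centroid": (21.0, 18.0)},   # left eye
    {"area": 47, "perimeter": 25, "centroid": (21.5, 44.0)},   # right eye
    {"area": 400, "perimeter": 90, "centroid": (5.0, 30.0)},   # hair (too big)
]
print(find_eye_pair(feats))
```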

The line joining the centroids of the detected eye regions is taken as the base axis (L in Fig. 7), and the existence ranges of the feature parts are called, as in Fig. 7, the eyebrow mask, the eye mask, the nose mask, and the mouth mask. The edges of each mask are parallel or perpendicular to the axis, and each mask itself is a rectangle. The distance between each edge of a mask and the axis is as indicated between the arrows in Fig. 7, and every value is expressed as a constant multiple of the length of the axis. This reflects the fact that the relative positional relationships (distances) among the feature parts of a face do not differ greatly from individual to individual.

Accordingly, once the centroid positions of both eyes are known, the eyebrow mask, eye mask, nose mask, mouth mask, and so on can be calculated automatically. Furthermore, using these masks, processing suited to each feature part can be applied. Fig. 8 shows the eyebrow mask, eye mask, nose mask, and mouth mask generated by the above method directly on the binary image of Fig. 2.
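A sketch of the mask construction under the simplifying assumption of an upright face (horizontal axis L); the multipliers below are invented placeholders standing in for the statistically derived constants the patent refers to, whose actual values it does not state.

```python
import numpy as np

# (vertical offset above the axis, vertical offset below the axis,
#  horizontal margin beyond each eye), all as multiples of the axis length.
MASKS = {
    "eyebrow": (0.45, 0.10, 0.25),   # placeholder constants
    "eye":     (0.15, 0.15, 0.25),
    "nose":    (-0.10, 0.75, -0.15),
    "mouth":   (-0.55, 1.05, 0.00),
}

def feature_masks(left_eye, right_eye):
    """left_eye / right_eye: (row, col) centroids. Returns rectangles as
    (top, bottom, left, right) in pixel coordinates."""
    (y1, x1), (y2, x2) = left_eye, right_eye
    axis_y = (y1 + y2) / 2.0             # height of the base axis
    L = abs(x2 - x1)                     # axis length (inter-eye distance)
    rects = {}
    for name, (above, below, margin) in MASKS.items():
        top = axis_y - above * L
        bottom = axis_y + below * L
        left = min(x1, x2) - margin * L
        right = max(x1, x2) + margin * L
        rects[name] = (top, bottom, left, right)
    return rects

for name, r in feature_masks((21.0, 18.0), (21.5, 44.0)).items():
    print(name, np.round(r, 1))
```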

Effects of the Invention

As described above, the present invention reduces the amount of computation by using a binary image for identification. By selecting the binarization threshold, realistic region shapes can be obtained, so that the original shapes and features can be used in the discriminant functions when identifying regions. Furthermore, by using the relative positional relationships among the feature parts, the existence ranges of feature parts other than the detected feature region can be obtained, and processing suited to each feature part can be applied within those ranges.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram showing the outline of an example of an apparatus for carrying out the present invention. Fig. 2 shows an example of an image obtained by binarization. Fig. 3 shows the relation between the histogram of the original image data of Fig. 2 and the threshold θ. Fig. 4 shows the result of labeling the image of Fig. 2 and numbering each region. Fig. 5 shows the result of calculating the feature parameters for each region of Fig. 4. Fig. 6 shows the result of identifying the eyes in the image of Fig. 2. Fig. 7 shows the relative positional relationships of the feature parts, with the line through both eyes as the axis. Fig. 8 shows the calculated existence range (mask) of each feature part superimposed on the image of Fig. 2.

101: image input part; 102: image storage part; 103: binarizing part; 104: eye position detecting part; 105: feature part position calculating part.

Name of agent: Toshio Nakao, patent attorney, and one other.

Claims (3)

(1) A device for recognizing feature parts of a face, comprising: means for storing, in an image memory, face image data input from an image input device; means for binarizing the stored image data to generate a binary image containing a specific region of the face; means for calculating the feature parameter values of each region in the generated binary image and identifying the specific region by comparing them with standard parameter values, prepared in advance, that represent the features of the specific part of the face; and means for calculating the position of each feature part of the face using the positional information of the identified specific region and previously calculated relative positional relationships among the feature parts of the face.

(2) The device for recognizing feature parts of a face according to claim 1, wherein the specific region is the eyes.

(3) The device for recognizing feature parts of a face according to claim 1, wherein the feature parameters are the area, centroid coordinates, and perimeter of each region in the binary image.
JP4905185A 1985-03-12 1985-03-12 Recognizing device for feature part of face Granted JPS61208185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP4905185A JPS61208185A (en) 1985-03-12 1985-03-12 Recognizing device for feature part of face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP4905185A JPS61208185A (en) 1985-03-12 1985-03-12 Recognizing device for feature part of face

Publications (2)

Publication Number Publication Date
JPS61208185A (en) 1986-09-16
JPH0510707B2 JPH0510707B2 (en) 1993-02-10

Family

ID=12820281

Family Applications (1)

Application Number Title Priority Date Filing Date
JP4905185A Granted JPS61208185A (en) 1985-03-12 1985-03-12 Recognizing device for feature part of face

Country Status (1)

Country Link
JP (1) JPS61208185A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5499529A (en) * 1978-01-17 1979-08-06 Nec Corp Pattern recognizer
JPS59790A (en) * 1982-06-28 1984-01-05 Fuji Electric Co Ltd Pattern recognition device
JPS59194274A (en) * 1983-04-18 1984-11-05 Nippon Telegr & Teleph Corp <Ntt> Person deciding device

Also Published As

Publication number Publication date
JPH0510707B2 (en) 1993-02-10

Similar Documents

Publication Publication Date Title
CN104036278B (en) The extracting method of face algorithm standard rules face image
WO2019000653A1 (en) Image target identification method and apparatus
CN103034852B (en) The detection method of particular color pedestrian under Still Camera scene
CN110378179B (en) Subway ticket evasion behavior detection method and system based on infrared thermal imaging
CN106960202A (en) A kind of smiling face&#39;s recognition methods merged based on visible ray with infrared image
CN109918971B (en) Method and device for detecting number of people in monitoring video
CN106650623A (en) Face detection-based method for verifying personnel and identity document for exit and entry
CN105447859A (en) Field wheat aphid counting method
WO2019119515A1 (en) Face analysis and filtering method, device, embedded apparatus, dielectric and integrated circuit
JP2007272435A (en) Face feature extraction device and face feature extraction method
JP2872776B2 (en) Face image matching device
CN109359577A (en) A kind of Complex Background number detection system based on machine learning
CN111860369A (en) Fraud identification method and device and storage medium
CN110443184A (en) ID card information extracting method, device and computer storage medium
CN114241542A (en) Face recognition method based on image stitching
CN103927518B (en) A kind of face feature extraction method for human face analysis system
CN106203338A (en) Based on net region segmentation and the human eye state method for quickly identifying of threshold adaptive
CN110222647A (en) A kind of human face in-vivo detection method based on convolutional neural networks
WO2011074014A2 (en) A system for lip corner detection using vision based approach
Das et al. Human face detection in color images using HSV color histogram and WLD
CN108416304A (en) A kind of three classification method for detecting human face using contextual information
CN117475353A (en) Video-based abnormal smoke identification method and system
CN110427907B (en) Face recognition preprocessing method for gray level image boundary detection and noise frame filling
JPH05108804A (en) Identifying method and executing device for three-dimensional object
CN111163332A (en) Video pornography detection method, terminal and medium

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term