JPH0877334A - Automatic feature point extracting method for face image - Google Patents

Automatic feature point extracting method for face image

Info

Publication number
JPH0877334A
Authority
JP
Japan
Prior art keywords
point
end point
eye
nose
eyebrow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP21584594A
Other languages
Japanese (ja)
Inventor
Shigeo Morishima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Konica Minolta Inc
Original Assignee
Konica Minolta Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Inc filed Critical Konica Minolta Inc
Priority to JP21584594A priority Critical patent/JPH0877334A/en
Publication of JPH0877334A publication Critical patent/JPH0877334A/en
Withdrawn legal-status Critical Current


Abstract

PURPOSE: To extract the feature points of a face image automatically, quickly, and easily by changing the feature point detection algorithm according to the part of the face and performing detection in the optimum order.

CONSTITUTION: The feature points of the face image are a head top point 1; right eyebrow left, right, upper, and lower end points 2-5; left eyebrow left, right, upper, and lower end points 6-9; right eye left, right, upper, and lower end points 10-13; left eye left, right, upper, and lower end points 14-17; nose left, right, and lower end points 18-20; lip left, right, upper, center, and lower points 21-25; a chin lower end point 32; and the like. In computing these feature points, binarization is performed within the respective candidate regions in the order lips, eyes, eyebrows, and the feature points are extracted from the extracted object regions; for the other parts, the feature points are determined by edge processing within their candidate regions, so that the set of feature points can be extracted automatically.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a method for automatically extracting the feature points of a face image, and more particularly to a method in which a candidate region is set for each part of the face image and the feature points are extracted automatically by binarization with an automatically set threshold within each region.

[0002]

2. Description of the Related Art

Because the human face is rich in variation, movement, and expression, it has been studied extensively as a medium for conveying information between people, and between people and computers. For example, when people communicate without words, facial expressions play a major role as carriers of information.

[0003] On the other hand, every person's face is different, so it can serve as effective information for characterizing individuals or groups (for example, ethnic groups).

[0004] If such face image information is handled as a bitmap image, that is, as a collection of small dots, the amount of data becomes enormous. If, however, attention is paid to the so-called "feature points" that capture only the differences from other faces, the whole face can be represented with a small amount of information, which is very effective for electronic or mechanical processing.

[0005] A face image captured by a CCD camera, scanner, or the like is stored in memory as bitmap information or displayed on a display. The traditional method of determining the feature points was to set them manually by mouse or keyboard operation while viewing the displayed face image. This method, however, requires a great deal of time and labor and can hardly be called practical.

[0006] With recent advances in image processing, several attempts have been made to automate the extraction of feature points from face images.

[0007] Conventional feature point extraction methods include one that attempts to extract facial feature points from an image containing a person using the HVS color system, and one that performs feature point extraction using the YIQ color system with thresholds set automatically based on area changes.

[0008]

PROBLEMS TO BE SOLVED BY THE INVENTION

When extracting feature points, these methods fixed the algorithm even when the part of the face to be detected differed, so they had the problem that the detection accuracy dropped markedly for some parts.

[0009] The present invention has been made in view of the above points, and its object is to provide a method capable of extracting the feature points of a face image automatically, quickly, and easily, a task that has conventionally consumed much time and labor.

[0010] Another object of the present invention is to provide a method of automatically extracting a set of feature points suited to various applications that make use of them, such as facial expression synthesis, expression deformation, expression recognition, personal identification, and average face synthesis.

[0011]

MEANS FOR SOLVING THE PROBLEMS

To achieve the above objects, the present invention uses as the feature points of a face image a head top point; right eyebrow left, right, upper, and lower end points; left eyebrow left, right, upper, and lower end points; right eye left, right, upper, and lower end points; left eye left, right, upper, and lower end points; nose left, right, and lower end points; lip left, right, upper, center, and lower points; nose left and right profile contour points; lip left and right profile contour points; chin left and right profile contour points; and a chin lower end point. In determining these feature points, binarization is performed within the respective candidate regions in the order lips, eyes, eyebrows, and the feature points are extracted from the extracted object regions; for the other parts, the feature points are determined by edge processing within their candidate regions.

[0012]

OPERATION

With the above configuration, the present invention changes the feature point detection algorithm according to the part of the face and performs detection in the optimum order.

[0013]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will now be described with reference to the drawings.

[0014] When attention is paid to the shape of a face, what represents the characteristics of each individual is information on constituent parts such as the facial contour, eyes, lips, and eyebrows.

[0015] The feature points are determined from the viewpoint of two requirements on the feature information: it should be effective for characterizing each individual, and it should be relatively easy to extract by image processing. The resulting feature points are shown in FIG. 1.

[0016] These 32 points are considered the minimum number needed to represent the feature points of a face. If an even smaller number is required and some degradation can be tolerated, the pairs 26-27 and 30-31, or the pairs 28-29 and 30-31, can be omitted.

[0017] When considering the sequence of processing from face image input to feature point extraction, the first question is which part to extract first. The part extracted first should satisfy the following conditions: individual differences in its color and shape should be small; it should be positioned so that the extraction of the other parts can be smoothly guided from it; and it should be as easy to extract as possible.

[0018] The lips can be considered a part that satisfies these conditions. Individual differences in the lips are small; the lips are positioned so that important features such as the eyes, nose, and chin can be extracted relative to them; and because the lip region can be isolated by manipulating the RGB components, much as the Q axis of the YIQ color system does, extraction is comparatively easy.

[0019] Image signals are normally given in the RGB system, and most imaging equipment also assumes RGB, which is convenient for handling and displaying color information.

[0020] However, when color discrimination as good as that of human vision is required, or when color information must be used accurately, a color system converted to suit the application is needed.

[0021] A representative non-RGB system is the YIQ color system, consisting of a luminance signal and color difference signals. The YIQ color system is obtained from the RGB color system by the linear transformation shown in Equation 1.

[0022]

[Equation 1]

The Y axis represents the luminance signal, the I axis the color difference signal between red and blue-green, and the Q axis the color difference signal between yellow-green and purple; the coefficients are chosen so that I is the axis along which visual sensitivity to color change is high and Q the axis along which it is dull. In the present invention, a specific color difference signal is first defined in order to locate the lips.
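For reference, the standard NTSC RGB-to-YIQ transform, presumably the matrix shown as Equation 1 (which survives only as an image in the original), is:

\begin{pmatrix} Y \\ I \\ Q \end{pmatrix} =
\begin{pmatrix}
0.299 & 0.587 & 0.114 \\
0.596 & -0.274 & -0.322 \\
0.211 & -0.523 & 0.312
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}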

[0023] Specifically, a color difference signal of the form (R − G − B/2) is defined to emphasize the lip region. (R − G) extracts the lip region as a difference, but some skin area is also included, so B/2 is subtracted from (R − G) to correct for it.

[0024] This color difference signal is then squared and normalized, and the result is used in the subsequent processing.
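As a rough illustration, the signal might be computed as follows. This is a minimal numpy sketch; the clipping of negative responses before squaring is our assumption, not something the patent states.

import numpy as np

def lip_emphasis(rgb):
    """(R - G - B/2) lip-emphasis signal of [0023]-[0024], squared and
    normalized to 0-255. `rgb` is a float array of shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    d = r - g - b / 2.0           # color difference emphasizing the red lips
    d = np.clip(d, 0.0, None)     # assumption: ignore negative responses
    d = d * d                     # squaring sharpens the lip/skin contrast
    return d / d.max() * 255.0 if d.max() > 0 else d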

[0025] In the present invention, the candidate regions for the lips, eyes, and eyebrows are set using a "weighted histogram" obtained by summing the pixel values of each line in the Y (or X) direction along the X (or Y) direction.

[0026] FIG. 2 shows the concept of the weighted histogram.

[0027] The histogram value is higher in regions where high-luminance (or low-luminance) pixels are concentrated. Constraint conditions on the peaks and valleys of this histogram are set for each facial part, and a rectangular candidate region is set accordingly.

[0028] The method of determining the lip candidate region using this weighted histogram is described below.

[0029] First, the weighted histogram in the Y direction is computed. The point where the histogram peaks is considered to indicate the position of the lips in the Y direction. The histogram is smoothed with a low-pass filter, and the Y-direction extent of the lip candidate region is determined from the positions of the two valleys flanking the peak.

[0030] Next, within that Y-direction extent, the X-direction extent is determined in the same way using the weighted histogram in the X direction. Given the shape of the lips, a small valley may appear near the peak; in that case, the sizes and relative positions of the peak and valley are used to judge whether the value is appropriate.
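The weighted histogram and the peak/valley search might be sketched as follows; the moving-average low-pass filter and its width are our assumptions, since the patent does not specify the filter.

import numpy as np

def weighted_histogram(img, axis):
    """Weighted histogram of [0025]: the pixel values of each line in one
    direction are summed along the other direction."""
    return img.sum(axis=axis)

def candidate_span(hist, k=9):
    """Span between the two valleys flanking the global peak of the
    smoothed histogram, as used for the lip candidate region of [0029]."""
    h = np.convolve(hist, np.ones(k) / k, mode="same")  # low-pass filter
    peak = int(np.argmax(h))
    lo = peak
    while lo > 0 and h[lo - 1] <= h[lo]:           # descend to the left valley
        lo -= 1
    hi = peak
    while hi < len(h) - 1 and h[hi + 1] <= h[hi]:  # descend to the right valley
        hi += 1
    return lo, hi

# e.g. the Y extent of the lip candidate region from a lip-emphasis image:
#   y0, y1 = candidate_span(weighted_histogram(lip, axis=1))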

[0031] In this way, a candidate region in which the lips are likely to exist is set using the (R − G − B/2) color difference signal and the weighted histogram.

[0032] Once the candidate region has been set, binarization is performed within it, and the feature points are extracted from the binary data.

[0033] Since the luminance variation in the lip region is relatively monotonic, a binarization algorithm suited to it is used. The operation proceeds in the following steps (a code sketch follows the list):
(1) For each line in the X direction, search for the point of maximum luminance change, and threshold that line at the luminance value of that point.
(2) Perform the processing of (1) in the Y direction as well.
(3) Compute the AND and the EOR of the X-direction and Y-direction binary data.
(4) Using the zero crossing of the difference histogram between the AND and the EOR as a threshold, perform thresholding within the rectangular region.
(5) AND the value obtained in (3) with the binary image obtained in (4).
(6) Judge regions other than the object region by their areas, and remove the non-object regions.
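The following minimal numpy sketch is our reading of steps (1)-(5); the dark-object polarity, the histogram binning, and the fallback threshold are assumptions the patent leaves to FIG. 3. Step (6), the area-based removal of non-object regions, is sketched later alongside the labeling of FIG. 11.

import numpy as np

def line_threshold(img, axis):
    """Steps (1)/(2): threshold each 1-D line of the image at the
    luminance value of its point of maximum luminance change."""
    def one_line(line):
        j = int(np.argmax(np.abs(np.diff(line))))  # strongest edge on the line
        return line < line[j]                      # assumption: object is dark
    return np.apply_along_axis(one_line, axis, img)

def binarize_region(img):
    """Steps (1)-(5) of [0033] for a candidate rectangle `img`
    (grayscale, 0-255)."""
    bx = line_threshold(img, axis=1)              # (1) lines in the X direction
    by = line_threshold(img, axis=0)              # (2) lines in the Y direction
    b_and, b_eor = bx & by, bx ^ by               # (3) AND and EOR
    h_and, _ = np.histogram(img[b_and], bins=256, range=(0, 255))
    h_eor, _ = np.histogram(img[b_eor], bins=256, range=(0, 255))
    diff = h_and.astype(int) - h_eor.astype(int)  # (4) difference histogram
    sign = np.sign(diff)
    zc = np.where(sign[:-1] * sign[1:] < 0)[0]    # zero crossings
    t = float(zc[0]) if len(zc) else 128.0        # assumption: mid-gray fallback
    return b_and & (img < t)                      # (5); (6) follows by area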

[0034] The binarization algorithm of steps (1)-(6) is shown in FIG. 3. Since the lip region and the eyebrow region are binarized with the same algorithm, FIG. 3 shows the algorithm together with a right-eyebrow image as a representative example.

[0035] Next, the method of extracting the feature points from the binary image obtained in this way is described.

[0036] The feature points of the lips are the five points A, B, C, D, and E shown in FIG. 4. These correspond to points 21-25 in FIG. 1.

[0037] A histogram along the X axis is taken of the binary lip image extracted within the candidate region, as in FIG. 5. Searching from the histogram maximum in the positive and negative X directions for the first points where the histogram value becomes 0 gives the X coordinates AX and BX of the left and right end points.

[0038] Next, as in FIG. 6, strip regions of some width centered on AX and BX are set; within each strip a histogram along the Y axis is taken, and the average of the maximum and minimum Y coordinates at which the histogram value exceeds 0 is taken as the Y coordinate of the left or right end point, AY or BY, respectively.

[0039] Further, as in FIG. 7, the X coordinate of the upper and lower end points, CX, is the average of AX and BX. For the Y coordinates, just as when AY and BY were obtained, a strip region of some width centered on CX is set, a histogram along the Y axis is taken within it, and the maximum and minimum Y coordinates at which the histogram value exceeds 0 are taken as the Y coordinates CY and DY of the upper and lower end points, respectively.

[0040] Point E in FIG. 4, the center of the lips, is found by searching between points C and D in each of the R, G, and B images for the point at which (R + G + B) is minimum.

[0041] With this algorithm, the five feature points of the lips are extracted.
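A sketch of the end point search of [0037]-[0039] follows; the strip half-width w and the downward-growing Y axis are our assumptions, and degenerate cases (an empty strip) are not handled.

import numpy as np

def lip_endpoints(binary, w=2):
    """End points A, B, C, D of the binary lip image `binary`."""
    hx = binary.sum(axis=0)                     # histogram along the X axis
    peak = int(np.argmax(hx))
    zeros = np.where(hx == 0)[0]
    ax = zeros[zeros < peak].max() + 1 if (zeros < peak).any() else 0
    bx = zeros[zeros > peak].min() - 1 if (zeros > peak).any() else len(hx) - 1

    def strip_ys(cx):                           # Y histogram in a strip at cx
        hy = binary[:, max(cx - w, 0):cx + w + 1].sum(axis=1)
        return np.where(hy > 0)[0]

    ay = (strip_ys(ax).max() + strip_ys(ax).min()) / 2   # left end point Y
    by = (strip_ys(bx).max() + strip_ys(bx).min()) / 2   # right end point Y
    cx = (ax + bx) // 2
    cy, dy = strip_ys(cx).min(), strip_ys(cx).max()      # upper / lower Y
    return (ax, ay), (bx, by), (cx, cy), (cx, dy)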

[0042] Once the lip feature points have been extracted, the following four candidate regions can be set (a code transcription of region (1) follows the list):
(1) Candidate region for the chin lower end point. X coordinate: X_mouth_lower ± β; Y coordinate: Y_mouth_lower to Y_mouth_lower + (Y_mouth_lower − Y_mouth_upper) × 3.
(2) Candidate region for the nose lower end point. X coordinate: X_mouth_upper ± β; Y coordinate: Y_chin − (Y_chin − Y_mouth_center) × 0.7 to Y_mouth_upper − 5.
(3) Candidate region for the nose lateral end points. X coordinate: X_nose_top to X_mouth_right and X_nose_top to X_mouth_left; Y coordinate: Y_nose_top ± β (where X_nose_top = X_mouth_upper and Y_nose_top = (Y_chin − Y_nose) × 1.05).
(4) Candidate region for the face contour points. X coordinate: X_mouth_right to (X_mouth_right − X_mouth_left) × 1.2; Y coordinate: Y_mouth_right ± β and Y_mouth_left ± β.
Here β is an integer between (0.25/100)N and (1.25/100)N, where N is the number of pixels from the head top point to the chin lower end point.
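As an illustration, region (1) can be transcribed directly into code; the dictionary keys are hypothetical names for the lip feature points, and the other three regions follow the same pattern.

def chin_candidate(lip, beta):
    """Candidate region (1) for the chin lower end point, in image
    coordinates with Y growing downward."""
    x, y_low = lip["mouth_lower"]
    _, y_up = lip["mouth_upper"]
    return (x - beta, x + beta), (y_low, y_low + (y_low - y_up) * 3)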

[0043] Edge processing (luminance changes) is used to extract the feature points from these candidate regions.

[0044] The image used is the Y-axis (luminance) image, and each feature point is determined using the sizes and relative positions of the edges. The conditions are set for each part. For example, when extracting the face contour points as shown in FIG. 8, the edge closer to the lips is selected.

[0045] Next, the method of setting the eye candidate region is described.

[0046] First, as preprocessing, a wide region like that of FIG. 9, obtained from the extracted lip feature points, the face contour points corresponding to the height of the lips, the nose lower end point, and the chin lower end point, is set on the Y axis (luminance signal). To sharpen the difference between the dark iris (low luminance) contained in the eye region and the white of the eye and the skin (high luminance), the Y-axis values within the region shown in FIG. 9 are squared and normalized to 0-255. In addition, an image with inverted luminance is used so that the histogram responds to low luminance values.

[0047] Next, the weighted histogram in the Y direction is computed. After low-pass filtering, the histogram is searched in the −Y direction, and the position of the first large peak observed is taken to be the position of the eyes in the Y direction. Depending on the image, the shadow beside the nose may be observed as the first peak, so a judgment is made based on the sizes and relative positions of the peaks and an appropriate value is derived. A width of ±20β pixels is then attached to the Y coordinate thus found, and this is taken as the Y-direction extent of the candidate region.

[0048] Next, within that Y-direction extent, the weighted histogram in the X direction is taken. The histogram is searched inward across the face from the X coordinates of the contour points corresponding to the height of the lips, and the X-direction extent is determined from the sizes and relative positions of the peaks.

[0049] Once the eye candidate region has been set, the eye feature points are extracted by binarization. The binarization algorithm is as follows.

[0050] Examining the luminance values within the rectangular region set on the Y-axis image shows that the regions to be extracted lie mostly in areas of lower luminance than the skin, that is, in the valleys of the luminance profile. Therefore, as shown in FIG. 10, the luminance values are scanned line by line, the span between the point one pixel past a local maximum and the point one pixel before the next local maximum is set to "1", all other areas are set to "0", and a binary image is generated within the rectangular region.

[0051] To raise the accuracy further, the following constraint conditions are introduced (a code sketch follows the list):
- If the difference between the local maximum under consideration and the local minimum is 10 or less, it is ignored.
- If only one pixel lies between the local maximum under consideration and the next local maximum, it is ignored.
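A per-line sketch of this valley marking under the two constraints; the handling of flat luminance plateaus and the inclusive peak test are our assumptions.

import numpy as np

def eye_binarize_line(line):
    """Mark the span lying one pixel inside two successive luminance
    maxima of a 1-D luminance array, as in [0050], subject to the
    constraints of [0051]."""
    out = np.zeros(len(line), dtype=bool)
    peaks = [i for i in range(1, len(line) - 1)
             if line[i] >= line[i - 1] and line[i] >= line[i + 1]]
    for p, q in zip(peaks, peaks[1:]):
        if q - p <= 2:                           # only one pixel between maxima
            continue
        if line[p] - line[p:q + 1].min() <= 10:  # valley too shallow: ignore
            continue
        out[p + 1:q] = True                      # one pixel inside each maximum
    return out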

[0052] Since the binary image obtained at this stage still contains many unnecessary regions besides the object region, the connected regions are searched, each is judged to be an object or non-object region by its area, and the non-object regions are removed.

[0053] Labeling is used to divide the rectangular region into object and non-object regions (for example, to separate the binarized eye region from the binarized eyebrow region within the eye rectangle).

[0054] To extract only the eye region from the binary regions within the eye rectangle, as in FIG. 11(a), the regions connected in their 8-neighborhoods are searched for and labeled as in FIG. 11(b), and the labeled region with the largest area is taken as the object region (see FIG. 11(c)).
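A compact version of this labeling step, using scipy's connected-component labeling with an all-ones 3x3 structure for 8-connectivity:

import numpy as np
from scipy import ndimage

def largest_component(binary):
    """Keep only the largest 8-connected region, as in FIG. 11 (a)-(c)."""
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    if n == 0:
        return binary
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == int(np.argmax(areas)) + 1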

[0055] Conventionally, the eye region has mostly been extracted by generating a binary image through threshold processing or the like. The region obtained by thresholding, however, does not always match the true shape because of shadows and similar effects. With the method of the present invention, by contrast, the eye feature points can be determined with high accuracy.

[0056] The feature points of the eyes are the four points A, B, C, and D shown in FIG. 12.

[0057] The feature points are extracted from the binary image with the algorithm already described for the lip feature points.

[0058] Once the eye feature points have been extracted, the candidate region for the head top point can be set using them: X coordinate: ((X_eye_right − X_eye_left)/2) ± β; Y coordinate: 0 to Y_eye. Edge processing (luminance changes) is used to extract the feature point from this candidate region.

[0059] The feature points needed to set the candidate regions other than those of the lips, eyes, and eyebrows are summarized in FIG. 13.

[0060] Next, the method of extracting the eyebrow feature points is described.

[0061] The eyebrow candidate region is likewise set using the weighted histogram.

[0062] As already stated, the position of the eyes in the Y direction is given by the position of the first peak of the histogram. The position of the peak that appears next can therefore be taken as the position of the eyebrows in the Y direction. The Y coordinate thus found minus 20β pixels is taken as the upper limit of the rectangular region, and the lower limit is the Y coordinate of the eye upper end point, which has already been found at this stage.

[0063] Next, the X-direction extent is determined from the positional relationships of the face.

[0064] The binarization algorithm for the eyebrow region is the same as that used for the lips. As preprocessing for the binarization of the eyebrow region, the luminances of the single lines at the left and right ends of the rectangular region set on the Y-axis image are compared, and the shadow component is corrected simply and linearly.
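One possible reading of this correction, given only the description above: fit a linear ramp between the mean luminances of the leftmost and rightmost columns and subtract its variation. This is a sketch of our interpretation, not a confirmed detail of the patent.

import numpy as np

def correct_shadow(img):
    """Linear shading correction across the eyebrow rectangle."""
    left, right = img[:, 0].mean(), img[:, -1].mean()
    ramp = np.linspace(left, right, img.shape[1])
    return img - (ramp - ramp.mean())   # flatten the lateral gradient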

[0065] The feature points of the eyebrows are the four points A, B, C, and D shown in FIG. 14. The four feature points are extracted by the same method as used for the lips.

[0066] Using the feature points of the face image obtained in this way, a wireframe model can be generated automatically. All N grid points making up the wireframe model are generated by interpolation from the 32 feature points obtained so far and the proportions of a standard wireframe model.

[0067] Here, however, the N points are not generated directly from the 32 feature points. First, N/5 to N/3 points containing more of the individual's facial information, such as the eyeballs, nostrils, and chin contour points, are generated, and the final N grid points are generated from these.

[0068] FIG. 15 shows a flowchart of the feature point automatic extraction method described above.

[0069]

EFFECTS OF THE INVENTION

As described above, according to the present invention, the feature points of a face image can be extracted automatically, quickly, and easily, a task that has conventionally consumed a great deal of time and labor.

[0070] Further, according to the present invention, a set of feature points suited to various applications that use feature points, such as expression synthesis, expression deformation, expression recognition, personal identification, and average face synthesis, can be extracted automatically.

[0071] Moreover, the conventional methods fixed the algorithm for feature point extraction even when the detected part of the face differed, so the detection accuracy dropped markedly for some parts. In the present invention, the optimum algorithm is used for each part, so the detection accuracy can be kept uniformly high.

[0072] Using the feature points of a face image obtained by the present invention, processing and analysis such as the following can be performed easily on a computer or the like.
(1) Expression deformation: a wireframe model is constructed using the feature points, and the expression can easily be deformed by changing the model.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram showing the feature points of a face.

FIG. 2 is a conceptual diagram of the weighted histogram.

FIG. 3 is a diagram showing the binarization algorithm for the lips and eyebrows.

FIG. 4 is a diagram showing the feature points of the lips.

FIG. 5 is a diagram of the histogram in the X-axis direction.

FIG. 6 is a diagram of the histogram in the Y-axis direction.

FIG. 7 is a diagram explaining how the X and Y coordinates of the upper and lower end points are obtained.

FIG. 8 is a diagram explaining the determination of feature points by edge strength.

FIG. 9 is a diagram showing the region used for setting the eye candidate region.

FIG. 10 is a diagram explaining the binarization algorithm for the eye region.

FIG. 11 is a set of diagrams explaining the extraction of the object region by labeling: (a) shows the binary regions within the eye rectangle, (b) the labeled regions, and (c) the object region.

FIG. 12 is a diagram showing the feature points of the eyes.

FIG. 13 is a diagram showing the feature points needed to set the candidate regions other than those of the lips, eyes, and eyebrows.

FIG. 14 is a diagram showing the feature points of the eyebrows.

FIG. 15 is a flowchart of the feature point automatic extraction method according to the present invention.

EXPLANATION OF SYMBOLS

1 head top point
2 right eyebrow left end point
3 right eyebrow right end point
4 right eyebrow upper end point
5 right eyebrow lower end point
6 left eyebrow left end point
7 left eyebrow right end point
8 left eyebrow upper end point
9 left eyebrow lower end point
10 right eye left end point
11 right eye right end point
12 right eye upper end point
13 right eye lower end point
14 left eye left end point
15 left eye right end point
16 left eye upper end point
17 left eye lower end point
18 nose left end point
19 nose right end point
20 nose lower end point
21 lip left end point
22 lip right end point
23 lip upper end point
24 lip center point
25 lip lower end point
26 nose left profile contour point
27 nose right profile contour point
28 lip left profile contour point
29 lip right profile contour point
30 chin left profile contour point
31 chin right profile contour point
32 chin lower end point

Claims (16)

[Claims]

1. A method for automatically extracting feature points of a face image, characterized by using, as the feature points of the face image, a head top point, a right eyebrow left end point, a right eyebrow right end point, a right eyebrow upper end point, a right eyebrow lower end point, a left eyebrow left end point, a left eyebrow right end point, a left eyebrow upper end point, a left eyebrow lower end point, a right eye left end point, a right eye right end point, a right eye upper end point, a right eye lower end point, a left eye left end point, a left eye right end point, a left eye upper end point, a left eye lower end point, a nose left end point, a nose right end point, a nose lower end point, a lip left end point, a lip right end point, a lip upper end point, a lip center point, a lip lower end point, a nose left profile contour point, a nose right profile contour point, a lip left profile contour point, a lip right profile contour point, a chin left profile contour point, a chin right profile contour point, and a chin lower end point.
2. The method for automatically extracting feature points of a face image according to claim 1, wherein the number of pixels N from the head top point to the chin lower end point is 256 or more.
3. A method for automatically extracting feature points of a face image, characterized in that, in setting the candidate regions for the lips, eyes, and eyebrows, a weighted histogram obtained by summing the number of pixels of each line in the Y (or X) direction along the X (or Y) direction is used.
4. A method for automatically extracting feature points of a face image, characterized in that binarization is performed within the candidate regions in the order lips, eyes, eyebrows, feature points are extracted from the extracted object regions, and for the other parts the feature points are determined by edge processing within their candidate regions.
5. A method for automatically extracting feature points of a face image, characterized in that a color difference signal (R − G − B/2) and a weighted histogram are used in automatically setting the lip candidate region.
6. A method for automatically extracting feature points of a face image, characterized in that, in determining the eye and eyebrow candidate regions, a region obtained from the extracted lip feature points, the face contour points corresponding to the height of the lips, the nose lower end point, and the chin lower end point is set on the Y axis (luminance signal); the Y-axis values within the region are squared and normalized to 0-255; and an image with inverted luminance is used.
7. A method for automatically extracting feature points of a face image, characterized in that, in the binarization within the eye candidate region, the luminance values are scanned line by line, the span between the point one pixel past a local maximum and the point one pixel before the next local maximum is set to "1" and the remaining areas to "0" to generate a binary image within the rectangular region; as constraint conditions, a local maximum is ignored if the difference between it and the associated local minimum is 10 or less, and it is ignored if only one pixel lies between it and the next local maximum; and the connected regions are then searched, judged to be object or non-object regions by their areas, and the non-object regions are removed.
8. A method for automatically extracting feature points of a face image, characterized in that, as the binarization means for the lips and eyebrows, the point of maximum luminance change is searched for on each line in the X direction and each line is thresholded at the luminance value of that point; the same processing is performed in the Y direction; the AND and EOR of the X-direction and Y-direction binary data are computed; the zero crossing of the difference histogram between the AND and the EOR is used as a threshold for thresholding within the rectangular region; the AND value is ANDed with the binary image obtained by the thresholding; and regions other than the object region are judged by their areas and removed.
9. A method for automatically extracting feature points of a face image, characterized by using, as the eye candidate region, X coordinate: X_PEAK ± 2α and Y coordinate: Y_PEAK ± α, where α is an integer between 0.1N and 0.11N and N is the number of pixels from the head top point to the chin lower end point.
10. A method for automatically extracting feature points of a face image, characterized by using, as the lip candidate region, X coordinate: X_PEAK ± 2α and Y coordinate: Y_PEAK ± α, where α is an integer between 0.1N and 0.11N and N is the number of pixels from the head top point to the chin lower end point.
11. A method for automatically extracting feature points of a face image, characterized by using, as the eyebrow candidate region, X coordinate: X_PEAK ± 2α and Y coordinate: Y_PEAK − α, where α is an integer between 0.1N and 0.11N and N is the number of pixels from the head top point to the chin lower end point.
12. A method for automatically extracting feature points of a face image, characterized by using, as the candidate region for the chin lower end point, X coordinate: X_mouth_lower ± β and Y coordinate: Y_mouth_lower to Y_mouth_lower + (Y_mouth_lower − Y_mouth_upper) × 3, where β is an integer between (0.25/100)N and (1.25/100)N and N is the number of pixels from the head top point to the chin lower end point.
13. A method for automatically extracting feature points of a face image, characterized by using, as the candidate region for the nose lower end point, X coordinate: X_mouth_upper ± β and Y coordinate: Y_chin − (Y_chin − Y_mouth_center) × 0.7 to Y_mouth_upper − 5β, where β is an integer between (0.25/100)N and (1.25/100)N and N is the number of pixels from the head top point to the chin lower end point.
14. A method for automatically extracting feature points of a face image, characterized by using, as the candidate region for the nose lateral end points, X coordinate: X_nose_top to X_mouth_right and X_nose_top to X_mouth_left, and Y coordinate: Y_nose_top ± β, where X_nose_top = X_mouth_upper, Y_nose_top = (Y_chin − Y_nose) × 1.05, β is an integer between (0.25/100)N and (1.25/100)N, and N is the number of pixels from the head top point to the chin lower end point.
15. A method for automatically extracting feature points of a face image, characterized by using, as the candidate region for the face contour points, X coordinate: X_mouth_right to (X_mouth_right − X_mouth_left) × 1.2, and Y coordinate: Y_mouth_right ± β and Y_mouth_left ± β, where β is an integer between (0.25/100)N and (1.25/100)N and N is the number of pixels from the head top point to the chin lower end point.
16. A method for automatically extracting feature points of a face image, characterized by using, as the candidate region for the head top point, X coordinate: ((X_eye_right − X_eye_left)/2) ± β and Y coordinate: 0 to Y_eye, where β is an integer between (0.25/100)N and (1.25/100)N and N is the number of pixels from the head top point to the chin lower end point.
JP21584594A 1994-09-09 1994-09-09 Automatic feature point extracting method for face image Withdrawn JPH0877334A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP21584594A JPH0877334A (en) 1994-09-09 1994-09-09 Automatic feature point extracting method for face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP21584594A JPH0877334A (en) 1994-09-09 1994-09-09 Automatic feature point extracting method for face image

Publications (1)

Publication Number Publication Date
JPH0877334A true JPH0877334A (en) 1996-03-22

Family

ID=16679229

Family Applications (1)

Application Number Title Priority Date Filing Date
JP21584594A Withdrawn JPH0877334A (en) 1994-09-09 1994-09-09 Automatic feature point extracting method for face image

Country Status (1)

Country Link
JP (1) JPH0877334A (en)


Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000311248A (en) * 1999-04-28 2000-11-07 Sharp Corp Image processor
KR100390569B1 (en) * 1999-10-14 2003-07-07 주식회사 드림미르 Scale and Rotation Invariant Intelligent Face Detection
KR100343223B1 (en) * 1999-12-07 2002-07-10 윤종용 Apparatus for eye and face detection and method thereof
WO2005055143A1 (en) * 2003-12-05 2005-06-16 Seiko Epson Corporation Person head top detection method, head top detection system, and head top detection program
WO2005055144A1 (en) * 2003-12-05 2005-06-16 Seiko Epson Corporation Person face jaw detection method, jaw detection system, and jaw detection program
US7460705B2 (en) 2003-12-05 2008-12-02 Seiko Epson Corporation Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
JP2005208760A (en) * 2004-01-20 2005-08-04 Fujitsu Ltd Person image extraction device and computer program
US7764828B2 (en) 2004-12-08 2010-07-27 Sony Corporation Method, apparatus, and computer program for processing image
US8391595B2 (en) 2006-05-26 2013-03-05 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US8532431B2 (en) 2007-05-08 2013-09-10 Canon Kabushiki Kaisha Image search apparatus, image search method, and storage medium for matching images with search conditions using image feature amounts
US8374439B2 (en) 2008-06-25 2013-02-12 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program, and computer-readable print medium
US8630503B2 (en) 2008-06-25 2014-01-14 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program
EP2391115A2 (en) 2010-05-24 2011-11-30 Canon Kabushiki Kaisha Image processing apparatus, control method, and program
US8639030B2 (en) 2010-05-24 2014-01-28 Canon Kabushiki Kaisha Image processing using an adaptation rate
US9398282B2 (en) 2010-05-24 2016-07-19 Canon Kabushiki Kaisha Image processing apparatus, control method, and computer-readable medium
US8442315B2 (en) 2010-07-16 2013-05-14 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable medium
US8457397B2 (en) 2010-07-16 2013-06-04 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable medium
US8842914B2 (en) 2010-07-16 2014-09-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable medium
US8934712B2 (en) 2010-07-16 2015-01-13 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable medium
US9002107B2 (en) 2010-07-16 2015-04-07 Canon Kabushiki Kaisha Color balance correction based on skin color and highlight color
US9406003B2 (en) 2010-07-16 2016-08-02 Canon Kabushiki Kaisha Image processing with color balance correction
US9214027B2 (en) 2012-07-09 2015-12-15 Canon Kabushiki Kaisha Apparatus, method, and non-transitory computer-readable medium
US10055640B2 (en) 2012-07-09 2018-08-21 Canon Kabushiki Kaisha Classification of feature information into groups based upon similarity, and apparatus, image processing method, and computer-readable storage medium thereof
US9275270B2 (en) 2012-07-09 2016-03-01 Canon Kabushiki Kaisha Information processing apparatus and control method thereof
US9280720B2 (en) 2012-07-09 2016-03-08 Canon Kabushiki Kaisha Apparatus, method, and computer-readable storage medium
US9292760B2 (en) 2012-07-09 2016-03-22 Canon Kabushiki Kaisha Apparatus, method, and non-transitory computer-readable medium
US9299177B2 (en) 2012-07-09 2016-03-29 Canon Kabushiki Kaisha Apparatus, method and non-transitory computer-readable medium using layout similarity
US10127436B2 (en) 2012-07-09 2018-11-13 Canon Kabushiki Kaisha Apparatus, image processing method and storage medium storing program
US9189681B2 (en) 2012-07-09 2015-11-17 Canon Kabushiki Kaisha Image processing apparatus, method thereof, and computer-readable storage medium
US9014487B2 (en) 2012-07-09 2015-04-21 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US10013395B2 (en) 2012-07-09 2018-07-03 Canon Kabushiki Kaisha Apparatus, control method thereof, and storage medium that determine a layout image from a generated plurality of layout images by evaluating selected target images
US9501688B2 (en) 2012-07-09 2016-11-22 Canon Kabushiki Kaisha Apparatus, processing method and storage medium storing program
US9208595B2 (en) 2012-07-09 2015-12-08 Canon Kabushiki Kaisha Apparatus, image processing method and storage medium storing program
US9519842B2 (en) 2012-07-09 2016-12-13 Canon Kabushiki Kaisha Apparatus and method for managing an object extracted from image data
US9852325B2 (en) 2012-07-09 2017-12-26 Canon Kabushiki Kaisha Apparatus, image processing method and storage medium storing program
US9558212B2 (en) 2012-07-09 2017-01-31 Canon Kabushiki Kaisha Apparatus, image processing method and computer-readable storage medium for object identification based on dictionary information
US9563823B2 (en) 2012-07-09 2017-02-07 Canon Kabushiki Kaisha Apparatus and method for managing an object extracted from image data
US9846681B2 (en) 2012-07-09 2017-12-19 Canon Kabushiki Kaisha Apparatus and method for outputting layout image
US9542594B2 (en) 2013-06-28 2017-01-10 Canon Kabushiki Kaisha Information processing apparatus, method for processing information, and program
US9509870B2 (en) 2013-09-05 2016-11-29 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium enabling layout varations
US9904879B2 (en) 2013-09-05 2018-02-27 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US9436706B2 (en) 2013-09-05 2016-09-06 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium for laying out images
JP2016066327A (en) * 2014-09-26 2016-04-28 株式会社Jvcケンウッド Image processing device, image processing method and image processing program
WO2023062762A1 (en) * 2021-10-13 2023-04-20 富士通株式会社 Estimation program, estimation method, and information processing device

Similar Documents

Publication Publication Date Title
JPH0877334A (en) Automatic feature point extracting method for face image
EP2833288B1 (en) Face calibration method and system, and computer storage medium
US7035461B2 (en) Method for detecting objects in digital images
JP5538909B2 (en) Detection apparatus and method
US7068840B2 (en) Determination of an illuminant of digital color image by segmentation and filtering
CN112819094A (en) Target detection and identification method based on structural similarity measurement
KR100422709B1 (en) Face detecting method depend on image
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
JP2007272435A (en) Face feature extraction device and face feature extraction method
CN109740572A (en) A kind of human face in-vivo detection method based on partial color textural characteristics
WO2020038312A1 (en) Multi-channel tongue body edge detection device and method, and storage medium
WO2020173024A1 (en) Multi-gesture precise segmentation method for smart home scenario
WO2005055143A1 (en) Person head top detection method, head top detection system, and head top detection program
JP2000105819A (en) Face image area detecting device
CN109948461A (en) A kind of sign language image partition method based on center coordination and range conversion
JP3459950B2 (en) Face detection and face tracking method and apparatus
CN114863492B (en) Method and device for repairing low-quality fingerprint image
CN111161281A (en) Face region identification method and device and storage medium
JP4625949B2 (en) Object tracking method, object tracking apparatus, and program
Arsic et al. Improved lip detection algorithm based on region segmentation and edge detection
JP2007188407A (en) Image processing device and image processing program
KR20130111021A (en) Device and method for processing image
CN110458012B (en) Multi-angle face recognition method and device, storage medium and terminal
CN109766860B (en) Face detection method based on improved Adaboost algorithm
KR20020085669A (en) The Apparatus and Method for Abstracting Peculiarity of Two-Dimensional Image & The Apparatus and Method for Creating Three-Dimensional Image Using Them

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20011120