JP2004021924A - Face feature extraction method, apparatus and information storage medium - Google Patents


Info

Publication number
JP2004021924A
JP2004021924A
Authority
JP
Japan
Prior art keywords
face
information
extracting
feature
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2002180080A
Other languages
Japanese (ja)
Inventor
Atsushi Marukame
丸亀 敦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP2002180080A priority Critical patent/JP2004021924A/en
Publication of JP2004021924A publication Critical patent/JP2004021924A/en
Withdrawn legal-status Critical Current

Abstract

PROBLEM TO BE SOLVED: To extract feature parts such as the mouth, nose, and eyes from three-dimensional face data.
SOLUTION: A high-frequency component removal means 100 generates data D200 in which the high-frequency components of the shape have been removed from original three-dimensional face data D100, which comprises three-dimensional shape data and its color/luminance information. Based on patterns D300 describing the colors and shapes of facial feature parts such as the mouth, nose, and eyes, stored in a color-luminance/shape pattern storage means 300, a feature extraction means 200 extracts facial feature parts D400 from the low-pass-filtered data D200. The storage means 300 pre-stores shape patterns and color/luminance patterns for the desired feature parts together with their positional relations D300 to other feature parts. When multiple feature parts are to be found, not only the pre-stored information but also position information D500 on feature parts already located by the feature extraction means 200 is stored.
COPYRIGHT: (C)2004,JPO

Description

[0001]
[Technical Field of the Invention]
The present invention relates to a method for extracting feature parts such as the mouth, nose, and eyes from three-dimensional face data, and to an apparatus and an information storage medium therefor.
[0002]
[Prior Art]
Among techniques for extracting feature parts such as the mouth, nose, and eyes from three-dimensional face range data in which shape and color-luminance information are stored, the mainstream approaches are either manual extraction by a person using a dedicated user interface, or extraction using color-luminance information alone, applying the image processing used to find characteristic regions in two-dimensional images.
[0003]
Other methods use shape information, such as JP-A-02-311962 and the paper "Curvature-based face surface recognition using spherical correlation. Principal directions for curved object recognition", Tanaka, H.T., Ikeda, M., Chiaki, H., Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 372-377. These, however, extract regions of characteristic concavity or convexity anywhere on the face, rather than facial part features such as the mouth, nose, and eyes.
[0004]
[Problems to Be Solved by the Invention]
However, the methods above have the following problems.
[0005]
Because a person works by hand, the dedicated-user-interface approach reliably avoids obvious misextractions, but the workload is heavy, and the quality of detailed extraction depends strongly on the maturity of the interface and the skill of the operator.
[0006]
The approach using color-luminance information is easily affected by shooting conditions such as variations in illumination and pose, and by individual differences among subjects, so stable extraction can hardly be expected from it alone.
[0007]
Shape information, on the other hand, carries useful information independent of color-luminance information. The nose in particular differs little in color-luminance from its surroundings but is rich in shape variation, as expressed by curvature. However, since shape information such as curvature is obtained by differential operations, it is easily corrupted by noise when the measuring device acquiring the shape performs poorly.
[0008]
Moreover, it has not previously been known which information of three-dimensional features allows parts such as the mouth, nose, and eyes to be extracted stably.
[0009]
The method of the present invention therefore categorizes characteristic patterns of the shape and color of feature parts such as the mouth, nose, and eyes into templates, and extracts the feature parts by comparing these templates with the input three-dimensional data. For data affected by noise because the measuring device performs poorly, a frequency transform is applied, only the low-frequency portion is kept, and the inverse-transformed data is used for processing.
[0010]
This preprocessing removes the differential-geometric features of the high-frequency components, which mainly reflect individual differences and measurement error, while retaining only the differential-geometric features of the low-frequency components, which mainly reflect the general structure of the human face.
[0011]
Furthermore, for a part such as the mouth, whose shape and color-luminance both differ markedly from the surroundings, combining color-luminance information with shape information prevents misextractions that color-luminance information alone cannot prevent in principle, for example the case where illumination makes the lips and their surroundings similar in color so that the surroundings are extracted as well.
[0012]
An object of the first to eleventh aspects of the invention is to reduce the operator's workload when extracting parts from three-dimensional face data.
[0013]
An object of the first, third, fourth, fifth, sixth, eighth, ninth, tenth, and eleventh aspects of the invention is to reduce the influence of noise added to the shape when extracting parts from three-dimensional face data.
[0014]
An object of the first to eleventh aspects of the invention is to reduce the influence of variations in illumination and pose when extracting parts from three-dimensional face data.
[0015]
[Means for Solving the Problems]
The first face feature extraction method of the present invention is a method for extracting facial feature parts, including the mouth, nose, and eyes, from three-dimensional face data in which shape information and color-luminance information are stored, comprising:
a first procedure of removing high-frequency components from the shape information of the three-dimensional face data;
a second procedure of storing face information comprising the shape patterns, color patterns, and search ranges of facial feature parts; and
a third procedure of extracting facial features from the low-frequency-component three-dimensional face data output by the first procedure, using the face information of the second procedure.
[0016]
The second face feature extraction method of the present invention is a method for extracting facial feature parts, including the mouth, nose, and eyes, from three-dimensional face data in which shape information and color-luminance information are stored, comprising:
a first procedure of storing face information comprising the shape patterns, color patterns, and search ranges of facial feature parts; and
a second procedure of extracting facial features from the three-dimensional face data and the face information of the first procedure.
[0017]
The third face feature extraction method of the present invention comprises, in the first or second invention, extracting the mouth as the extracted feature part.
[0018]
The fourth face feature extraction method of the present invention comprises, in the first or second invention, extracting the nose as the extracted feature part.
[0019]
The fifth face feature extraction method of the present invention comprises, in the first or second invention, extracting the eyes as the extracted feature part.
[0020]
The first face feature extraction apparatus of the present invention is an apparatus for extracting facial feature parts, including the mouth, nose, and eyes, from three-dimensional face data in which shape information and color-luminance information are stored, comprising:
high-frequency component removal means for removing high-frequency components from the shape information of the three-dimensional face data;
color-luminance/shape pattern storage means for storing face information comprising the shape patterns, color patterns, and search ranges of facial feature parts; and
feature extraction means for extracting facial features from the low-frequency-component three-dimensional face data output by the high-frequency component removal means, using the face information.
[0021]
The second face feature extraction apparatus of the present invention is an apparatus for extracting facial feature parts, including the mouth, nose, and eyes, from three-dimensional face data in which shape information and color-luminance information are stored, comprising:
color-luminance/shape pattern storage means for storing face information comprising the shape patterns, color patterns, and search ranges of facial feature parts; and
feature extraction means for extracting facial features from the three-dimensional face data and the face information.
[0022]
The third face feature extraction apparatus of the present invention comprises, in the first or second invention, extracting the mouth as the extracted feature part.
[0023]
The fourth face feature extraction apparatus of the present invention comprises, in the first or second invention, extracting the nose as the extracted feature part.
[0024]
The fifth face feature extraction apparatus of the present invention comprises, in the first or second invention, extracting the eyes as the extracted feature part.
[0025]
The first face feature extraction information storage medium of the present invention is an information storage medium storing computer-readable software, in which is stored a program for causing the computer to execute the face feature extraction method of the first, second, third, fourth, or fifth invention.
[0026]
[Embodiments of the Invention]
FIG. 1 shows one embodiment of the face feature extraction method of the first invention.
[0027]
The method is composed of three means that operate as follows.
[0028]
The high-frequency component removal means 100 creates low-frequency-component face three-dimensional data D200 by removing the high-frequency shape components from the original face three-dimensional data D100, the original face image information consisting of three-dimensional shape data and its accompanying color-luminance information, and sends the result to the feature extraction means 200.
[0029]
The feature extraction means 200 extracts facial feature parts D400 based on the color/shape patterns D300, the face information on feature parts such as the mouth, nose, and eyes accumulated by the color-luminance/shape pattern storage means 300.
[0030]
The color-luminance/shape pattern storage means 300 pre-stores color/shape patterns D300 describing the shape patterns and color-luminance patterns of the desired feature parts and their positional relations to other feature parts. When multiple facial feature parts are to be found, the storage means also accumulates, besides the pre-stored information, the feature-part position data D500 obtained earlier by the feature extraction means 200.
[0031]
The operation of each means is now described in detail.
[0032]
The high-frequency component removal means 100 acts as a high-cut filter that removes only the high-frequency components mixed, as noise, into the shape information of the original face three-dimensional data D100.
[0033]
Various filters can be used: a widely practiced frequency transform such as the Fourier transform, the DCT, or the wavelet transform, or a transform into an eigenspace, such as principal component analysis, keeping only the principal terms.
[0034]
Applying any of these filters to the original face three-dimensional data D100 separates the high-frequency and low-frequency components; keeping only the low-frequency components and inverse-transforming them yields the low-frequency-component face three-dimensional data D200.
[0035]
The filter can be applied in several ways, and the appropriate choice also depends on the format in which the three-dimensional face range data is stored.
[0036]
For example, when the three-dimensional face coordinates (X, Y, Z) are stored projected onto cylindrical coordinates (U, V), as in a Mercator-like projection (FIG. 2), a block-wise two-dimensional Fourier transform or DCT, as commonly applied to the color and luminance information of images, may be used; or, when the face is known to be parallel to the XY plane and facing front as in FIG. 2, a one-dimensional Fourier transform or DCT may be applied along the U and V directions separately. Even after such a transform, the shape features marking facial parts remain low-frequency variations: if only the region below an appropriate cutoff frequency is kept, only the high-frequency noise is removed, while the variations representing the relief of the face survive.
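As a concrete illustration of this low-pass step, the sketch below (not taken from the patent; the grid size, cutoff fraction, and function names are assumptions) removes high-frequency components from a depth map Z(u, v) with a two-dimensional FFT, in the spirit of the block-wise Fourier/DCT variants described above:

```python
import numpy as np

def lowpass_depth(z, keep_frac=0.1):
    """Remove high-frequency shape components from a depth map z(u, v).

    The map is taken to the frequency domain, coefficients above a
    cutoff are zeroed, and the result is transformed back, keeping only
    the smooth, low-frequency facial relief.
    """
    Z = np.fft.fft2(z)
    h, w = z.shape
    cu, cv = int(h * keep_frac), int(w * keep_frac)
    mask = np.zeros_like(Z, dtype=bool)
    # Low frequencies live in the corners of the unshifted FFT grid.
    mask[:cu, :cv] = mask[:cu, -cv:] = mask[-cu:, :cv] = mask[-cu:, -cv:] = True
    return np.real(np.fft.ifft2(np.where(mask, Z, 0)))

# Synthetic "face" relief (a smooth bump) plus sensor noise.
u, v = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
smooth = np.exp(-4 * (u**2 + v**2))                 # low-frequency shape
noisy = smooth + 0.05 * np.random.default_rng(0).standard_normal((64, 64))
filtered = lowpass_depth(noisy, keep_frac=0.15)

# Filtering should bring the noisy map closer to the underlying shape.
err_noisy = np.abs(noisy - smooth).mean()
err_filtered = np.abs(filtered - smooth).mean()
```

After filtering, the mean deviation from the underlying smooth relief drops, while the bump itself, being low-frequency, is preserved — exactly the property the patent relies on before taking curvature.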
[0037]
The feature extraction means 200 extracts feature parts from the low-frequency-component face three-dimensional data D200 produced by the high-frequency component removal means 100, based on the color/shape patterns D300 held in the color-luminance/shape pattern storage means 300: the colors of the mouth, nose, and eyes, their relative positions, and differential-geometric information such as coordinate magnitudes, extremal points, and curvature. The differential-geometric information in particular is insensitive to noise, since the high-frequency components have already been removed by the processing above. More concrete processing is described in the embodiments of the third invention for the mouth, the fourth invention for the nose, and the fifth invention for the eyes.
[0038]
The color-luminance/shape pattern storage means 300 not only pre-stores the color/shape patterns D300 for the mouth, nose, and eyes — their colors, relative positions, and differential-geometric information such as coordinate magnitudes, extremal points, and curvature — but also receives and stores the positions of features found earlier. This helps narrow the search area. For example, once the mouth and nose have been found, the eyes must lie above the mouth and to the left and right of the nose, so the search range shrinks sharply, reducing both extraction errors and extraction time.
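The search-range reduction above can be sketched as follows. The window widths, margins, and the assumption that V increases downward are illustrative choices, not values from the patent:

```python
def eye_search_regions(nose_u, nose_v, face_width):
    """Given a found nose tip at (nose_u, nose_v), return two small
    (U, V) windows in which to look for the eyes.

    Assumes V grows downward (image convention), so "above the nose"
    means smaller V.  Window sizes are hypothetical fractions of the
    face width, chosen only to illustrate how prior detections shrink
    the search space.
    """
    w = face_width // 4                        # assumed eye-window scale
    band = (nose_v - 3 * w, nose_v - w)        # V band above the nose tip
    left = {"u": (nose_u - 2 * w, nose_u - w // 2), "v": band}
    right = {"u": (nose_u + w // 2, nose_u + 2 * w), "v": band}
    return left, right

left, right = eye_search_regions(nose_u=120, nose_v=200, face_width=160)
```

Instead of scanning the whole (U, V) grid for eye-colored regions, only these two windows are scanned, which is how storing D500 cuts both errors and time.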
[0039]
The second invention removes the high-frequency component removal means 100 from the first invention. This configuration is effective when the three-dimensional data D100 is obtained by a high-precision acquisition device and the influence of noise is negligible.
[0040]
The embodiment of the third invention specializes the first or second embodiment to the extraction of the mouth. The mouth can be extracted, for example, as follows.
[0041]
Given data in which, as in FIG. 2 above, the face is parallel to the XY plane and facing front and the mouth is closed, the nose tip is the point of minimum Z coordinate. When the high-frequency component removal means 100 removes the high-frequency components from the original face three-dimensional data D100 along the line through this point parallel to the V axis, the VZ cross-section looks like FIG. 3. As the figure shows, the first local maximum below the nose tip is the base of the nose, and the next local maximum is where the closed upper and lower lips meet. This gives the V position of the mouth; finding the red region in its neighborhood then locates the upper and lower lips.
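The profile-based mouth localization can be sketched as below. The toy profile and the helper name are hypothetical; the Z convention (nose tip = global minimum, recesses = local maxima) follows the text:

```python
def lip_line_index(profile, nose_tip):
    """Locate the mouth along the vertical (V) profile through the nose tip.

    `profile` holds low-pass-filtered Z values along the V axis.
    Scanning downward from the nose-tip index, the first local maximum
    is the base of the nose and the second is the line where the closed
    lips meet, per the description above.
    """
    maxima = [i for i in range(nose_tip + 1, len(profile) - 1)
              if profile[i - 1] < profile[i] > profile[i + 1]]
    return maxima[1] if len(maxima) > 1 else None

# Toy profile: nose tip at index 2, nose base bump at 4, lip line at 7.
z = [3.0, 1.5, 0.5, 2.0, 2.5, 1.8, 2.2, 2.8, 1.9, 2.4]
mouth_v = lip_line_index(z, nose_tip=2)
```

With the V position of the lip line in hand, the color check for red lips only needs to run in a small neighborhood of `mouth_v`.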
[0042]
The embodiment of the fourth invention specializes the first or second embodiment to the extraction of the nose. Since color-luminance information is of little use for the nose, only differential-geometric shape information is used. Assuming the same data as in the embodiment of the third invention, the places where curvature grows large on both sides of the nose tip along the U coordinate correspond to the ridge of the nose (see FIG. 4). Tracking the point of maximum curvature near this U coordinate upward and downward in V then yields candidates for the nose ridge (see FIG. 5). The upper end of the ridge is set where the Z coordinate of the maximum-curvature point reaches a local minimum (see FIG. 6); this corresponds to the protrusion just above the hollow beneath the eyes, which matches intuition. For the lower end, using as a clue the V coordinate of the local maximum just below the nose tip found during mouth extraction, one searches for the point where the curvature at the maximum-curvature point changes sharply.
The embodiment of the fifth invention specializes the first or second embodiment to the extraction of the eyes. Once the positions of the nose and mouth are known, the eye region can be narrowed considerably, and within the narrowed region the eyes can be extracted stably from color alone.
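A minimal stand-in for the curvature computation on one UZ cross-section, using a second central difference (the synthetic profile, function name, and cutoff-free setup are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def curvature_proxy(profile):
    """Second difference of a Z profile, a simple stand-in for the
    curvature used to trace the nose ridge.  On low-pass-filtered data
    this differential operation is far less sensitive to sensor noise."""
    return np.gradient(np.gradient(profile))

# Toy UZ cross-section through the nose tip: a sharp protrusion (the
# nose, a Z minimum in the text's convention) on a gently curved cheek.
u = np.linspace(-1, 1, 41)
z = 0.2 * u**2 - 0.5 * np.exp(-(u / 0.15) ** 2)

k = curvature_proxy(z)
ridge = int(np.argmax(k))   # strongest convexity marks the ridge point
```

Repeating this per V row and chaining the per-row `ridge` indices is the tracking step of FIG. 5; the end points of the chain are then decided from the Z minimum and the curvature break described above.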
[0043]
The embodiments of the sixth to tenth inventions can be implemented by apparatuses equipped with the face feature extraction methods of the first to fifth inventions, respectively.
[0044]
The embodiment of the eleventh invention is implemented by composing the procedures of the first to fifth inventions as a program executable on a computer, storing the program on a computer-readable information storage medium, and executing it on the computer.
[0045]
[Effects of the Invention]
The effect of the first to eleventh aspects of the invention is to reduce the operator's workload when extracting feature parts from three-dimensional face data. The reason is that the user need not specify part positions directly on the three-dimensional coordinate data; the desired feature parts are extracted automatically using the color-luminance and shape patterns of the face.
[0046]
The effect of the first, third, fourth, fifth, sixth, eighth, ninth, tenth, and eleventh aspects is to reduce the influence of noise added to the shape when extracting feature parts from three-dimensional face data. The reason is that the high-frequency component removal means, or an apparatus or program equipped with it, reduces high-frequency noise.
[0047]
The effect of the first to eleventh aspects is to reduce the influence of variations in illumination and pose when extracting feature parts from three-dimensional face data. The reason is that three-dimensional shape information is used.
[0048]
The effect of the first, fourth, sixth, ninth, and eleventh aspects is that the nose, for which color features cannot be expected to help, can be extracted. The reason is that three-dimensional shape information is used.
[Brief Description of the Drawings]
FIG. 1 is a block diagram showing the configuration of an embodiment of the present invention.
FIG. 2 is an explanatory diagram of an example coordinate system of three-dimensional face data and the coordinate system of its storage format.
FIG. 3 is an explanatory diagram of the VZ cross-section through the nose tip of three-dimensional face data from which high-frequency components have been removed.
FIG. 4 is an explanatory diagram of the UZ cross-section through the nose tip of three-dimensional face data from which high-frequency components have been removed.
FIG. 5 is an explanatory diagram of the search for nose-ridge candidates by tracing points of large curvature.
FIG. 6 is an explanatory diagram of the method for determining the upper end of the nose ridge.
[Explanation of Symbols]
100  High-frequency component removal means
200  Feature extraction means
300  Color-luminance/shape pattern storage means
D100  Original face three-dimensional data
D200  Low-frequency-component face three-dimensional data
D300  Color/shape patterns
D400  Facial feature parts
D500  Facial feature part position data

Claims (11)

形状情報と色輝度情報が保存された顔三次元データから口、鼻、目を含む顔特徴部位を抽出する方法であって、
顔三次元データの形状情報から高周波成分を除去する第1の手順と、
顔特徴部位の形状パタン、色パタン、探索範囲の顔情報の格納を行う第2の手順と、
前記第2の手順による顔情報を用いて前記第1の手順で出力された低周波成分顔三次元データより顔特徴を抽出する第3の手順と、
を備えることを特徴とする顔特徴抽出方法。
A method for extracting a facial feature portion including a mouth, a nose, and eyes from face three-dimensional data in which shape information and color luminance information are stored,
A first procedure for removing high frequency components from the shape information of the face three-dimensional data;
A second procedure for storing a shape pattern, a color pattern, and face information of a search range of a face characteristic portion;
A third procedure of extracting face features from the low-frequency component face three-dimensional data output in the first procedure using the face information in the second procedure;
A facial feature extraction method comprising:
形状情報と色輝度情報が保存された顔三次元データから口、鼻、目を含む顔特徴部位を抽出する方法であって、
顔特徴部位の形状パタン、色パタン、探索範囲からなる顔情報の格納を行う第1の手順と、
前記顔三次元データと前記第1の手順による顔情報とから顔特徴を抽出する第2の手順と、
を備えることを特徴とする顔特徴抽出方法。
A method for extracting a facial feature portion including a mouth, a nose, and eyes from face three-dimensional data in which shape information and color luminance information are stored,
A first procedure for storing face information including a shape pattern, a color pattern, and a search range of a face characteristic portion;
A second procedure for extracting a face feature from the face three-dimensional data and the face information according to the first procedure;
A facial feature extraction method comprising:
請求項1もしくは2記載の顔特徴抽出方法において、抽出特徴部位として口を抽出することを特徴とする顔特徴抽出方法。3. The face feature extracting method according to claim 1, wherein a mouth is extracted as an extracted feature portion. 請求項1もしくは2記載の顔特徴抽出方法において、抽出特徴部位として鼻を抽出することを特徴とする顔特徴抽出方法。3. The method according to claim 1, wherein a nose is extracted as an extracted characteristic part. 請求項1もしくは2記載の顔特徴抽出方法において、抽出特徴部位として目を抽出することを特徴とする顔特徴抽出方法。3. The face feature extracting method according to claim 1, wherein eyes are extracted as extracted feature portions. 形状情報と色輝度情報が保存された顔三次元データから口、鼻、目を含む顔特徴部位を抽出する装置であって、
顔三次元データの形状情報から高周波成分を除去する高周波成分除去手段と、顔特徴部位の形状パタン、色パタン、探索範囲の顔情報の格納を行う色輝度・形状パタン蓄積手段と、
前記顔情報を用いて前記高周波成分除去手順で出力された低周波成分顔三次元データより顔特徴を抽出する特徴抽出手段と、
を備えることを特徴とする顔特徴抽出装置。
An apparatus for extracting a facial feature portion including a mouth, a nose, and eyes from face three-dimensional data in which shape information and color luminance information are stored,
A high-frequency component removing means for removing high-frequency components from the shape information of the face three-dimensional data, a color luminance / shape pattern accumulating means for storing a face pattern, a color pattern, and face information of a search range;
Feature extraction means for extracting a face feature from the low-frequency component face three-dimensional data output in the high-frequency component removal procedure using the face information,
A facial feature extraction device comprising:
An apparatus for extracting a facial feature portion including a mouth, a nose, and eyes from face three-dimensional data in which shape information and color luminance information are stored,
A color luminance / shape pattern storage means for storing face information including a shape pattern, a color pattern, and a search range of a face characteristic portion;
Feature extraction means for extracting a face feature from the face three-dimensional data and the face information;
A facial feature extraction device comprising:
The face feature extraction device according to claim 6 or 7, wherein a mouth is extracted as the extracted feature portion.
The face feature extraction device according to claim 6 or 7, wherein a nose is extracted as the extracted feature portion.
The face feature extraction device according to claim 6 or 7, wherein eyes are extracted as the extracted feature portion.
An information storage medium storing computer-readable software, characterized in that a program for causing the computer to execute the face feature extraction method according to claim 1, 2, 3, 4, or 5 is stored therein.
JP2002180080A 2002-06-20 2002-06-20 Face feature extraction method, apparatus and information storage medium Withdrawn JP2004021924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2002180080A JP2004021924A (en) 2002-06-20 2002-06-20 Face feature extraction method, apparatus and information storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2002180080A JP2004021924A (en) 2002-06-20 2002-06-20 Face feature extraction method, apparatus and information storage medium

Publications (1)

Publication Number Publication Date
JP2004021924A true JP2004021924A (en) 2004-01-22

Family

ID=31177312

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2002180080A Withdrawn JP2004021924A (en) 2002-06-20 2002-06-20 Face feature extraction method, apparatus and information storage medium

Country Status (1)

Country Link
JP (1) JP2004021924A (en)


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1808810A1 (en) * 2004-11-04 2007-07-18 NEC Corporation 3d shape estimation system and image generation system
EP1808810A4 (en) * 2004-11-04 2013-07-24 Nec Corp 3d shape estimation system and image generation system
US8374434B2 (en) 2005-12-26 2013-02-12 Nec Corporation Feature quantity calculation using sub-information as a feature extraction filter
WO2007074600A1 (en) * 2005-12-26 2007-07-05 Nec Corporation Feature extraction device, feature extraction method, and feature extraction program
JP2009021862A (en) * 2007-07-12 2009-01-29 Fujifilm Corp Imaging apparatus, and imaging control method
US11127169B2 (en) 2016-06-14 2021-09-21 Panasonic Intellectual Property Corporation Of America Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
WO2017217191A1 (en) * 2016-06-14 2017-12-21 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data coding method, three-dimensional data decoding method, three-dimensional data coding device, and three-dimensional data decoding device
CN109313820B (en) * 2016-06-14 2023-07-04 松下电器(美国)知识产权公司 Three-dimensional data encoding method, decoding method, encoding device, and decoding device
CN109313820A (en) * 2016-06-14 2019-02-05 松下电器(美国)知识产权公司 Three-dimensional data coding method, coding/decoding method, code device, decoding apparatus
JPWO2017217191A1 (en) * 2016-06-14 2019-04-04 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America 3D data encoding method, 3D data decoding method, 3D data encoding device, and 3D data decoding device
US20190108656A1 (en) 2016-06-14 2019-04-11 Panasonic Intellectual Property Corporation Of America Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
US11593970B2 (en) 2016-06-14 2023-02-28 Panasonic Intellectual Property Corporation Of America Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device
CN109478338A (en) * 2016-07-19 2019-03-15 松下电器(美国)知识产权公司 Three-dimensional data production method, sending method, producing device, sending device
US10810786B2 (en) 2016-07-19 2020-10-20 Panasonic Intellectual Property Corporation Of America Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device
JPWO2018016168A1 (en) * 2016-07-19 2019-05-09 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device
WO2018016168A1 (en) * 2016-07-19 2018-01-25 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Three-dimensional data generation method, three-dimensional data transmission method, three-dimensional data generation device, and three-dimensional data transmission device
US11710271B2 (en) 2016-07-19 2023-07-25 Panasonic Intellectual Property Corporation Of America Three-dimensional data creation method, three-dimensional data transmission method, three-dimensional data creation device, and three-dimensional data transmission device

Similar Documents

Publication Publication Date Title
US9898651B2 (en) Upper-body skeleton extraction from depth maps
KR100682889B1 (en) Method and Apparatus for image-based photorealistic 3D face modeling
CN110045823B (en) Motion guidance method and device based on motion capture
JP5201096B2 (en) Interactive operation device
WO2012046392A1 (en) Posture estimation device and posture estimation method
US20170345147A1 (en) Tooth axis estimation program, tooth axis estimation device and method of the same, tooth profile data creation program, tooth profile data creation device and method of the same
JP7015152B2 (en) Processing equipment, methods and programs related to key point data
JP2001101429A (en) Method and device for observing face, and recording medium for face observing processing
JP2007272435A (en) Face feature extraction device and face feature extraction method
US9239962B2 (en) Nail region detection method, program, storage medium, and nail region detection device
US10987198B2 (en) Image simulation method for orthodontics and image simulation device thereof
US9569850B2 (en) System and method for automatically determining pose of a shape
KR20200097572A (en) Training data generation method and pose determination method for grasping object
JP2008059283A (en) Operation detection device and program therefor
WO2020134925A1 (en) Illumination detection method and apparatus for facial image, and device and storage medium
JP2014215735A (en) Nail image synthesizing device, nail image synthesizing method, and nail image synthesizing program
Sharma et al. Image recognition system using geometric matching and contour detection
JP2004021924A (en) Face feature extraction method, apparatus and information storage medium
KR101541421B1 (en) Method and System for providing user interaction interface using hand posture recognition
Chaudhary et al. A vision-based method to find fingertips in a closed hand
CN110310336B (en) Touch projection system and image processing method
JP2012003724A (en) Three-dimensional fingertip position detection method, three-dimensional fingertip position detector and program
JP3440644B2 (en) Hand motion recognition device
KR20000060745A A Real time face tracking technique using face's color model and ellipsoid approximation model
KR101627962B1 (en) Method and apparatus for analyzing fine scale wrinkle of skin image

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20050906