JP2004252511A - Method for estimating facial direction - Google Patents

Method for estimating facial direction Download PDF

Info

Publication number
JP2004252511A
Authority
JP
Japan
Prior art keywords
face
facial
feature point
organ feature
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2003039083A
Other languages
Japanese (ja)
Inventor
Masato Kazui
Yoshiki Kobayashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP2003039083A priority Critical patent/JP2004252511A/en
Publication of JP2004252511A publication Critical patent/JP2004252511A/en
Withdrawn legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a method for estimating the facial direction by detecting facial organs such as the eyes or nostrils in a manner that copes with changes in the posture of the face.

SOLUTION: The method comprises: a signal input unit 101 for facial image data; a color space conversion unit 102 for converting the input RGB signal into a signal having the three elements of hue, saturation, and brightness; a facial area extraction unit 103 for extracting only the signals whose hue and saturation are close to skin color and extracting, as the facial area, the rectangular area where those signals are most concentrated; a facial organ feature point candidate detection unit 104 for detecting candidates for facial organ feature points; a facial organ feature point candidate combination unit 105 for selecting six points from the candidates; an invariant search unit 106 for calculating a projective invariant from the points selected by the combination unit and comparing it with the projective invariant calculated from a reference facial image to find the true facial organ feature points; and a facial direction estimation unit 107 for estimating the facial direction from the search result.

COPYRIGHT: (C)2004, JPO&NCIPI

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a method and an apparatus for detecting facial organ feature points from an image containing a face and estimating the face direction, and to systems equipped with face authentication, drowsy-driving prevention, or gaze input interfaces.
[0002]
[Prior art]
Many methods detect facial organs such as the pupils and nostrils in order to locate facial features. Proposed approaches include a method that applies the Hough transform to pupil detection (see, for example, Non-Patent Document 1), a method that uses a separability filter (see, for example, Non-Patent Document 2), and a method that dynamically fits a mesh model storing the standard positions of facial organs to the face image (see, for example, Non-Patent Document 3).
[0003]
[Non-patent document 1]
Tsuyoshi Kawaguchi, Mohamed Rizon, and Daisuke Hidaka, "Detection of both eyes from human faces by the Hough transform and separability filter," IEICE Transactions (D-II), Vol. J84-D-II, No. 10, pp. 2190-2200, Oct. 2001
[Non-patent document 2]
Kazuhiro Fukui and Osamu Yamaguchi, "Facial feature point extraction by combining shape extraction and pattern matching," IEICE Transactions (D-II), Vol. J80-D-II, No. 8, pp. 2170-2177, Aug. 1997
[Non-Patent Document 3]
Koji Sogo, "Personal authentication technology based on facial features," O plus E, Vol. 24, No. 8, pp. 861-865, 2002
[0004]
[Problems to be solved by the invention]
Among the above methods, those that detect the circular shape of the pupils or nostrils can detect facial organ feature points relatively stably. However, when the posture of the face changes, the pupils and nostrils may become hidden from view. Fitting a mesh model also allows stable localization of the facial organs, but when the posture change is large and exceeds the allowable deformation range of the mesh model, localization may fail; in that case it becomes difficult to estimate the face direction.
[0005]
In surveillance systems and the like, cameras are often installed on ceilings or walls, and facial organ detection is expected to become harder as the depression angle increases. Facial organ detection can also be used not only as preprocessing for face authentication but also in human interfaces that take the face direction or gaze direction as input. For such practical applications, it is essential to detect the facial organs stably under changes in posture.
[0006]
An object of the present invention is to solve the above problems by providing facial organ detection and face direction estimation that cope with changes in the posture of the face.
[0007]
[Means for Solving the Problems]
In order to achieve the above object, the present invention detects facial organ feature point candidates with a corner detection filter and identifies six points among the candidates: the outer corners of the eyes, the inner corners of the eyes, and the corners of the mouth. Hereinafter, these six points are referred to as facial organ feature points. Since a corner detection filter normally also detects feature points other than facial organs, the facial organ feature points must be identified from among those candidate points. The present invention identifies them using a projective invariant.
[0008]
Specifically, it is assumed that the facial organ feature points at the outer eye corners, inner eye corners, and mouth corners lie approximately on a single plane and that their positions do not change greatly with facial expression. Under this assumption, a projective invariant can be calculated from five points on the plane. If the facial organ feature points are designated in advance by mouse operation and the projective invariant is precomputed from them, a combination of points having the precomputed invariant can be selected from the output points of the corner detection filter. Because a projective invariant is unchanged under projective transformations, no ad hoc parameters are needed to cope with changes in facial posture, which simplifies the processing.
[0009]
This method makes it possible to detect the facial organs without using feature points such as the pupils and nostrils, which are difficult to detect for some face orientations, and the face direction can then be estimated from the relative displacement between the detected facial organs and the facial organs of a frontal face.
[0010]
BEST MODE FOR CARRYING OUT THE INVENTION
One embodiment of the present invention will be described below.
[0011]
FIG. 1 is an overall configuration diagram of the present embodiment. A signal input unit 101 receives a signal from a CCD camera, an infrared camera, or an ultraviolet camera. The color space conversion unit 102 converts the input RGB image signal into a signal having the three elements of hue, saturation, and brightness. If the input signal is monochrome, the color space conversion at 102 is skipped and processing proceeds to the next step. At 103, only the signals whose hue and saturation are close to skin color are extracted, and the rectangular area where the extracted signals are most concentrated is extracted as the face area; a sketch of this step follows.
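As an illustration, here is a minimal sketch of the skin-color-based face area extraction in steps 102-103, using OpenCV and NumPy. The HSV threshold values and the use of connected components are assumptions for illustration; the patent does not specify them.

```python
import cv2
import numpy as np

def extract_face_region(bgr_image):
    """Sketch of steps 102-103: convert to hue/saturation/value, keep
    skin-colored pixels, and return the bounding box of the region where
    they are most concentrated."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assumed skin-tone range in (H, S, V); tune per camera and lighting.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # The largest connected skin-colored component approximates the face area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None  # no skin-colored region found
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    x = int(stats[largest, cv2.CC_STAT_LEFT])
    y = int(stats[largest, cv2.CC_STAT_TOP])
    w = int(stats[largest, cv2.CC_STAT_WIDTH])
    h = int(stats[largest, cv2.CC_STAT_HEIGHT])
    return x, y, w, h  # rectangular face area
```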
[0012]
Next, at 104, candidates for the facial organ feature points are detected with a feature point detection filter (for example, Kazuhiro Fukui and Osamu Yamaguchi, "Facial feature point extraction by combining shape extraction and pattern matching," IEICE Transactions (D-II), Vol. J80-D-II, No. 8, pp. 2170-2177, Aug. 1997). At 105, six points are selected from the facial organ feature point candidates: the outer eye corners, the inner eye corners, and the mouth corners, indicated by the white circles in FIG. 2. A sketch of the candidate detection step is given below.
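A minimal sketch of the candidate detection at 104. Shi-Tomasi corner detection via OpenCV is an assumed stand-in for the separability-based feature point filter cited above, not the patent's own filter.

```python
import cv2
import numpy as np

def detect_feature_point_candidates(gray_face, max_points=50):
    """Step 104 sketch: return corner-like points inside the face area as
    candidates for the facial organ feature points (eye and mouth corners)."""
    corners = cv2.goodFeaturesToTrack(
        gray_face, maxCorners=max_points, qualityLevel=0.01, minDistance=5)
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2)  # (N, 2) array of (x, y) candidates
```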
[0013]
Reference numeral 201 in FIG. 2 shows the facial organ feature points used in the present invention. The invariant search unit 106 then calculates a projective invariant using five of the six points. This calculation is performed as follows.
[0014]
When five points (A1, ..., A5) on a plane are given, the projective invariant is calculated by the following equation (see, for example, Jun Sato, "Computer Vision: Geometry of Vision," Corona Publishing, Tokyo, 1999).
[0015]
(Equation 1) [equation appears only as an image in the source]
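For reference, a standard form of the projective invariants of five coplanar points from the computer vision literature (for example, Sato's textbook cited above) is sketched below in LaTeX; whether the patent's Equation (1) is exactly this form is an assumption, since the original equation survives only as an image.

```latex
% Five coplanar points A_1, ..., A_5 in homogeneous coordinates;
% |A_i A_j A_k| is the determinant of the 3x3 matrix [A_i A_j A_k].
\[
  I_1 = \frac{|A_4 A_3 A_1|\,|A_5 A_2 A_1|}{|A_4 A_2 A_1|\,|A_5 A_3 A_1|},
  \qquad
  I_2 = \frac{|A_4 A_2 A_1|\,|A_5 A_3 A_2|}{|A_4 A_3 A_2|\,|A_5 A_2 A_1|}.
\]
% Each point index appears equally often in numerator and denominator,
% so the unknown projective scale factors and det(H) cancel under any
% projective transformation H.
```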
[0016]
Equation (1) is order dependent: its value changes if the ordering of the points Ai (i = 1, ..., 5) changes. In practice the order in which feature points are extracted is not guaranteed to be the same, so an invariant that does not depend on the arrangement of the points is required. Here a P2 (projective and permutation) invariant is used for this purpose (for example, R. Lenz and P. Meer, "Experimental investigation of projection and permutation invariants," Pattern Recognition Letters, Vol. 15, pp. 751-760, 1994). It can be calculated from Equations (1) and (2) as follows.
[0017]
(Equation 2) [equation appears only as an image in the source]
[0018]
(Equation 3) [equation appears only as an image in the source]
[0019]
(Equation 4) [equation appears only as an image in the source]
[0020]
(Equation 5) [equation appears only as an image in the source]
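Equations (2) through (5) likewise survive only as images. As a hedged illustration of how order dependence is removed, the classical construction for the four-point cross ratio is shown below; the five-point P2 invariant of Lenz and Meer follows the same idea of taking a symmetric function over the permutation orbit of an order-dependent invariant.

```latex
% Under permutations of the points, a cross ratio \lambda takes the six
% values \lambda, 1-\lambda, 1/\lambda, 1/(1-\lambda),
% (\lambda-1)/\lambda, \lambda/(\lambda-1).
% Any symmetric function of these six values is permutation invariant, e.g.
\[
  J(\lambda) \;=\; \frac{(\lambda^{2} - \lambda + 1)^{3}}
                        {\lambda^{2}\,(\lambda - 1)^{2}},
\]
% which evaluates to the same number for all six orderings, so matching
% need not know the order in which the feature points were extracted.
```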
[0021]
Equation (3) gives an invariant that does not depend on the order of the feature points. In the invariant search unit 106, the projective invariant calculated by Equation (3) is compared with the value of Equation (3) calculated from the facial organ feature points of a reference face image (for example, an average face obtained by superimposing a plurality of faces). The selection of facial organ feature point candidate combinations at 105 and the invariant calculation at 106 are repeated until the comparison measure, defined as the difference between the two values, is minimized. The feature point configuration whose invariant differs least from that of the reference face image is ultimately detected; this is the final result of facial organ feature point detection. At 107, the direction and degree to which the face in the input image is turned are calculated from the displacement between the facial organ feature point configuration of the reference face image and the configuration detected from the input image.
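A minimal sketch of the search in units 105-106. The determinant-ratio invariant I1 shown earlier is an assumed stand-in for the patent's Equation (3), and point ordering within each combination is fixed for simplicity, whereas the P2 invariant removes that dependence; the brute-force scan over combinations mirrors the repeated selection and comparison described above.

```python
from itertools import combinations
import numpy as np

def det3(p, q, r):
    """Determinant of three points stacked as rows (homogeneous coordinates)."""
    return float(np.linalg.det(np.stack([p, q, r])))

def projective_invariant(points):
    """Order-dependent five-point invariant I1 (stand-in for Equation (3))."""
    a1, a2, a3, a4, a5 = [np.append(p, 1.0) for p in points]
    num = det3(a4, a3, a1) * det3(a5, a2, a1)
    den = det3(a4, a2, a1) * det3(a5, a3, a1)
    return num / den if den != 0.0 else float("inf")  # degenerate layout

def find_feature_points(candidates, reference_invariant):
    """Steps 105-106 sketch: pick the 6-point combination whose 5-point
    invariant is closest to the one precomputed from the reference face."""
    best, best_diff = None, float("inf")
    for six in combinations(candidates, 6):
        inv = projective_invariant(six[:5])  # five of the six points
        diff = abs(inv - reference_invariant)
        if diff < best_diff:
            best, best_diff = six, diff
    return best
```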
[0022]
Note that, depending on the application and the lighting environment, it may be difficult for the image input unit 101 to acquire a face image under natural light. Inside a vehicle, for example, the low western or morning sun may put the face in backlight, the face may fall in the shadow of a pillar, or the cabin may be dark during night driving. In such cases, a light irradiation unit 302 separate from natural light is provided as shown in FIG. 3, and shooting under natural light and shooting under irradiation are switched adaptively. Natural light, or the light emitted from the irradiation unit (electromagnetic waves such as infrared or ultraviolet light), strikes the face 303, and the reflected light is captured by the image capturing unit 304. The captured signal is sent to the signal input unit 305 and then to the color space conversion unit 102.
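The patent does not state the criterion for switching between natural-light and irradiated shooting; a minimal sketch, assuming a simple mean-brightness test with illustrative thresholds:

```python
import numpy as np

def choose_capture_mode(gray_frame, dark_threshold=40, bright_threshold=220):
    """Hypothetical switching rule for the FIG. 3 setup: fall back to the
    light irradiation unit 302 when the natural-light image is too dark
    or washed out by backlight. Thresholds are illustrative assumptions."""
    mean_brightness = float(np.mean(gray_frame))
    if mean_brightness < dark_threshold or mean_brightness > bright_threshold:
        return "irradiated"  # enable light irradiation unit 302
    return "natural"         # keep natural-light capture
```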
[0023]
According to the present invention, in a facial organ feature point detection method using a projective invariant, facial organ feature points can be detected stably from obliquely tilted face images, such as those captured by a camera installed at a depression angle or by a system in which the person and head move in front of the camera, without setting any ad hoc parameters relating to the face direction.
[0024]
[Effects of the Invention]
Facial organ detection and face direction estimation that cope with changes in facial posture can be realized.
[Brief description of the drawings]
FIG. 1 is an overall configuration diagram of an embodiment of the present invention.
FIG. 2 is a diagram showing the facial organ feature points used by the invariant calculation unit to calculate the invariant.
FIG. 3 is a diagram showing the configuration when a separate illumination light source is required in addition to natural light.
[Explanation of symbols]
101: signal input unit; 102: color space conversion unit; 103: face area extraction unit; 104: facial organ feature point candidate detection unit; 105: facial organ feature point candidate combination unit; 106: invariant calculation unit; 107: face direction estimation unit; 201: facial organ feature points used for invariant calculation; 301: natural light; 302: light irradiation unit; 303: face; 304: image capturing unit; 305: signal input unit.

Claims (3)

1. A face direction estimation method comprising:
inputting a face image;
detecting facial organ feature point candidates from the face image;
detecting facial organ feature points from among the detected candidates using a projective invariant; and
estimating the face direction from the positional displacement of corresponding facial organ feature points between the detected facial organ feature points and the facial organ feature points of a pre-registered frontal face.
2. The face direction estimation method according to claim 1, further comprising:
irradiating the face with light;
detecting the light reflected from the face as a result of the irradiation; and
converting the reflected light into face image data to input the face image.
3. The face direction estimation method according to claim 1, wherein a predetermined face image input method is switched based on the shooting environment of the face image.
JP2003039083A 2003-02-18 2003-02-18 Method for estimating facial direction Withdrawn JP2004252511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003039083A JP2004252511A (en) 2003-02-18 2003-02-18 Method for estimating facial direction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003039083A JP2004252511A (en) 2003-02-18 2003-02-18 Method for estimating facial direction

Publications (1)

Publication Number Publication Date
JP2004252511A true JP2004252511A (en) 2004-09-09

Family

ID=33023358

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003039083A Withdrawn JP2004252511A (en) 2003-02-18 2003-02-18 Method for estimating facial direction

Country Status (1)

Country Link
JP (1) JP2004252511A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100842258B1 (en) 2006-10-26 2008-06-30 한국전자통신연구원 Methdo for detecting forged face image and apparatus thereof
CN100454330C (en) * 2005-09-29 2009-01-21 株式会社东芝 Feature point detection apparatus and method
JP2014519665A (en) * 2011-06-10 2014-08-14 アマゾン・テクノロジーズ、インコーポレイテッド Improved facial recognition in video
US10733423B2 (en) 2016-11-15 2020-08-04 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
JP2022528128A (en) * 2019-04-12 2022-06-08 アークソフト コーポレイション リミテッド Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0883344A (en) * 1994-09-14 1996-03-26 Mitsubishi Electric Corp Picture processor and personal state judging device
JPH09305743A (en) * 1996-05-20 1997-11-28 Toshiba Corp Human face motion detecting system
JPH11283031A (en) * 1998-03-27 1999-10-15 Toshiba Corp Object recognition device and method therefor
JP2001109907A (en) * 1999-10-04 2001-04-20 Sharp Corp Three-dimensional model generation device, three- dimensional model generation method, and recording medium recording three-dimensional model generation program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0883344A (en) * 1994-09-14 1996-03-26 Mitsubishi Electric Corp Picture processor and personal state judging device
JPH09305743A (en) * 1996-05-20 1997-11-28 Toshiba Corp Human face motion detecting system
JPH11283031A (en) * 1998-03-27 1999-10-15 Toshiba Corp Object recognition device and method therefor
JP2001109907A (en) * 1999-10-04 2001-04-20 Sharp Corp Three-dimensional model generation device, three- dimensional model generation method, and recording medium recording three-dimensional model generation program

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jun Sato, "Action recognition from spatio-temporal invariants and arbitrary viewpoint videos," Image Labo, Vol. 13, No. 7, pp. 14-19, 1 July 2002, JP, JPN6009009023, ISSN: 0001260238 *
Jun Adachi et al., "Uncalibrated 3D visual servoing based on epipolar geometry and projective invariants," IEICE Transactions, Vol. J85-D-II, No. 9, pp. 1392-1400, 1 September 2002, JP, JPN6009009020, ISSN: 0001260239 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100454330C (en) * 2005-09-29 2009-01-21 株式会社东芝 Feature point detection apparatus and method
KR100842258B1 (en) 2006-10-26 2008-06-30 한국전자통신연구원 Methdo for detecting forged face image and apparatus thereof
JP2014519665A (en) * 2011-06-10 2014-08-14 アマゾン・テクノロジーズ、インコーポレイテッド Improved facial recognition in video
US9355301B2 (en) 2011-06-10 2016-05-31 Amazon Technologies, Inc. Enhanced face recognition in video
US10733423B2 (en) 2016-11-15 2020-08-04 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
JP2022528128A (en) * 2019-04-12 2022-06-08 アークソフト コーポレイション リミテッド Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic device and storage medium
JP7413400B2 (en) 2019-04-12 2024-01-15 アークソフト コーポレイション リミテッド Skin quality measurement method, skin quality classification method, skin quality measurement device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP5567853B2 (en) Image recognition apparatus and method
JP5445460B2 (en) Impersonation detection system, impersonation detection method, and impersonation detection program
JP6587435B2 (en) Image processing apparatus, information processing method, and program
JP3688879B2 (en) Image recognition apparatus, image recognition method, and recording medium therefor
JP5529660B2 (en) Pupil detection device and pupil detection method
KR20190001066A (en) Face verifying method and apparatus
RU2431190C2 (en) Facial prominence recognition method and device
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
JP2018508888A (en) System and method for performing fingerprint-based user authentication using an image captured using a mobile device
JP2007317062A (en) Person recognition apparatus and method
KR101937323B1 (en) System for generating signcription of wireless mobie communication
JPH10320562A (en) Detection system and detection method for face of person
JP5170094B2 (en) Spoofing detection system, spoofing detection method, and spoofing detection program
US11989975B2 (en) Iris authentication device, iris authentication method, and recording medium
US11756338B2 (en) Authentication device, authentication method, and recording medium
JP2010262601A (en) Pattern recognition system and pattern recognition method
Xu et al. Integrated approach of skin-color detection and depth information for hand and face localization
Tepelea et al. A vision module for visually impaired people by using Raspberry PI platform
JP2014186505A (en) Visual line detection device and imaging device
JP3577908B2 (en) Face image recognition system
JP2008123360A (en) Device, method, and program for extracting/determining human body specific area
JP2004252511A (en) Method for estimating facial direction
JP2004157778A (en) Nose position extraction method, program for operating it on computer, and nose position extraction device
US9082002B2 (en) Detection device and detection method
JP5995610B2 (en) Subject recognition device and control method therefor, imaging device, display device, and program

Legal Events

Date Code Title Description
A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621); effective date: 20060213
RD01 Notification of change of attorney (JAPANESE INTERMEDIATE CODE: A7421); effective date: 20060420
A977 Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007); effective date: 20090218
A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131); effective date: 20090303
A761 Written withdrawal of application (JAPANESE INTERMEDIATE CODE: A761); effective date: 20090415