JP2007271554A - Face attitude detection method - Google Patents

Face attitude detection method

Info

Publication number
JP2007271554A
Authority
JP
Japan
Prior art keywords
distance
face
subject
face image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2006100211A
Other languages
Japanese (ja)
Other versions
JP4431749B2 (en)
Inventor
Yoshinobu Ebisawa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shizuoka University NUC
Original Assignee
Shizuoka University NUC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shizuoka University NUC
Priority to JP2006100211A
Publication of JP2007271554A
Application granted
Publication of JP4431749B2
Legal status: Active, current
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To detect the direction of a face posture efficiently, without requiring a complex imaging system.
SOLUTION: This face posture detection method comprises: a distance derivation step of determining the distances between the parts of a first reference part group, which is a combination of three of the left and right pupils and the left and right nostrils of a subject A; a position detection step of generating a face image of the subject A with a single camera 2 and detecting, based on the face image, the two-dimensional positions of the first reference part group on the face image; and a posture derivation step of deriving the face posture of the subject A by calculating the normal direction of the plane containing the first reference part group, based on the distances determined in the distance derivation step and the two-dimensional positions detected in the position detection step.
COPYRIGHT: (C)2008,JPO&INPIT

Description

The present invention relates to a face posture detection method for detecting the face posture of an observed subject.

Equipping automobiles and the like with a function that detects the driver's face direction, in order to monitor inattentive driving, has been under consideration for some time. Such a function provides important information for safe driving. As one technique of this kind, a method has been disclosed in which the subject's face is imaged by two cameras, the three-dimensional positions of facial feature points are measured by the principle of triangulation, and the face direction is determined from the positional relationship of those feature points (see Non-Patent Document 1 below). A method has also been disclosed in which the coordinates of the subject's pupils and nostrils are computed using two cameras and the face direction is obtained from the computation results (see Patent Document 1).
[Patent Document 1] JP 2005-266868 A
[Non-Patent Document 1] Yoshio Matsumoto, Alexander Zelinsky, "An Algorithm for Real-time Stereo Vision Implementation of Head Pose and Gaze Direction Measurement", Proceedings of IEEE Fourth International Conference on Face and Gesture Recognition (FG'2000), pp. 499-505, 2000

However, in the conventional face direction detection methods described above, the detection range when the face direction is detected using two cameras is limited to a predetermined range (for example, ±40 degrees), so another pair of cameras is required if the detection range is to be extended. Providing multiple pairs of cameras, however, complicates the configuration of the detection system and may also increase its cost.

The present invention has been made in view of such problems, and an object thereof is to provide a face posture detection method capable of efficiently detecting the direction of a face posture without requiring a complicated imaging system.

To solve the above problem, the face posture detection method of the present invention comprises: a distance derivation step of determining the distances between the parts of a first reference part group, which is a combination of three of the left and right pupils and the left and right nostrils of a subject; a position detection step of generating a face image of the subject with a single imaging means and detecting, based on the face image, the two-dimensional positions of the first reference part group on the face image; and a posture derivation step of deriving the face posture of the subject by calculating the normal direction of the plane containing the first reference part group, based on the distances determined in the distance derivation step and the two-dimensional positions detected in the position detection step.

According to such a face posture detection method, the distances between three parts among the subject's left and right pupils and left and right nostrils are determined in advance, the two-dimensional positions of those three parts are detected from a face image captured by a single imaging means, and the face posture is identified by calculating the normal direction of the plane containing the three parts from the distances between them and their two-dimensional positions. Because the angular range over which a single imaging means can capture the feature points is wider than the range over which a pair of imaging means can capture them, efficient face posture detection becomes possible with a simple imaging system.

In the distance derivation step, the distances between the parts of the first reference part group are preferably determined by stereo measurement using two imaging means, and in the position detection step the two-dimensional positions of the first reference part group are preferably detected using one of the two imaging means. In this case, the distances between the feature points of each individual subject are easily obtained, and at the same time the range of detectable face postures can be extended more efficiently.

It is also preferable that, in the distance derivation step, the distances between the parts of a second reference part group, which is a combination of three of the left and right pupils and the left and right nostrils other than the first reference part group, are also determined; that, in the position detection step, the two-dimensional positions of the second reference part group on the face image are also detected based on the face image; and that, in the posture derivation step, the normal direction of the plane containing the second reference part group is further calculated based on the distances for the second reference part group determined in the distance derivation step and the two-dimensional positions for the second reference part group detected in the position detection step, and the face posture of the subject is derived using this normal direction together with the normal direction of the plane containing the first reference part group. In this way, two combinations of three of the left and right pupils and nostrils are selected, and the face posture is detected from the normal directions of the two planes containing the respective combinations, so the detection accuracy of the face posture is further improved.

Furthermore, in the position detection step, it is also preferable that a first face image is generated while illumination light is emitted toward the subject from a first light source attached to the imaging means, that a second face image is generated while illumination light is emitted toward the subject from a second light source arranged at a greater distance from the optical axis of the imaging means than the first light source, and that the two-dimensional positions of the pupils in the first reference part group are detected by taking the difference between the first face image and the second face image. With this configuration, the face image obtained by illuminating the subject with the first light source is an image in which the pupils appear bright (a bright-pupil image), and the face image obtained by illuminating the subject with the second light source is an image in which the pupils appear dark (a dark-pupil image); by taking the difference between the two images, the pupils can be detected with high robustness.

Still further, in the position detection step, it is also preferable that a first face image is generated while illumination light is emitted from a first light source toward the subject, that a second face image is generated while illumination light is emitted toward the subject from a second light source whose emission wavelength differs from that of the first light source, and that the two-dimensional positions of the pupils in the first reference part group are detected by taking the difference between the first face image and the second face image. By taking the difference between face images obtained by illuminating the subject with illumination light of different wavelength regions, the pupils can be detected with high robustness.

Furthermore, the first and second light sources are also preferably arranged at equal distances from the optical axis of the imaging means. In this case, the configuration of the light sources can be simplified and miniaturized while still producing a luminance difference in the pupil regions.

According to the face posture detection method of the present invention, the direction of a face posture can be detected efficiently without requiring a complicated imaging system.

Hereinafter, preferred embodiments of the face posture detection method according to the present invention will be described in detail with reference to the drawings. In the description of the drawings, identical or corresponding parts are denoted by the same reference numerals and redundant description is omitted.

(First Embodiment)
A first embodiment of the present invention will now be described. First, the configuration of an imaging system for carrying out the face posture detection method according to the present invention will be described with reference to FIG. 1.

FIG. 1 is a plan view showing the positional relationship between the imaging system and the subject. As shown in the figure, the imaging system 1 comprises a single camera (imaging means) 2 that captures face images of the subject A, a light source 3a provided near the imaging lens on the front surface 2a of the camera 2, and a light source 3b provided at a position away from the front surface 2a of the camera 2.

The camera 2 is not limited to any particular type as long as it is an imaging means capable of generating face images of the subject A, but a digital camera with a built-in image sensor such as a CCD or CMOS sensor is used here because it can process image data with high real-time performance. The subject A is positioned on the optical axis L1 of the imaging lens of the camera 2 during face posture detection.

The light source 3a is configured to emit illumination light containing a near-infrared component along the optical axis L1 of the camera 2, toward a range covering the subject A located on the optical axis L1. The light source 3b is fixed at a position farther from the optical axis L1 than the light source 3a and is likewise configured to emit illumination light containing a near-infrared component along the optical axis L1 toward a range covering the subject A. Alternatively, the illumination light emitted from the light sources 3a and 3b may have different wavelength components that produce a luminance difference in the pupil regions (for example, center wavelengths of 850 nm and 950 nm), with the light source 3b fixed at the same distance from the optical axis L1 as the light source 3a. In this case, the configuration of the light sources can be simplified and miniaturized while still producing a luminance difference in the pupil regions.

The camera 2 and the light sources 3a and 3b are preferably installed at a position lower than the subject A's face (for example, with the optical axis L1 inclined 20 to 30 degrees from the horizontal plane), in order to prevent reflected light from appearing in the face image when the subject A wears glasses and to make the subject A's nostrils easier to detect.

Next, a face posture detection method using the imaging system 1 described above will be explained.

First, the distances between each pair of the three parts consisting of the centers of the subject A's left and right pupils and the center of the left nostril (the first reference part group) are measured. The distances between each pair of the three parts consisting of the centers of the left and right pupils and the center of the right nostril (the second reference part group), and the distance between the centers of the left and right nostrils, are also measured (distance derivation step).

Next, the subject A is positioned on the optical axis L1 of the camera 2, and a face image of the subject A facing an arbitrary direction is captured. Based on the face image thus generated by the camera 2, the two-dimensional coordinates of the centers of the left and right pupils and of the centers of the left and right nostrils on the face image are detected (position detection step). The methods for detecting the pupil centers and the nostril centers are described in detail below.

(Detection of the pupil centers)
When capturing face images, the light sources 3a and 3b are lit alternately, and a face image is generated in synchronization with each lighting, yielding a bright-pupil image and a dark-pupil image. The bright-pupil image is obtained under illumination from the light source 3a, and the pupil regions in it are relatively bright. The dark-pupil image, in contrast, is obtained under illumination from the light source 3b, and the pupil regions in it are relatively dark. These two kinds of images arise because the intensity of the light reflected from the pupils differs between illumination from the two light sources 3a and 3b. For example, in the case of a camera that uses field scanning, the bright-pupil and dark-pupil images can be separated into the odd and even fields by lighting the light sources 3a and 3b in synchronization with the field signal of the camera 2. The difference between the bright-pupil image and the dark-pupil image is then taken, and the extent of the pupil region is determined. This difference processing makes highly robust pupil detection possible.

Thereafter, the contour of the detected pupil is identified, an ellipse approximating the contour is computed, and the center of the ellipse is taken as the pupil center position. Alternatively, the difference-processed image may be binarized and the pupil center position computed by the centroid method. In that case, if there is a moving object such as an eyelid in the image, regions other than the pupil may also appear bright, so the choice of the size of the image region used for computing the centroid becomes an issue. The pupil center position may therefore be computed using a separability filter, as described in JP 2005-348832 A: the center coordinates that maximize the separability are determined using a nearly circular pattern.
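
To make the difference processing concrete, the following is a minimal Python sketch of pupil-center detection from a bright-pupil/dark-pupil pair, assuming 8-bit grayscale frames; the function name and the threshold value are illustrative assumptions, not part of the patent.

```python
import cv2
import numpy as np

def detect_pupil_center(bright, dark, diff_threshold=30):
    """Estimate a pupil center from a bright-pupil / dark-pupil image pair.

    bright, dark: 8-bit grayscale frames taken under the on-axis and
    off-axis light sources. The threshold value is an assumption.
    """
    # Difference image: the pupil stands out because only its brightness
    # changes between the two illumination conditions.
    diff = cv2.subtract(bright, dark)

    # Binarize and keep the largest connected component as the pupil.
    _, binary = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if n < 2:
        return None  # no pupil-like region found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])

    # Centroid method described in the text; fitting an ellipse to the
    # region contour is the alternative mentioned there.
    return tuple(centroids[largest])
```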

(Detection of the nostril centers)
The two-dimensional coordinates of the centers of the left and right nostrils are detected with reference to the bright-pupil image or the dark-pupil image. First, the midpoint of the centers of the left and right pupils is determined, and below it a large window is set whose center approximately coincides with the nostril position on the assumption that the subject A is facing the front; the nostrils are detected within this large window. Within the large window, the darkest 0.8% of pixels are detected by the P-tile method, and the window is converted into a binary image of HIGH and LOW pixels. Dilation and erosion (morphological processing) are then applied repeatedly to the binary image to clarify its regions, after which labeling is performed and the two largest regions are selected; for each region, the center, aspect ratio, and area of the rectangle formed by its top, bottom, left, and right end points are computed. Here, dilation is the operation of converting a target pixel into a HIGH pixel when even one of its eight neighboring pixels in the binary image is a HIGH pixel, and erosion is the operation of converting a target pixel into a LOW pixel when even one of its eight neighboring pixels is a LOW pixel. If the aspect ratio is smaller than 0.5 or larger than 0.7, and the area is smaller than 100 pixels or larger than 300 pixels for a total image size of 640 × 240 pixels, the region is judged not to be a nostril image. Otherwise, a small window of 30 × 30 pixels is set around the center of the rectangle, and the darkest 5% of pixels within the small window of the original image are extracted by the P-tile method. The morphological processing and labeling described above are then repeated to find the region of maximum area. If the area of this region is 130 pixels or more, or 70 pixels or less, it is judged not to be a nostril image; otherwise, it is judged to be a nostril image, and the center of the rectangle formed by the top, bottom, left, and right end points of the region is taken as the nostril center. When two nostril centers are detected in this way, the left-right correspondence of the nostrils is determined from the magnitudes of their coordinate values.

As described above, performing nostril detection with a large window and a small window makes it possible to apply an optimal threshold for detecting each of the two nostrils, which may be imaged under different conditions, so the nostrils can be detected reliably.
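
As an illustration of the P-tile binarization and the morphological processing described above, the following is a minimal Python sketch; the helper names, kernel size, and iteration count are assumptions, and the area and aspect-ratio filtering would be applied to the labeled regions afterwards as described in the text.

```python
import cv2
import numpy as np

def p_tile_binarize(gray, percent):
    """Binarize so that the darkest `percent` of pixels become HIGH,
    mirroring the P-tile method applied to the large and small windows."""
    thresh = np.percentile(gray, percent)
    return (gray <= thresh).astype(np.uint8) * 255

def clean_regions(binary, iterations=2):
    """Apply dilation and erosion repeatedly (the morphological
    processing in the text) to clarify candidate nostril regions."""
    kernel = np.ones((3, 3), np.uint8)
    out = binary
    for _ in range(iterations):
        out = cv2.dilate(out, kernel)
        out = cv2.erode(out, kernel)
    return out

# Example: darkest 0.8 % of pixels inside a large search window `win`.
# binary = clean_regions(p_tile_binarize(win, 0.8))
# n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
# ... then filter the labeled regions by area and aspect ratio.
```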

If only one of the left and right nostrils can be detected during nostril detection, the position of the other nostril center is estimated using the distances measured in the distance derivation step together with the detected positions of the left and right pupil centers and of the one nostril center. Let the coordinates of the right pupil center be (x_RP, y_RP), those of the left pupil center (x_LP, y_LP), those of the right nostril center (x_RN, y_RN), and those of the left nostril center (x_LN, y_LN); let the measured distance between the left and right pupil centers be D_P0 and the measured distance between the left and right nostril centers be D_N0; and suppose the left nostril could not be detected. The slope I_P of the line through the left and right pupils in the face image and the distance D_P between the pupil centers can then be written as

I_P = (y_RP − y_LP) / (x_RP − x_LP)   …(1)
D_P = {(x_RP − x_LP)^2 + (y_RP − y_LP)^2}^(1/2)   …(2)

Since the line connecting the left and right pupil centers and the line connecting the left and right nostril centers can be regarded as always parallel, the slope between the left and right nostrils is I_N = I_P. The distance D_N between the nostril centers on the face image is obtained from the measured pupil spacing D_P0, the measured nostril spacing D_N0, and the pupil-center distance D_P by

D_N = (D_N0 / D_P0) × D_P   …(3)

Since the nostril-center distance D_N is expressed as

D_N = {(x_RN − x_LN)^2 + (y_RN − y_LN)^2}^(1/2)   …(4)

the coordinates (x_LN, y_LN) of the left nostril center can be obtained from

x_LN = x_RN − {D_N^2 / (1 + I_N^2)}^(1/2)   …(5)
y_LN = y_RN − {(D_N^2 · I_N^2) / (1 + I_N^2)}^(1/2)   …(6)

Conversely, when the right nostril center is not detected, it can be obtained from

x_RN = x_LN + {D_N^2 / (1 + I_N^2)}^(1/2)   …(7)
y_RN = y_LN + {(D_N^2 · I_N^2) / (1 + I_N^2)}^(1/2)   …(8)

(Detection of the face posture)
Based on the positions of the centers of the left and right pupils and of the centers of the left and right nostrils detected in the face image in the position detection step, and on the distances measured in the distance derivation step, the actual three-dimensional coordinates of the left and right pupil centers and of the left and right nostril centers are computed, and the posture of the subject A is derived (posture derivation step). The method for deriving the subject's posture is described in detail below.

FIG. 2 shows the positional relationship between the image plane PL and the subject A in a coordinate system whose origin is the principal point of the imaging lens of the camera 2. As shown in the figure, when the three-dimensional coordinate system (X, Y, Z) is set so that the Z axis coincides with the optical axis of the camera 2, the image plane PL of the two-dimensional coordinate system (x, y) can be regarded as a plane perpendicular to the Z axis at a distance from the origin O equal to the focal length f of the camera 2. Writing the four feature points of the subject A, namely the centers of the left and right pupils and of the left and right nostrils, as P_n = (X_n, Y_n, Z_n) (n = 1, 2, 3, 4), the distance L_ij (i, j = 1, 2, 3, 4) between feature points in three-dimensional space is given by

L_ij = {(X_i − X_j)^2 + (Y_i − Y_j)^2 + (Z_i − Z_j)^2}^(1/2)   …(9)

If the two-dimensional coordinates of the projection of such a feature point P_n onto the image plane PL are Q_n = (x_n, y_n) (n = 1, 2, 3, 4), then P_n can be regarded as lying on the extension of the line passing through the origin O and the projected point Q_n. The position vector p_n of P_n can therefore be expressed, using the unit vector u_n pointing along the line OQ_n and a scalar a_n, as

p_n = a_n · u_n   …(10)

The unit vector u_n is computed from the coordinate values (x_n, y_n) of the projected point Q_n as u_n = (x_n, y_n, f) / (x_n^2 + y_n^2 + f^2)^(1/2); since the coordinates of the projected point Q_n are known, the three-dimensional coordinates of P_n can be determined once a_n is found.

Here, the distances L_ij (i, j = 1, 2, 3) between the centers of the left and right pupils and the center of the left nostril (the first reference part group) among the four feature points P_n have been measured in advance, and the relation

|p_i − p_j| = L_ij   (i, j = 1, 2, 3)   …(11)

holds for the position vectors p_n of the feature points P_n. From equations (10) and (11), the following equations are derived:

a_1^2 + a_2^2 − 2a_1a_2(u_1, u_2) = L_12^2   …(12)
a_2^2 + a_3^2 − 2a_2a_3(u_2, u_3) = L_23^2   …(13)
a_3^2 + a_1^2 − 2a_3a_1(u_3, u_1) = L_31^2   …(14)

By solving equations (12) to (14) for the scalars a_1, a_2, a_3, the three-dimensional coordinates P_1, P_2, P_3 of the first reference part group can be computed.

Similarly, the three-dimensional coordinate P_4 is computed based on the measured distances L_ij (i, j = 1, 2, 4) between the centers of the left and right pupils and the center of the right nostril (the second reference part group) among the four feature points P_n. Using the three-dimensional coordinates P_1 (X_PR, Y_PR, Z_PR), P_2 (X_PL, Y_PL, Z_PL), P_3 (X_NL, Y_NL, Z_NL), and P_4 (X_NR, Y_NR, Z_NR) determined in this way, the normal vector n_L = (n_LX, n_LY, n_LZ) of the plane containing the first reference part group is derived from

n_LX = (Y_NL − Y_PL)(Z_PR − Z_PL) − (Y_PR − Y_PL)(Z_NL − Z_PL)   …(15)
n_LY = (Z_NL − Z_PL)(X_PR − X_PL) − (Z_PR − Z_PL)(X_NL − X_PL)   …(16)
n_LZ = (X_NL − X_PL)(Y_PR − Y_PL) − (X_PR − X_PL)(Y_NL − Y_PL)   …(17)

The normal vector n_R = (n_RX, n_RY, n_RZ) of the plane containing the second reference part group is derived by the same calculation. Finally, the resultant vector n_F = n_R + n_L = (n_X, n_Y, n_Z) of the normal vectors n_L and n_R is obtained as the face direction vector indicating the final face posture direction.

The centroids (X_GR, Y_GR, Z_GR) and (X_GL, Y_GL, Z_GL) of the two triangles formed by the first and second reference part groups determined as described above are computed from

(X_GR, Y_GR, Z_GR) = {(X_PR + X_PL + X_NR)/3, (Y_PR + Y_PL + Y_NR)/3, (Z_PR + Z_PL + Z_NR)/3}   …(18)
(X_GL, Y_GL, Z_GL) = {(X_PR + X_PL + X_NL)/3, (Y_PR + Y_PL + Y_NL)/3, (Z_PR + Z_PL + Z_NL)/3}   …(19)

and the midpoint (X_GC, Y_GC, Z_GC) of the two centroids is then obtained from

(X_GC, Y_GC, Z_GC) = {(X_GR + X_GL)/2, (Y_GR + Y_GL)/2, (Z_GR + Z_GL)/2}   …(20)
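
Equations (18) to (20) reduce to simple averaging; a short sketch with the same illustrative argument names as above follows.

```python
import numpy as np

# Centroids of the two reference triangles and their midpoint,
# which is taken as the face position.
def face_position(p_pr, p_pl, p_nl, p_nr):
    g_r = (p_pr + p_pl + p_nr) / 3.0   # equation (18)
    g_l = (p_pr + p_pl + p_nl) / 3.0   # equation (19)
    return (g_r + g_l) / 2.0           # equation (20)
```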

Finally, the face posture of the subject A is identified as the line that passes through the centroid (X_GC, Y_GC, Z_GC) with direction given by the vector n_F. The position and direction of the subject's face can thus be determined. The centroid (X_GC, Y_GC, Z_GC) computed as the face position may also be computed as the centroid of the four feature points.

According to the face posture detection method described above, the distances between three parts among the subject A's left and right pupils and nostrils are determined in advance, the two-dimensional positions of those three parts are detected from a face image captured by the single camera 2, and the face posture is identified by calculating the normal direction of the plane containing the three parts from the distances between them and their two-dimensional positions. The angular range over which a single camera can capture the feature points is therefore wider than the range over which a pair of cameras can capture them by stereo measurement or the like, and efficient face posture detection becomes possible with a simple imaging system. For example, if the horizontal range over which one camera can capture the subject's feature points is ±45 degrees, the detection range obtained with two cameras whose optical axes are inclined about 10 degrees to each other is about ±40 degrees (see FIG. 4), whereas the detection method of the present invention achieves a face posture detection range of about ±45 degrees with a single camera (see FIG. 1).

Furthermore, since two combinations of three of the left and right pupils and nostrils are selected as the first and second reference part groups and the face posture is detected from the normal directions of the two planes containing the respective part groups, the detection accuracy of the face posture is further improved.

(Second Embodiment)
Next, a second embodiment of the present invention will be described. FIG. 3 is a plan view showing the positional relationship between an imaging system 101 according to the second embodiment of the present invention and the subject A. As shown in the figure, the imaging system 101 comprises two cameras (imaging means) 102 and 112 that capture face images of the subject A, light sources 103a and 113a provided near the imaging lenses on the front surfaces 102a and 112a of the cameras 102 and 112, respectively, and light sources 103b and 113b provided at positions away from the front surfaces 102a and 112a. That is, the face posture of the subject A is detected using two cameras.

The cameras 102 and 112 are arranged so that their optical axes L2 and L3 are orthogonal to each other. The subject A is located at the intersection of the optical axes L2 and L3 during face posture detection. The functions of the light sources 103a and 103b and their positional relationship to the camera 102, and the functions of the light sources 113a and 113b and their positional relationship to the camera 112, are the same as those of the light sources 3a and 3b described in the first embodiment.

A face posture detection method using the imaging system 101 will now be described, focusing on the differences from the first embodiment.

First, a face image is captured with the subject A facing along the optical axis L2 of the camera 102 or the optical axis L3 of the camera 112; after the positions of the centers of the left and right pupils and of the centers of the left and right nostrils have been detected, the distance D_P1 between the left and right pupil centers and the distance D_N1 between the left and right nostril centers on the face image are computed. The subject A is then made to face in the direction of the straight line L4 that passes through the intersection of the optical axes L2 and L3 and the midpoint between the cameras 102 and 112. Face images are captured simultaneously with each of the two cameras 102 and 112, and the two-dimensional positions of the four feature points on each face image are detected. Here, because the optical axes L2 and L3 of the cameras 102 and 112 are inclined 45 degrees with respect to the direction of the subject A's face, the cameras 102 and 112 may fail to detect a nostril. When one of the nostrils cannot be detected in this way, the two-dimensional position of its center is estimated by the same method as described above, from the previously computed distance D_P1 between the pupil centers and distance D_N1 between the nostril centers together with the already detected distance D_P between the pupil centers. Thereafter, the three-dimensional coordinates of the four feature points P_n (n = 1, 2, 3, 4) are determined from the two-dimensional positions of the feature points in the two face images by stereo measurement, and the distances L_ij (i, j = 1, 2, 3, 4) between the feature points are computed from these three-dimensional coordinates (distance derivation step).
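
The stereo part of this distance derivation step can be sketched with standard triangulation; the following assumes the 3 × 4 projection matrices of the two calibrated cameras are available, which the patent itself does not spell out.

```python
import cv2
import numpy as np

def measure_feature_distances(proj1, proj2, pts1, pts2):
    """Triangulate the four feature points seen by both cameras and
    measure every inter-point distance L_ij.

    proj1, proj2: 3x4 projection matrices from a prior stereo
    calibration (assumed available); pts1, pts2: 2x4 float32 arrays
    of the feature points in each image, in the same order."""
    hom = cv2.triangulatePoints(proj1, proj2, pts1, pts2)  # 4x4, homogeneous
    pts3d = (hom[:3] / hom[3]).T                           # four 3D points
    return {(i + 1, j + 1): float(np.linalg.norm(pts3d[i] - pts3d[j]))
            for i in range(4) for j in range(i + 1, 4)}
```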

Next, the subject A faces in the direction to be detected, and based on the face image generated by one of the two cameras 102 and 112, the two-dimensional coordinates Q_1 and Q_2 of the centers of the left and right pupils and the two-dimensional coordinates Q_3 and Q_4 of the centers of the left and right nostrils on the face image are detected (position detection step). Which camera's face image to use is decided according to whether the feature points of the subject A were detected successfully.

Finally, based on the two-dimensional positions Q_n (n = 1, 2, 3, 4) of the four feature points on the face image detected in the position detection step and the distances L_ij (i, j = 1, 2, 3, 4) computed in the distance derivation step, the actual three-dimensional coordinates P_1 and P_2 of the left and right pupil centers and the three-dimensional coordinates P_3 and P_4 of the left and right nostril centers are computed, and the posture of the subject A is then derived by determining the normal vectors n_L and n_R of the planes containing the first and second reference part groups (posture derivation step).

According to the face posture detection method described above, the distances between the feature points P_n of each individual subject A are easily obtained, and at the same time the range of detectable face postures can be extended efficiently with a small number of cameras. For example, when the angle formed by the optical axes L2 and L3 of the two cameras 102 and 112 is 90 degrees, the face direction can be detected over a range of ±90 degrees (see FIG. 3).

The present invention is not limited to the embodiments described above. For example, in the face posture detection method of the present invention, three feature points excluding the center of one of the nostrils were selected as the reference part groups of the subject A for which the normal vectors are computed, but various other combinations of three feature points can be selected. For example, feature points excluding one of the pupil centers may be used as a reference part group.

FIG. 1 is a plan view showing the positional relationship between the imaging system according to the first embodiment of the present invention and the subject.
FIG. 2 is a diagram showing the positional relationship between the image plane and the subject in a coordinate system whose origin is the principal point of the imaging lens of the camera of FIG. 1.
FIG. 3 is a plan view showing the positional relationship between the imaging system according to the second embodiment of the present invention and the subject.
FIG. 4 is a plan view showing the positional relationship between an imaging system according to a comparative example of the present invention and the subject.

Explanation of symbols

1, 101 … imaging system; 2, 102, 112 … camera; 3a, 3b, 103a, 103b, 113a, 113b … light source; A … subject.

Claims (6)

1. A face posture detection method comprising:
a distance derivation step of determining distances between parts of a first reference part group, which is a combination of three of the left and right pupils and the left and right nostrils of a subject;
a position detection step of generating a face image of the subject with a single imaging means and detecting, based on the face image, two-dimensional positions of the first reference part group on the face image; and
a posture derivation step of deriving a face posture of the subject by calculating a normal direction of a plane containing the first reference part group, based on the distances determined in the distance derivation step and the two-dimensional positions detected in the position detection step.
2. The face posture detection method according to claim 1, wherein
in the distance derivation step, the distances between the parts of the first reference part group are determined by stereo measurement using two imaging means, and
in the position detection step, the two-dimensional positions of the first reference part group are detected using one of the two imaging means.
3. The face posture detection method according to claim 1 or 2, wherein
in the distance derivation step, distances between parts of a second reference part group, which is a combination of three of the left and right pupils and the left and right nostrils other than the first reference part group, are also determined,
in the position detection step, two-dimensional positions of the second reference part group on the face image are also detected based on the face image, and
in the posture derivation step, a normal direction of a plane containing the second reference part group is further calculated based on the distances for the second reference part group determined in the distance derivation step and the two-dimensional positions for the second reference part group detected in the position detection step, and the face posture of the subject is derived using that normal direction and the normal direction of the plane containing the first reference part group.
4. The face posture detection method according to any one of claims 1 to 3, wherein
in the position detection step, a first face image is generated while illumination light is emitted toward the subject from a first light source attached to the imaging means, a second face image is generated while illumination light is emitted toward the subject from a second light source provided at a greater distance from the optical axis of the imaging means than the first light source, and two-dimensional positions of the pupils in the first reference part group are detected by taking a difference between the first face image and the second face image.
5. The face posture detection method according to any one of claims 1 to 3, wherein
in the position detection step, a first face image is generated while illumination light is emitted from a first light source toward the subject, a second face image is generated while illumination light is emitted toward the subject from a second light source whose emission wavelength differs from that of the first light source, and two-dimensional positions of the pupils in the first reference part group are detected by taking a difference between the first face image and the second face image.
6. The face posture detection method according to claim 5, wherein
the first and second light sources are provided so that their distances from the optical axis of the imaging means are equal.
JP2006100211A 2006-03-31 2006-03-31 Face posture detection method Active JP4431749B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006100211A JP4431749B2 (en) 2006-03-31 2006-03-31 Face posture detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006100211A JP4431749B2 (en) 2006-03-31 2006-03-31 Face posture detection method

Publications (2)

Publication Number Publication Date
JP2007271554A (en) 2007-10-18
JP4431749B2 JP4431749B2 (en) 2010-03-17

Family

ID=38674492

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006100211A Active JP4431749B2 (en) 2006-03-31 2006-03-31 Face posture detection method

Country Status (1)

Country Link
JP (1) JP4431749B2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6083761B2 (en) 2012-05-25 2017-02-22 国立大学法人静岡大学 Pupil detection method, corneal reflection detection method, face posture detection method, and pupil tracking method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012141962A (en) * 2010-12-14 2012-07-26 Canon Inc Position and orientation measurement device and position and orientation measurement method
US9519971B2 (en) 2010-12-14 2016-12-13 Canon Kabushiki Kaisha Position and orientation measurement device and position and orientation measurement method
JP2016045707A (en) * 2014-08-22 2016-04-04 国立大学法人静岡大学 Feature point detection system, feature point detection method, and feature point detection program
US10417782B2 (en) 2014-08-22 2019-09-17 National University Corporation Shizuoka University Corneal reflection position estimation system, corneal reflection position estimation method, corneal reflection position estimation program, pupil detection system, pupil detection method, pupil detection program, gaze detection system, gaze detection method, gaze detection program, face orientation detection system, face orientation detection method, and face orientation detection program
JP2016061653A (en) * 2014-09-17 2016-04-25 安全自動車株式会社 Headlight tester and confronting method of the same
JP2016099759A (en) * 2014-11-20 2016-05-30 国立大学法人静岡大学 Face detection method, face detection device, and face detection program
WO2018030515A1 (en) * 2016-08-12 2018-02-15 国立大学法人静岡大学 Line-of-sight detection device
JPWO2018030515A1 (en) * 2016-08-12 2019-06-13 国立大学法人静岡大学 Gaze detection device
US10902635B2 (en) 2016-08-12 2021-01-26 National University Corporation Shizuoka University Line-of-sight detection device
KR101786579B1 (en) 2017-03-22 2017-10-18 (주) 이즈테크놀로지 Method and Apparatus for Determining Front Face

Also Published As

Publication number Publication date
JP4431749B2 (en) 2010-03-17

Similar Documents

Publication Publication Date Title
JP4431749B2 (en) Face posture detection method
US8929608B2 (en) Device and method for recognizing three-dimensional position and orientation of article
JP6625617B2 (en) Method and apparatus for identifying structural elements of a projected structural pattern in camera images
JP4127545B2 (en) Image processing device
JP5158842B2 (en) Eye movement measuring method and eye movement measuring apparatus
JP4452833B2 (en) Gaze movement detection method and gaze movement detection apparatus
JP5206620B2 (en) Member position recognition device, positioning device, joining device, and member joining method
US9613425B2 (en) Three-dimensional measurement apparatus, three-dimensional measurement method and program
JP2007213353A (en) Apparatus for detecting three-dimensional object
KR20100112853A (en) Apparatus for detecting three-dimensional distance
KR102102291B1 (en) Optical tracking system and optical tracking method
TW201415415A (en) Target object detection device, method for detecting target object, and program
JP2016152027A (en) Image processing device, image processing method and program
JP5429885B2 (en) Feature point tracking method and feature point tracking device
JP2007093412A (en) Three-dimensional shape measuring device
WO2017047282A1 (en) Image processing device, object recognition device, device control system, image processing method, and program
JP2013057541A (en) Method and device for measuring relative position to object
CN114341940A (en) Image processing apparatus, three-dimensional measurement system, and image processing method
JP6346294B2 (en) Ranging light generator
Akasaka et al. A sensor for simultaneously capturing texture and shape by projecting structured infrared light
JP5004099B2 (en) Cursor movement control method and cursor movement control apparatus
JP2011064579A (en) Three-dimensional measuring system and three-dimensional measuring method
WO2019240157A1 (en) Eye movement measurement device, eye movement measurement method, and eye movement measurement program
JP4260715B2 (en) Gaze measurement method and gaze measurement apparatus
JP2021018219A (en) Three-dimensional shape measurement system, three-dimensional shape measurement method, and three-dimensional shape measurement program

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20090605

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090616

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20090817

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20091124

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150