JP2008191785A - Eye closure detection apparatus, doze detection apparatus, eye closure detection method, and program for eye closure detection - Google Patents


Info

Publication number
JP2008191785A
Authority
JP
Japan
Prior art keywords
nostril
eye
detected
mouth position
eyeball tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2007023381A
Other languages
Japanese (ja)
Other versions
JP4781292B2 (en)
Inventor
Kenichi Ogami
健一 大上
Yuji Ninagawa
勇二 蜷川
Kentaro Takahashi
健太郎 高橋
Shinichi Kojima
真一 小島
Satoru Nakanishi
悟 中西
Tomoharu Suzuki
智晴 鈴木
Atsushi Adachi
淳 足立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toyota Motor Corp
Toyota Central R&D Labs Inc
Aisin Corp
Original Assignee
Aisin Seiki Co Ltd
Toyota Motor Corp
Toyota Central R&D Labs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aisin Seiki Co Ltd, Toyota Motor Corp, Toyota Central R&D Labs Inc filed Critical Aisin Seiki Co Ltd
Priority to JP2007023381A priority Critical patent/JP4781292B2/en
Publication of JP2008191785A publication Critical patent/JP2008191785A/en
Application granted granted Critical
Publication of JP4781292B2 publication Critical patent/JP4781292B2/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an eye closure detection apparatus, a doze detection apparatus, an eye closure detection method, and a program for eye closure detection that can detect eye positions even if nostrils cannot be detected in an eye detection apparatus for estimating eye positions from nostril positions.
SOLUTION: The eye closure detection apparatus 10, which has a nostril position determination means 12b for determining nostril positions from feature points extracted from a face image and an opening/closing detection means 12d for setting eyeball tracking areas based on the nostril positions and searching for eyes in the eyeball tracking areas, is provided with a mouth position determination means 12c for determining a mouth position if nostrils cannot be detected. The opening/closing detection means 12d sets eyeball tracking areas based on the mouth position and searches for eyes in the eyeball tracking areas.
COPYRIGHT: (C)2008, JPO&INPIT

Description

The present invention relates to a closed-eye detection apparatus, a doze detection apparatus, a closed-eye detection method, and a program for closed-eye detection that detect the opening and closing of the eyes from a face image, and more particularly to a closed-eye detection apparatus, a doze detection apparatus, a closed-eye detection method, and a program for closed-eye detection that estimate the eyeball positions from facial features other than the eyeballs and detect the opening and closing of the eyes.

A doze detection technique has been proposed in which the driver's face is photographed while driving and the captured image data is processed; when it is detected that the driver is dozing, the driver is warned and prompted to return to a wakeful state (see, for example, Patent Document 1). In Patent Document 1, the nostrils are detected as a feature of the driver's face, the eye positions are estimated from the detected nostrils, and the opening and closing of the eyes is detected in an eyeball tracking region set there. If, for example, the eyes remain closed for a predetermined time or longer, it is determined that the driver's alertness is low and the driver's attention is called by an alarm sound or the like.
[Patent Document 1] Japanese Patent Laid-Open No. 10-86696

The nostrils are rarely covered by glasses or hair and are well separated from the eyebrows, so they are comparatively easy to detect; moreover, unlike the mouth, they do not open and close. Determining the eyeball tracking region with reference to the nostril position can therefore be said to be effective.

However, the nostrils may not be detectable when the driver faces sideways, when direct sunlight from the side casts a shadow on the nostrils even though the driver faces forward, or because of individual differences in the shape of the nose.

In view of the above problems, an object of the present invention is to provide, for a closed-eye detection apparatus that estimates the eye positions from the nostril positions, a closed-eye detection apparatus, a doze detection apparatus, a closed-eye detection method, and a program for closed-eye detection that can detect the eye positions even when the nostrils cannot be detected.

To solve the above problems, the present invention is a closed-eye detection apparatus having nostril position determination means for determining a nostril position from feature points extracted from a face image, and opening/closing detection means for setting an eyeball tracking region with reference to the nostril position and searching for an eye in the eyeball tracking region, characterized in that the apparatus further has mouth position determination means for determining a mouth position when no nostril is detected, and the opening/closing detection means sets the eyeball tracking region with reference to the mouth position and searches for an eye in the eyeball tracking region.

According to the present invention, by detecting the mouth, which is detected less accurately than the nostrils but more easily than the eyes, the eyes can be detected more smoothly when nostril detection is difficult.

The present invention is also a closed-eye detection apparatus having nostril position determination means for determining a nostril position from feature points extracted from a face image, and opening/closing detection means for setting an eyeball tracking region with reference to the nostril position and searching for an eye in the eyeball tracking region, characterized in that the apparatus further has mouth position determination means for determining a mouth position when no nostril is detected, the nostril position determination means determines the nostril position with reference to the mouth position, and the opening/closing detection means sets the eyeball tracking region with reference to the nostril position determined from the mouth position and searches for an eye in the eyeball tracking region.

According to the present invention, since the center line of the face is easier to determine from the nose than from the mouth, detection accuracy can in some cases be improved by first detecting the nostrils from the mouth rather than detecting the eyes directly from the mouth.

In one aspect of the present invention, a doze detection apparatus is characterized by having the above closed-eye detection apparatus, closed-eye time measuring means for measuring a continuous closed-eye time based on the eye opening detected by the opening/closing detection means, and warning means for warning the driver when the closed-eye time is equal to or longer than a predetermined time.

According to the present invention, the driver is warned when the eyes remain closed for a predetermined time or longer, so the driver can be awakened if he or she dozes off.

It is thus possible to provide, for a closed-eye detection apparatus that estimates the eye positions from the nostril positions, a closed-eye detection apparatus, a doze detection apparatus, a closed-eye detection method, and a program for closed-eye detection that can detect the eye positions even when the nostrils cannot be detected.

The best mode for carrying out the present invention will be described below by way of embodiments with reference to the drawings. The closed-eye detection apparatus 10 of the present embodiment determines the nostril position from a face image and detects the opening and closing of the eyes in an eyeball tracking region determined with reference to the nostril position. When the nostril position cannot be determined because of the driver's face orientation or the like, the apparatus either detects the mouth position and determines the eyeball tracking region with reference to the mouth position, or detects the nostrils again with reference to the mouth position and detects the opening and closing of the eyes with reference to that nostril position. Therefore, even when the nostril position is not detected, the opening and closing of the eyes can be detected by detecting the mouth position.

FIG. 1 shows the system configuration of a doze detection apparatus 1 including the closed-eye detection apparatus 10 of the present embodiment. The closed-eye detection apparatus 10 consists of a face camera 11 and a face image processing ECU (Electronic Control Unit) 12, and the doze detection apparatus 1 is configured by further connecting the closed-eye detection apparatus 10 to a collision determination ECU 13, a combination meter 14, a brake ECU 15, and a millimeter wave radar device 16 via an in-vehicle LAN such as a CAN (Controller Area Network).

The face camera 11 is disposed at a position facing the driver's face from the front and slightly below, for example inside the combination panel or on the steering column. The face camera 11 has a photoelectric conversion element such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor, photoelectrically converts incident light according to its intensity, and outputs a digital image (face image) with a predetermined number of luminance gradations (for example, 256).

The face image processing ECU 12 processes the face image captured by the face camera 11 and sends to the collision determination ECU 13 a face orientation value, which indicates how far to the left or right the driver's face is turned relative to the forward direction, and the eye opening (the distance between the upper and lower eyelids), which indicates how far the eyes are open.

The collision determination ECU 13 detects inattentive driving from the face orientation value and its duration, detects drowsy driving from the eye opening and its duration, and sounds an alarm and lights a warning lamp through the combination meter 14. The collision determination ECU 13 also judges the possibility of a collision with an obstacle from the distance and relative speed detected by the millimeter wave radar device 16, sounds an alarm when the possibility of a collision is high, and requests the brake ECU 15 to forcibly brake the vehicle when a collision becomes unavoidable. Furthermore, when the driver's face orientation value is equal to or greater than a predetermined value, or when closed eyes have been continuously detected for a predetermined time or longer, the collision determination ECU 13 sounds the alarm and brakes the vehicle earlier.

The face image processing ECU 12 and the collision determination ECU 13 are each configured as a microcomputer in which a CPU that executes programs, a RAM that serves as a work area for program execution and temporarily stores data, an EEPROM (Electronically Erasable and Programmable Read Only Memory) that retains data even when the ignition is turned off, an input/output interface for data, a communication controller for communicating with other ECUs, a ROM that stores programs, and the like are connected by a bus.

When the CPU executes a program 17 (corresponding to the program for closed-eye detection in the claims), face orientation detection means 12a for detecting the face orientation relative to the forward direction, nostril position determination means 12b for determining the nostril position, mouth position determination means 12c for determining the mouth position, and opening/closing detection means 12d for searching for an eye in the eyeball tracking region and detecting its opening and closing are realized. Likewise, when the CPU of the collision determination ECU 13 executes its program, closed-eye time measuring means 13a for measuring the continuous closed-eye time is realized.

[Detection of Eye Opening and Closing]
First, the detection of eye opening and closing in the case where the nostrils are detected will be described. The face orientation and the opening and closing of the eyes can be detected by the following well-known methods.

FIG. 2(a) shows an example of a face image. From the sequentially input face images, the closed-eye detection apparatus 10 identifies the face contour, the center line, the nostril positions, and the eye positions in that order, and sets a predetermined eyeball tracking region including the eye positions. Once the eyeball tracking region has been set, the detection of eye opening and closing within it is repeated; if a state in which no opening or closing is detected persists, detection is restarted from the face contour.

First, the face orientation detection means 12a detects the face contour as the approximate position of the face in the face image. For example, the face orientation detection means 12a stores, together with their positions, the pixel-value (luminance) differences of pixels that differ by a predetermined amount or more between sequentially input face images, counts the differences at the same position over a predetermined number of face images, builds a vertical histogram of the counted values, and takes the peaks of its integrated value as the horizontal contour positions of the face. Although the background also appears in the face image, the background is stationary, so the horizontal contour of the face can be detected from the left and right ends of the region in which the pixel values change.
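
As an illustration only, the frame-difference histogram described above might be computed as in the following Python sketch; the function name, the difference threshold, and the 10% activity cutoff are assumptions and not taken from the patent.

```python
import numpy as np

def horizontal_face_bounds(frames, diff_thresh=30):
    """Estimate the left/right face contour columns from inter-frame luminance changes.

    frames: sequence of 2-D uint8 grayscale face images of identical shape.
    Because the background is stationary, the columns whose pixel values keep
    changing mark the horizontal extent of the (moving) face.
    """
    counts = np.zeros(frames[0].shape, dtype=np.int32)
    prev = frames[0].astype(np.int16)
    for frame in frames[1:]:
        cur = frame.astype(np.int16)
        counts += (np.abs(cur - prev) >= diff_thresh)  # count large luminance changes per pixel
        prev = cur
    column_hist = counts.sum(axis=0)                   # vertical histogram of the counted values
    active = np.flatnonzero(column_hist > 0.1 * column_hist.max())
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])             # left / right ends of the changing region
```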

The vertical contour of the face is detected from edge information. From the edge information, pixels whose luminance changes strongly compared with the skin are detected, such as the facial parts, eyebrows, eyelids, nostrils, corners of the mouth, and the border between the upper and lower lips. The edge information is extracted using, for example, the Sobel edge detection algorithm, which yields the contour of each facial part surrounded by two kinds of edges: dark-to-bright and bright-to-dark.
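
A minimal sketch of the Sobel-based edge extraction, assuming OpenCV is available; splitting the vertical gradient by sign yields the two kinds of edges (dark-to-bright and bright-to-dark) mentioned above. The threshold value is illustrative.

```python
import cv2

def vertical_edge_masks(gray, thresh=40.0):
    """Return (bright_to_dark, dark_to_bright) masks from the vertical Sobel gradient.

    gray: 2-D uint8 grayscale face image.
    A bright-to-dark transition going downward (e.g. skin -> eyelid) produces a
    negative vertical gradient; a dark-to-bright transition produces a positive one.
    """
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # derivative in the vertical direction
    bright_to_dark = gy < -thresh
    dark_to_bright = gy > thresh
    return bright_to_dark, dark_to_bright
```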

In addition, since facial parts such as the eyes and nose are arranged symmetrically, the face orientation detection means 12a detects the center line of the face so that the amounts of edge information on the left and right are approximately equal. When the driver turns sideways, the relative position of the center line and the horizontal contour changes, so the face orientation can be tracked by monitoring the center line.

Then, a region of continuous black pixels (a region surrounded by edge information) on, for example, the left side of the obtained face center line is scanned from above (the eyebrow side) along scanning lines a to c to detect one eyebrow. Since the eyebrows can be assumed to be symmetric, if a similar continuous region of black pixels exists on the opposite side of the center line, the left and right eyebrow positions are determined as the upper contour of the face. Edge information that lies below the eyebrow positions, crosses the center line of the face, and continues horizontally over a predetermined length or more is then detected; based on its width, strength, and so on, this edge information is regarded as the border between the upper and lower lips and determined as the lower contour of the face. In this way, the face contour positions are obtained.

Next, the nostril position determination means 12b detects the nostril positions. The nostril positions are detected in order to set the eyeball tracking region, but the eyeball tracking region could also be set with reference to the mouth position. However, as described later, the detection of the eyelid positions makes use of the change in the position of the upper eyelid caused by blinking, and since the mouth also opens and closes because of conversation, breathing, and so on, it is difficult to distinguish from an eyelid. In addition, while the mouth position appears as a horizontal edge, a face image contains many parts with horizontally continuous edge information, such as the eyebrows, the eyes, and the area under the nose.

In contrast, since there are two nostrils arranged symmetrically about the center line, the confidence when they are detected is considered higher than for the mouth position. Moreover, because the mouth is a long horizontal edge, it is difficult to judge where its horizontal center is, whereas the midpoint of the two nostrils can be taken as the horizontal center, so the eyeball tracking region is easier to set in an appropriate place.

The nostril position determination means 12b sets a slightly vertically elongated nostril detection region that passes through the center line of the face, above the lips. The nostril detection region is made vertically elongated to make it easier to exclude the influence of moles and light and to reduce the image processing load when detecting the nostrils by pattern matching. The nostril position determination means 12b detects a nostril by extracting feature points that have the characteristics of a nostril, such as its shape, size, and luminance. For example, the light and dark areas of the nostril detection region are made distinct by binarization or the like, and either the left or right nostril is detected by pattern matching against a standard pattern of a left or right nostril, or by checking whether a blob of black pixels with at least a predetermined area is found. When one nostril is detected, a nostril approximately symmetric to it about the center line is searched for, and when two nostrils are detected side by side horizontally, the nostril position is determined.
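
The black-pixel-blob variant of the nostril search described above could be sketched as follows; the binarization threshold, the minimum blob area, and the function name are placeholders. A pair of blobs lying at roughly the same height and roughly symmetric about the center line would then be accepted as the two nostrils.

```python
import numpy as np

def nostril_blob_candidates(region, bin_thresh=60, min_area=20):
    """Find dark blobs (nostril candidates) in the vertically elongated detection region.

    region: 2-D uint8 grayscale crop above the lips, centred on the face center line.
    Returns a list of (row, col) centroids of connected dark areas of at least min_area pixels.
    """
    dark = region < bin_thresh                      # binarize: nostrils appear as dark pixels
    visited = np.zeros_like(dark, dtype=bool)
    h, w = dark.shape
    blobs = []
    for r in range(h):
        for c in range(w):
            if dark[r, c] and not visited[r, c]:
                stack, pixels = [(r, c)], []
                visited[r, c] = True
                while stack:                        # flood fill one connected dark area
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and dark[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    blobs.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return blobs
```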

When the nostril position has been determined, the opening/closing detection means 12d uses statistical data on the relationship between the nostril and eye positions to set a predetermined region relative to the nostril position as the eyeball tracking region. FIG. 2(c) is a diagram for explaining the eyeball tracking region. If the face is level, the two nostrils are aligned horizontally, and the position of the eyeball relative to their midpoint is specified by a distance r and a direction θ. Since the range of the distance r and the range of the direction θ within which the eyeball lies with at least a predetermined probability are known from the statistical data, the opening/closing detection means 12d sets a rectangular region covering them as the eyeball tracking region.
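
As a sketch of how the (r, θ) statistics could be turned into a rectangular tracking region, assuming the statistical ranges are supplied as (min, max) pairs in pixels and radians; these parameter names and the coordinate convention are assumptions.

```python
import math

def eye_tracking_region(nostril_mid, r_range, theta_range):
    """Rectangle (top, left, bottom, right) covering the statistical eye positions.

    nostril_mid: (row, col) midpoint of the two nostrils.
    r_range:     (r_min, r_max) distance from the midpoint to the eyeball, in pixels.
    theta_range: (theta_min, theta_max) direction in radians from the horizontal axis,
                 positive pointing upward in the image.
    For the narrow angle ranges involved, the four corner points bound the sector well enough.
    """
    row0, col0 = nostril_mid
    rows, cols = [], []
    for r in r_range:
        for theta in theta_range:
            rows.append(row0 - r * math.sin(theta))  # image rows grow downward
            cols.append(col0 + r * math.cos(theta))
    return min(rows), min(cols), max(rows), max(cols)
```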

The opening/closing detection means 12d detects the opening and closing of the eye by monitoring the upper and lower eyelids in the eyeball tracking region. The opening/closing detection means 12d obtains edge information from the eyeball tracking region, searches the pixel column at the upper-left corner of the region from top to bottom for a bright-to-dark edge (from skin to eyelid), and, once an edge is found, performs the same search on the next pixel column to the right. When this search has been completed up to the right end, the pixel positions of the upper eyelid are obtained. The opening/closing detection means 12d obtains the pixel positions of the lower eyelid in the same way.
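
A simplified sketch of the column-by-column eyelid search, reusing the sign convention of the vertical gradient from the Sobel sketch above; the edge threshold is a placeholder.

```python
import numpy as np

def eyelid_positions(gy, edge_thresh=40.0):
    """Scan each column of the eye tracking region for the upper and lower eyelids.

    gy: vertical luminance derivative of the tracking region
        (positive where the image gets brighter downward).
    Returns two arrays of row indices per column (np.nan where no edge was found).
    """
    h, w = gy.shape
    upper = np.full(w, np.nan)
    lower = np.full(w, np.nan)
    for col in range(w):
        column = gy[:, col]
        to_dark = np.flatnonzero(column < -edge_thresh)   # skin -> eyelid (bright to dark)
        to_bright = np.flatnonzero(column > edge_thresh)  # eyelid -> skin (dark to bright)
        if to_dark.size:
            upper[col] = to_dark[0]                       # first bright-to-dark edge from the top
            below = to_bright[to_bright > to_dark[0]]
            if below.size:
                lower[col] = below[0]
    return upper, lower
```

The eye opening used later is then simply the largest per-column difference, e.g. `np.nanmax(lower - upper)`.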

The opening/closing detection means 12d also decides whether it is really detecting the upper eyelid based on whether the upper eyelid positions detected in this way change greatly within a predetermined number of face images. Since the driver blinks within a given time, the position of the upper eyelid is known to move more than other parts around the eye (for example, the lower eyelid, the eyebrows, or the frame of glasses); if the upper eyelid position moves greatly, it can therefore be assumed that the upper eyelid position is being detected correctly.
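
A small sketch of this blink-based plausibility check, assuming a short history of per-frame upper-eyelid positions is kept; the window length and the movement threshold are placeholders.

```python
import numpy as np

def upper_eyelid_confirmed(upper_history, min_motion_px=3):
    """True if the tracked 'upper eyelid' moves like an eyelid, i.e. it blinks.

    upper_history: mean upper-eyelid row position in each of the last N face images
                   (np.nan entries, frames without a detection, are ignored).
    A static feature such as an eyebrow or a glasses frame shows little vertical motion.
    """
    positions = np.asarray(upper_history, dtype=float)
    positions = positions[~np.isnan(positions)]
    if positions.size < 2:
        return False
    return float(positions.max() - positions.min()) >= min_motion_px
```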

Next, the opening/closing detection means 12d calculates the difference between the pixel positions of the upper and lower eyelids (the vertical distance) in order from the left, and takes the maximum number of pixels as the eye opening. Even when the driver's eyes are closed, the eye opening is detected from the positions of the upper and lower eyelids by the same process.


The opening/closing detection means 12d detects the eye opening from the face image captured every cycle time and sends it to the collision determination ECU 13 in sequence. The closed-eye time measuring means 13a of the collision determination ECU 13 then compares the eye opening with a threshold for judging whether the eye is open or closed: if the opening is equal to or greater than the threshold the eye is judged open, and if it is smaller the eye is judged closed. When the eyes remain closed over consecutive frames, the closed-eye time is measured continuously.
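
A minimal sketch of the threshold comparison and continuous closed-eye timing on the collision determination ECU side; the opening threshold, the cycle time, and the warning limit are illustrative values, not taken from the patent.

```python
class ClosedEyeTimer:
    """Accumulates the continuous closed-eye time from per-frame eye openings."""

    def __init__(self, open_thresh_px=4, cycle_time_s=0.1, warn_after_s=2.0):
        self.open_thresh_px = open_thresh_px  # opening (pixels) at or above which the eye counts as open
        self.cycle_time_s = cycle_time_s      # interval between face images
        self.warn_after_s = warn_after_s      # closed-eye duration that triggers a warning
        self.closed_time_s = 0.0

    def update(self, eye_opening_px):
        """Feed one frame's eye opening; returns True when a warning should be issued."""
        if eye_opening_px >= self.open_thresh_px:
            self.closed_time_s = 0.0          # eye judged open: the continuous timer restarts
        else:
            self.closed_time_s += self.cycle_time_s
        return self.closed_time_s >= self.warn_after_s
```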

It is preferable to set the eyeball tracking region from the nostril position, but the nostrils may not be detectable when the driver turns sideways by more than a predetermined amount, when shadows form on the nostrils because of lighting even though the driver faces forward, or because of individual differences in the shape of the nose. In this example, therefore, a closed-eye detection apparatus 10 is described that, when the nostril position cannot be determined, detects the mouth position and sets the eyeball tracking region using statistical data with the mouth position as the reference.

FIG. 3 shows an example of a face image after image processing in which the center line and so on have been detected. The face image of FIG. 3 was obtained by extracting edge information after binarization and detecting the contour of the face; the region outside the contour (hatched portion) is masked with black pixels because it is unnecessary for the detection of closed eyes.

In FIG. 3, the contours of the facial parts of the eyebrows 31, the eyelids 32, and the mouth 33 have been detected, but the contours of the nostrils 34 (shown by dotted lines), which should normally be detected, have not. When the nostril position determination means 12b cannot determine the nostril position in this way, the mouth position determination means 12c determines the position of the mouth 33.

FIG. 4(a) is a diagram for explaining the determination of the mouth position. Since the contour and the center line have already been detected by the face orientation detection means 12a, the mouth position determination means 12c sets a horizontally elongated mouth position detection region near the lower contour. Since the mouth position can be detected from the luminance changes occurring at the upper lip, the lower lip, and the border between them, the mouth position determination means 12c obtains edge information by differentiating the luminance values of the mouth position detection region in the vertical direction. It then searches downward from the pixel column at the upper-left corner of the mouth position detection region for edges and stores the detected edge information and its strength for each pixel. When this has been carried out up to the rightmost pixel column, the horizontally continuous edges produced by the luminance changes at the upper lip, the lower lip, and their border, together with their widths and strengths, have been obtained.
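
As an illustration, the vertical differentiation and the grouping into horizontally continuous edges might look like the sketch below; the thresholds and the run-length criterion are assumptions.

```python
import numpy as np

def horizontal_lip_edges(region, edge_thresh=25, min_run=15):
    """Collect horizontally continuous edge rows in the mouth position detection region.

    region: 2-D uint8 grayscale crop near the lower face contour.
    Returns (row, run_length, mean_strength) tuples for candidate lip edges,
    strongest and longest first; the top candidate plays the role of the lip border edge.
    """
    gy = np.abs(np.diff(region.astype(np.int16), axis=0))  # vertical luminance derivative
    edge_mask = gy >= edge_thresh
    candidates = []
    for row in range(edge_mask.shape[0]):
        cols = np.flatnonzero(edge_mask[row])
        if cols.size >= min_run:
            # a row with many edge pixels is treated as one horizontally continuous edge
            candidates.append((row, int(cols.size), float(gy[row, cols].mean())))
    candidates.sort(key=lambda c: (c[1], c[2]), reverse=True)
    return candidates
```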

In FIG. 4(a), an upper lip edge 35, a lip border edge 36, a lower lip edge 37, and a shadow edge 38 caused by the shadow under the lips have been detected. From the edges in the mouth position detection region, the mouth position determination means 12c preferably detects the longest edge with the greatest width and strength (the lip border edge 36); if no such edge exists, it detects the lip border edge 36 according to a predetermined order of priority (for example, the longest edge). Since the lips are a region characterized by a reddish color compared with the surrounding skin color, the mouth position may also be specified using color information.

The mouth position determination means 12c determines, for example, the intersection of the center line and the lip border edge 36 as the mouth position (the horizontal center of the mouth). Since the horizontal ends of an edge may be indistinct, the mouth position may instead be determined from the average of the intersections of the center line with several edges of at least a predetermined length.

When the mouth position has been determined, the opening/closing detection means 12d sets the eyeball tracking region using statistical data on the relationship between the mouth position and the eyeball position. FIG. 4(b) is a diagram showing an example of the relationship between the mouth position and the eyeball position. The position of the eyeball relative to the mouth position is specified by a distance D and a direction a; since the range of the distance D and the range of the direction a within which the eyeball lies with at least a predetermined probability are known from the statistical data, the opening/closing detection means 12d sets a rectangular region covering them as the eyeball tracking region.

Thereafter, the opening/closing detection means 12d can detect the opening and closing of the eyes in the eyeball tracking region in the same way as when the nostril position has been determined.

FIG. 5 shows an example of a flowchart in which the closed-eye detection apparatus 10 detects the opening and closing of the eyes using the mouth position when the nostril position cannot be determined.

The face orientation detection means 12a detects the contour and the center line from the face image captured by the face camera 11 (S1).

Next, the nostril position determination means 12b sets a vertically elongated nostril detection region and determines whether the nostrils can be detected using the characteristics of the nostrils (S2). In this determination, it may be judged that the nostrils cannot be detected when both or one of the two nostrils cannot be detected; alternatively, since nostril detection is known to become difficult when the face orientation is equal to or greater than a predetermined value, it may be judged in that case that the nostrils cannot be detected without attempting to detect one or both of them. This reduces the image processing load.

When the nostrils are detected (Yes in S2), the opening/closing detection means 12d sets the eyeball tracking region with reference to the nostril position (S3), detects the eyelids in the eyeball tracking region, and detects the eye opening from the distance between the upper and lower eyelids (S4).

When the nostrils are not detected (No in S2), the mouth position determination means 12c sets the mouth position detection region using the face contour and the center line (S5). The mouth position determination means 12c determines the mouth position using a horizontally long edge such as the lip border edge 36, and the opening/closing detection means 12d sets the eyeball tracking region using statistical data on the relationship between the mouth position and the eyeball position (S6). As when the nostrils are detected, the opening/closing detection means 12d then detects the eyelids in the eyeball tracking region and detects the eye opening from the distance between the upper and lower eyelids (S4).
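
Putting steps S1 to S6 together, the decision flow of FIG. 5 could be sketched as below. The stage functions are passed in by the caller and stand in for the hypothetical helpers sketched earlier; this is an illustration of the branching, not the patent's actual implementation.

```python
def eye_opening_for_frame(face_image, stages, nostril_stats, mouth_stats):
    """One cycle of the FIG. 5 flow (S1, S2, S3/S5-S6, S4); returns the eye opening or None.

    stages: dict of callables supplied by the caller:
      'contour_center': face_image -> (contour, center_line)                   (S1)
      'nostrils':       (face_image, center_line) -> nostril midpoint or None  (S2)
      'mouth':          (face_image, contour, center_line) -> position or None (S5)
      'region':         (reference_point, stats) -> eyeball tracking region    (S3 / S6)
      'opening':        (face_image, region) -> eye opening in pixels          (S4)
    """
    contour, center_line = stages['contour_center'](face_image)        # S1
    reference = stages['nostrils'](face_image, center_line)            # S2
    stats = nostril_stats
    if reference is None:                                              # S2: No
        reference = stages['mouth'](face_image, contour, center_line)  # S5
        stats = mouth_stats
        if reference is None:
            return None
    region = stages['region'](reference, stats)                        # S3 or S6
    return stages['opening'](face_image, region)                       # S4
```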

In the closed-eye detection apparatus 10 of this example, when the nostrils are difficult to detect, the mouth position, which is detected less accurately than the nostrils but more easily than the eyeball, is detected first and then the eye opening is detected, so the eye opening can be detected more smoothly.

In the first example, the eyeball tracking region was set using statistical data with the mouth position as the reference. In this example, a closed-eye detection apparatus 10 is described that, after determining the mouth position, sets a nostril detection region with the mouth position as the reference, detects the nostrils in it, and detects the eye opening from the nostril position thus determined.

If the mouth position and the center line are known, the accuracy with which the nostril position can be estimated also improves, so the threshold for detecting feature points corresponding to the nostrils can be lowered and the nostrils become easier to detect.

FIG. 6 is a diagram showing an example of the relationship between the mouth position and the nostril position. The nostril detection region is again set from the mouth position using statistical data, but the vertical distance L1 between the mouth position and the nostrils is shorter than the distance between the eyes and the nostrils and its individual variation is smaller, so the vertical width W1 of the nostril detection region can be narrowed. The horizontal distance L2 between the mouth position and a nostril shows even less individual variation and could be limited to a predetermined distance from the center line; however, when the face orientation becomes large the detected position of the center line may contain an error, and the relative position of the center line and the left and right nostrils changes, so the horizontal width W2 of the nostril detection region is set to about the same as W1. In this way, once the mouth position has been detected, the vertically elongated nostril detection region can be made smaller.
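
A sketch of how the narrowed nostril detection region might be laid out from the mouth position; L1, W1, and W2 follow the description above, but their concrete values would come from the statistical data and are left as parameters here.

```python
def nostril_region_from_mouth(mouth_pos, L1, W1, W2):
    """Rectangle (top, left, bottom, right) of the narrowed nostril detection region.

    mouth_pos: (row, col) of the mouth position on the face center line.
    L1: statistical vertical distance from the mouth to the nostrils, in pixels.
    W1: vertical width of the region (small, since L1 varies little between people).
    W2: horizontal width, kept about the same as W1 to tolerate errors in the
        detected center line when the face is turned.
    """
    row, col = mouth_pos
    center_row = row - L1                # the nostrils lie above the mouth
    return (center_row - W1 / 2, col - W2 / 2,
            center_row + W1 / 2, col + W2 / 2)
```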

After the nostril detection region has been narrowed down in this way, the risk of falsely detecting a nostril because of a mole or lighting is reduced, so the threshold for nostril detection can be lowered.

The threshold referred to here is, for example, the threshold for binarizing the nostril detection region; in that case, lowering the threshold makes it easier to detect the feature points of the nostrils, which should be imaged with small (dark) luminance values. In the pattern matching process, a nostril is judged to be present where the pixel values match best within the nostril detection region if the matching score there is at least a predetermined value; reducing that predetermined value makes it easier to detect nostrils that are not imaged in their normal form because, for example, the face is turned sideways. When a blob of black pixels with at least a predetermined area is to be detected, black pixels separated by up to a predetermined number of pixels (for example, N pixels) are treated as continuous even if they are not actually contiguous; if the judgment threshold for a black-pixel blob is relaxed to, for example, 2×N pixels, black pixels are treated as continuous across larger gaps, which makes it easier to detect a black-pixel blob of at least the predetermined area when a nostril is imaged but its surrounding edges have become weak.
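
A rough sketch of how the detection parameters might be relaxed once the mouth position has narrowed the search area; the scale factor is an assumption, while the 0.7 floor corresponds to the lower limit mentioned in the next paragraph. Calling this once per retry would gradually ease detection without ever going below 0.7 of the original setting.

```python
def relax_nostril_thresholds(thresholds, gap_px, scale=0.85, floor=0.7):
    """Relax the nostril-detection thresholds toward (but not below) `floor`.

    thresholds: dict of strictness values normalized so that 1.0 is the initial
                setting (e.g. {'binarize': 1.0, 'match_score': 1.0}).
    gap_px:     N, the pixel gap still treated as connected when growing a
                black-pixel blob; it is doubled to 2*N.
    Returns (relaxed_thresholds, relaxed_gap_px).
    """
    relaxed = {name: max(value * scale, floor) for name, value in thresholds.items()}
    return relaxed, 2 * gap_px
```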

Even when the threshold is lowered, setting a lower limit of, for example, about 0.7 when the original threshold is taken as 1.0 prevents moles, shadows, and the like from being falsely detected.

If the nostrils are detected from the mouth position in this way, the midpoint of the left and right nostrils is obtained, so the center line of the face is easier to determine than from the mouth position; as a result, the eyeball tracking region can be set accurately and the eye opening can also be detected accurately.

FIG. 7 shows an example of a flowchart in which, when the nostril position cannot be determined, the closed-eye detection apparatus 10 detects the nostrils using the mouth position and detects the opening and closing of the eyes with reference to the nostril position. In FIG. 7, the same reference numerals are given to the same steps as in FIG. 5.

The face orientation detection means 12a detects the contour and the center line from the face image captured by the face camera 11 (S1).

Next, the nostril position determination means 12b sets a vertically elongated nostril detection region and determines whether the nostrils can be detected using the characteristics of the nostrils (S2). In this determination, it may be judged that the nostrils cannot be detected when both or one of the two nostrils cannot be detected; alternatively, since nostril detection is known to become difficult when the face orientation is equal to or greater than a predetermined value, it may be judged in that case that the nostrils cannot be detected without attempting to detect one or both of them. This reduces the image processing load.

When the nostrils are detected (Yes in S2), the opening/closing detection means 12d sets the eyeball tracking region with reference to the nostril position (S3), detects the eyelids in the eyeball tracking region, and detects the eye opening from the distance between the upper and lower eyelids (S4).

When the nostrils are not detected (No in S2), the mouth position determination means 12c sets the mouth position detection region using the face contour and the center line (S5). The mouth position determination means 12c determines the mouth position using a horizontally long edge such as the lip border edge 36.

The nostril position determination means 12b then uses statistical data on the relationship between the mouth position and the nostril position to set a narrowed nostril detection region, lowers the threshold for detecting the nostrils, and detects the nostrils (S10).

The process then returns to step S2, and the nostril position determination means 12b determines whether the nostrils can be detected (S2). An upper limit is placed on the number of times the determination in step S2 is repeated, so that nostril detection is not repeated endlessly. The nostril detection region and the threshold for nostril detection may also be changed at each repetition so that the nostrils become easier to detect.

When the nostrils are detected in step S2 (Yes in S2), the opening/closing detection means 12d can detect the eye opening (S4).

According to this example, when no nostril is detected with the normal nostril detection region and threshold, the mouth position is determined and the nostrils are detected again from the mouth position, so the eyeball tracking region can be set from the midpoint of the left and right nostrils and the eye opening can be detected accurately.

As described above, the closed-eye detection apparatus 10 of the present embodiment determines the mouth position even when the nostrils cannot be detected, and either sets the eyeball tracking region from the mouth position or attempts to detect the nostrils again from the mouth position and sets the eyeball tracking region from the nostril position thus determined; closed eyes can therefore be detected even when the nostrils are difficult to detect.

FIG. 1 is a system configuration diagram of a doze detection apparatus including the closed-eye detection apparatus.
FIG. 2 is a diagram for explaining the detection of eye opening and closing from a face image.
FIG. 3 is a diagram showing an example of a face image after image processing in which the center line and so on have been detected.
FIG. 4 is a diagram for explaining the determination of the mouth position.
FIG. 5 is an example of a flowchart in which the closed-eye detection apparatus detects the opening and closing of the eyes using the mouth position when the nostril position cannot be determined.
FIG. 6 is a diagram showing an example of the relationship between the mouth position and the nostril position.
FIG. 7 is an example of a flowchart in which, when the nostril position cannot be determined, the closed-eye detection apparatus detects the nostrils using the mouth position and detects the opening and closing of the eyes with reference to the nostril position.

Explanation of Reference Numerals

1 Doze detection apparatus
10 Closed-eye detection apparatus
11 Face camera
12 Face image processing ECU
12a Face orientation detection means
12b Nostril position determination means
12c Mouth position determination means
12d Opening/closing detection means
13 Collision determination ECU
13a Closed-eye time measuring means
14 Combination meter
15 Brake ECU
16 Millimeter wave radar device
17 Program



Claims (5)

1. A closed-eye detection apparatus having nostril position determination means for determining a nostril position from feature points extracted from a face image, and opening/closing detection means for setting an eyeball tracking region with reference to the nostril position and searching for an eye in the eyeball tracking region,
the apparatus having mouth position determination means for determining a mouth position when no nostril is detected,
wherein the opening/closing detection means sets an eyeball tracking region with reference to the mouth position and searches for an eye in the eyeball tracking region.
2. A closed-eye detection apparatus having nostril position determination means for determining a nostril position from feature points extracted from a face image, and opening/closing detection means for setting an eyeball tracking region with reference to the nostril position and searching for an eye in the eyeball tracking region,
the apparatus having mouth position determination means for determining a mouth position when no nostril is detected,
wherein the nostril position determination means determines the nostril position with reference to the mouth position determined by the mouth position determination means, and
the opening/closing detection means sets an eyeball tracking region with reference to the nostril position determined with reference to the mouth position and searches for an eye in the eyeball tracking region.
3. A doze detection apparatus comprising:
the closed-eye detection apparatus according to claim 1 or 2;
closed-eye time measuring means for measuring a continuous closed-eye time based on the eye opening detected by the opening/closing detection means; and
warning means for warning a driver when the closed-eye time is equal to or longer than a predetermined time.
4. A closed-eye detection method comprising the steps of:
determining, by nostril position determination means, a nostril position based on feature points extracted from a face image;
determining, by mouth position determination means, a mouth position when the nostril position cannot be determined; and
setting, by opening/closing detection means, an eyeball tracking region with reference to the mouth position and searching for an eye in the eyeball tracking region.
5. A program for closed-eye detection that causes a computer to execute the steps of:
determining, by nostril position determination means, a nostril position based on feature points extracted from a face image;
determining, by mouth position determination means, a mouth position when the nostril position cannot be determined; and
setting, by opening/closing detection means, an eyeball tracking region with reference to the mouth position and searching for an eye in the eyeball tracking region.
JP2007023381A 2007-02-01 2007-02-01 Closed eye detection device, dozing detection device, closed eye detection method, and closed eye detection program Expired - Fee Related JP4781292B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007023381A JP4781292B2 (en) 2007-02-01 2007-02-01 Closed eye detection device, dozing detection device, closed eye detection method, and closed eye detection program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007023381A JP4781292B2 (en) 2007-02-01 2007-02-01 Closed eye detection device, dozing detection device, closed eye detection method, and closed eye detection program

Publications (2)

Publication Number Publication Date
JP2008191785A true JP2008191785A (en) 2008-08-21
JP4781292B2 JP4781292B2 (en) 2011-09-28

Family

ID=39751851

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007023381A Expired - Fee Related JP4781292B2 (en) 2007-02-01 2007-02-01 Closed eye detection device, dozing detection device, closed eye detection method, and closed eye detection program

Country Status (1)

Country Link
JP (1) JP4781292B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542727A (en) * 2011-12-08 2012-07-04 中国船舶重工集团公司第七一一研究所 Monitoring system and method for preventing operator on duty on ship from napping
CN103679126A (en) * 2012-09-24 2014-03-26 由田新技股份有限公司 Eye searching method and eye state detection device and eye searching device
EP3206162A1 (en) 2016-02-15 2017-08-16 Renesas Electronics Corporation Eye opening degree detection system, doze detection system, automatic shutter system, eye opening degree detection method, and eye opening degree detection program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0863597A (en) * 1994-08-22 1996-03-08 Konica Corp Face extracting method
JP2000105829A (en) * 1998-09-29 2000-04-11 Matsushita Electric Ind Co Ltd Method and device for face parts image detection
JP2002015322A (en) * 2000-06-29 2002-01-18 Nissan Motor Co Ltd Inattentive driving condition detection device
JP2003006645A (en) * 2001-06-20 2003-01-10 Secom Co Ltd Face image collating device for identity authentication
JP2008065705A (en) * 2006-09-08 2008-03-21 Toyota Motor Corp Dozing detector and dozing detecting method

Also Published As

Publication number Publication date
JP4781292B2 (en) 2011-09-28

Similar Documents

Publication Publication Date Title
JP4677963B2 (en) Dozing detection device, dozing detection method
JP4845698B2 (en) Eye detection device, eye detection method, and program
JP4989249B2 (en) Eye detection device, dozing detection device, and method of eye detection device
EP2074550B1 (en) Eye opening detection system and method of detecting eye opening
WO2016038784A1 (en) Driver state determination apparatus
JP4655035B2 (en) Dozing detection device, dozing detection method
JP3768735B2 (en) Face image processing device
US9177202B2 (en) Red-eye detection device
JPH0944685A (en) Face image processor
JP4788624B2 (en) Careless warning device, vehicle equipment control method of careless warning device, and vehicle control device program
JP2009219555A (en) Drowsiness detector, driving support apparatus, drowsiness detecting method
JP2010191793A (en) Alarm display and alarm display method
KR100766592B1 (en) drowsiness detection using reflected image on eye iris
JP4082203B2 (en) Open / close eye determination device
RU2413632C2 (en) Method to prevent driver falling asleep
JP4781292B2 (en) Closed eye detection device, dozing detection device, closed eye detection method, and closed eye detection program
JP2000123188A (en) Eye open/close discriminating device
JP2004220080A (en) Eye opening/closing determining device
JP2004192552A (en) Eye opening/closing determining apparatus
JP5050794B2 (en) Sleepiness detection device, sleepiness detection method
JPH1044824A (en) Driver's-eye open/closed state determination device for vehicle
JP2008191784A (en) Eye closure detection apparatus, doze detection apparatus, eye closure detection method, and program for eye closure detection
JP4692447B2 (en) Dozing detection device, dozing detection method
JP5493676B2 (en) Eye position recognition device
KR200392762Y1 (en) Apparatus for sensing drowsiness based on infrared facial image

Legal Events

Date Code Title Description
A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A711

Effective date: 20081218

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A821

Effective date: 20081218

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100120

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110412

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110609

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110628

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110705

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140715

Year of fee payment: 3

R151 Written notification of patent or utility model registration

Ref document number: 4781292

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees