TW202409905A - Method for detecting car driver drowsiness at night using Taguchi method capable of quickly designing new combinations to increase the accuracy of detection and shortening time for development - Google Patents

Method for detecting car driver drowsiness at night using Taguchi method capable of quickly designing new combinations to increase the accuracy of detection and shortening time for development Download PDF

Info

Publication number
TW202409905A
Authority
TW
Taiwan
Prior art keywords
night
eye
infrared
ratio
aspect ratio
Prior art date
Application number
TW111132351A
Other languages
Chinese (zh)
Other versions
TWI799343B (en)
Inventor
洪穎怡
邱品誠
Original Assignee
中原大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中原大學 filed Critical 中原大學
Priority to TW111132351A priority Critical patent/TWI799343B/en
Application granted granted Critical
Publication of TWI799343B publication Critical patent/TWI799343B/en
Publication of TW202409905A publication Critical patent/TW202409905A/en

Links

Landscapes

  • Lighting Device Outwards From Vehicle And Optical Signal (AREA)
  • Traffic Control Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

A method for detecting car driver drowsiness at night using the Taguchi method is disclosed. The method first detects the driver's face, then locates facial feature points to find the positions of the left eye, right eye, mouth, chin, nose, left eyebrow and right eyebrow in the image, and determines whether the driver is dozing off from the eye aspect ratio. Because recognition accuracy is low at night, the disclosed method uses the Taguchi method to conduct night-time experiments and achieve a robust design under an uncertain environment. In the Taguchi experiment, the eye aspect ratio threshold, the infrared intensity of the infrared light source, the image color, and the frame size are set as control factors; the distance from the infrared night vision camera and the infrared lamp to the eyes is the input; wearing or not wearing glasses is the interference (noise) factor; and the accuracy rate is the output. After the resulting device design is implemented on an embedded system, the accuracy rate is greatly improved.

Description

Method for detecting car driver drowsiness at night using the Taguchi method

The present invention relates to a method that applies the Taguchi method to night-time drowsiness detection for car drivers. More particularly, it uses Taguchi orthogonal-array experiments to quickly design new factor combinations that increase detection accuracy and shorten development time, and, even in uncertain environments, it obtains the best combination of robust control factors (design parameters) while maximizing the signal-to-noise ratio (S/N) and taking quality characteristics into account.

Vehicles are used more and more in today's society; almost every household owns a small car, and cars are needed for daily commuting and holiday travel, so driving safety is very important. A traffic accident during a trip can injure or kill the driver or passengers. Among the top ten causes of traffic accidents in 2021 (ROC year 110) were failure to yield as required, failure to pay attention to conditions ahead of the vehicle, and improper left turns, all of which are often caused by the driver's poor mental state. According to statistics from the U.S. National Highway Traffic Safety Administration (NHTSA), drowsy driving on highways caused 697 deaths in 2019, or 1.9% of all traffic fatalities, which is a considerable share. A drowsiness detection device in the car can therefore effectively prevent accidents caused by the driver nodding off.

The conventional solution is to find the design parameters of a drowsiness detection device by trial and error. Such experimentation not only takes considerable time, but also does not guarantee that the resulting design parameters remain usable in uncertain environments, for example different distances between the driver's face and the camera, different ambient illumination, or a driver who may or may not wear glasses. Because this conventional approach does not select the product control factors systematically, it cannot achieve an optimized product pass (accuracy) rate.

Drowsiness detection is currently used widely in the automotive industry, because driving demands a high degree of attention to prevent accidents, so drowsiness detection is well suited to this field. Installing a drowsiness detection device in a car's driver-assistance system not only prevents driving accidents but can also effectively increase annual car sales. It is therefore necessary to develop an invention that overcomes the shortcomings of the prior art and applies the Taguchi method to night-time driver drowsiness detection implemented on an embedded system.

The main purpose of the present invention is to overcome the above problems of the prior art and to provide a method for detecting car driver drowsiness at night using the Taguchi method which, by means of Taguchi orthogonal-array experiments, can quickly design new factor combinations to increase detection accuracy and shorten development time, and which, even in uncertain environments, obtains the best combination of robust control factors (design parameters) while maximizing the S/N ratio and taking quality characteristics into account.

To achieve the above purpose, the present invention is a method that applies the Taguchi method to night-time drowsiness detection of car drivers. It is applied to a drowsiness detection device comprising a micro single-board computer, an infrared night vision camera and an infrared lamp, and performs night-time nodding-off detection of the driver by detecting the face and facial features automatically, in real time and continuously, and computing the eye aspect ratio to determine whether the driver is dozing off. The method comprises at least the following steps:

Detection environment establishment step: the infrared night vision camera and the infrared lamp are used to photograph the driver's face at night; the camera and the lamp are located in front of the driver's seat, and the camera is connected to the micro single-board computer.

Step of selecting factors that affect the quality characteristic: several control factors that affect the quality characteristic are selected. These control factors are controllable parameters and include the eye aspect ratio (EAR) threshold, the infrared light source intensity (mW/cm²), the image color, and the frame size.

Step of deciding the levels of each control factor: each control factor has three levels. The EAR threshold varies over 0.21, 0.22 and 0.23; the infrared intensity of the infrared light source over 4, 5 and 6 mW/cm²; the image color over grayscale, color and black-and-white; and the frame size over 520, 620 and 720 pixels.

Orthogonal-array selection step: an orthogonal array is built from the control factors and their levels determined in the previous step; the selected array is L9(3^4). The distance from the infrared night vision camera and the infrared lamp to the eyes is the input, wearing or not wearing glasses is the interference factor, and the accuracy rate of the corresponding measurements obtained under these inputs and interference factors is the output. The accuracy rates are recorded for each run of the orthogonal-array experiment.

Experimental data and S/N ratio calculation step: using the L9(3^4) Taguchi orthogonal array, the orthogonal experiment is carried out, the mean and standard deviation of the measured values (accuracy rates) of each run are computed, and the S/N ratio is then computed from this mean and standard deviation to quantify the variation of the measured values.

Factor response analysis step: the S/N ratios or quality characteristics are entered into a factor response table, grouped by level, and averaged per level to obtain a new factor response table of the S/N ratio or of the quality characteristic. From this table the four control factors (the EAR threshold, the infrared light source, the image color and the frame size) are classified so as to maximize the S/N ratio, and the maximum is taken by analysis of means to obtain a new optimal control factor combination: an EAR threshold of 0.22, an infrared intensity of 4 mW/cm², grayscale image color and a frame size of 620 pixels.

Design result confirmation step: the new optimal control factor combination is tested again in the night-time driver drowsiness detection experiment; compared with the original design it yields a higher S/N ratio, confirming that the new combination is the optimal and robust control factor combination.

In the above embodiment of the present invention, the micro single-board computer further includes a touch-screen display module.

In the above embodiment of the present invention, the distance from the infrared night vision camera to the eyes is equal to the distance from the infrared lamp to the eyes.

In the above embodiment of the present invention, the distance from the infrared night vision camera and the infrared lamp to the eyes is 55 to 65 cm.

In the above embodiment of the present invention, the micro single-board computer contains an application library that continuously and automatically detects the face and eyes in the captured facial images; through facial feature-point localization it finds the positions of the left eye, right eye, mouth, chin, nose, left eyebrow and right eyebrow in the image in order to compute the eye aspect ratio.

In the above embodiment of the present invention, the micro single-board computer uses the frontal face detection and facial landmark extraction functions provided by the Dlib-ml program module in the application library to locate the eye coordinates and compute the eye aspect ratio, and then uses the convex hull function provided by the OpenCV program module to draw the eye contour.

In the above embodiment of the present invention, the eye aspect ratio is defined as

EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)

where p1 is the leftmost point of the eye (white of the eye), p2 the upper-left point of the iris, p3 the upper-right point of the iris, p4 the rightmost point of the eye (white of the eye), p5 the lower-right point of the iris, and p6 the lower-left point of the iris.

In the above embodiment of the present invention, the mean ȳ of the measured values (accuracy rates), their standard deviation S, and the S/N ratio are given by

ȳ = (1/n) Σᵢ yᵢ;  S = sqrt( (1/(n−1)) Σᵢ (yᵢ − ȳ)² );  S/N = 10·log₁₀( ȳ² / S² )

where yᵢ is the i-th measured value.

Please refer to Figures 1 to 4, which are, respectively, a schematic diagram of the architecture of the drowsiness detection device of the present invention, a schematic diagram of the detection flow of the proposed drowsiness detection method, photographs of open-eye and closed-eye detection, and a photograph of the eye aspect ratio. As shown in the figures, the present invention is a method that applies the Taguchi Method to night-time nodding-off detection of car drivers. A drowsiness detection device 100 is implemented with a Raspberry Pi 4 Model B as the micro single-board computer 1, a Raspberry Pi NoIR Camera V2 8MP as the infrared night vision camera 2, and an external 48-bulb 850 nm lamp as the infrared lamp 3, and the drowsiness detection device 100 is installed in a vehicle 4. In operation, control factors and interference factors are first selected, an orthogonal experiment is carried out to obtain the signal-to-noise ratio, and the optimal, robust control factor combination is finally obtained through analysis of means.

The control factors mentioned above are controllable design parameters. The present invention uses the eye aspect ratio (EAR) threshold, the infrared light source intensity (mW/cm²), the image color, and the frame size as the control factors of the experiments. Each control factor has three levels, so the Taguchi L9(3^4) orthogonal array is used for the analysis. The control factor level table designed by the present invention is shown in Table 1, where:

The eye aspect ratio threshold is factor A, and the values 0.21, 0.22 and 0.23 were selected for the experiments. If the threshold is too high, open eyes will be misjudged as closed; if the threshold is too low, closed eyes will be misjudged as open.

The infrared intensity of the infrared light source is factor B, in mW/cm². If the infrared intensity is too weak, the Raspberry Pi NoIR Camera V2 8MP infrared night vision camera cannot recognize the face and eyes; if it is too strong, the camera's image becomes blurred and recognition degrades. The infrared light source is therefore set to 4, 5 and 6 mW/cm².

The image color is factor C. Because infrared recognition at night is poor, grayscale and black-and-white modes are included to reduce the effect of lighting on face and eye recognition. Experiments are therefore run with grayscale, color, and black-and-white images.

The frame size is factor D. The frame size affects face and eye interpretation and the program's running speed, so frame sizes of 520, 620 and 720 pixels are used in the experiments.

Table 1
| Factor | Description | Level 1 | Level 2 | Level 3 |
|---|---|---|---|---|
| A | Eye aspect ratio threshold | 0.21 | 0.22 | 0.23 |
| B | Infrared light source (mW/cm²) | 4 | 5 | 6 |
| C | Image color | Grayscale | Color | Black and white |
| D | Frame size (pixel) | 520 | 620 | 720 |

The following embodiments are only examples to aid understanding of the details and substance of the present invention, and are not intended to limit the scope of the claims of the present invention.

In a preferred embodiment of the present invention, a Taguchi orthogonal experiment is used to design an embedded-system-based detection device for the night-time period. By detecting the face and facial features automatically, in real time and continuously, and computing the eye aspect ratio, the device determines whether the car driver is dozing off, thereby improving the accuracy of night-time drowsiness detection. The proposed method is shown in Figure 2 and comprises at least the following steps:

Detection environment establishment step s11: the infrared night vision camera 2 and the infrared lamp 3 are used to photograph the driver's face at night. The infrared night vision camera 2 and the infrared lamp 3 are located in front of the driver's seat, and the infrared night vision camera 2 is connected to the micro single-board computer 1.

Step s12 of selecting factors that affect the quality characteristic: several control factors that affect the quality characteristic are selected. These control factors are controllable parameters and include the eye aspect ratio (EAR) threshold, the infrared light source intensity (mW/cm²), the image color, and the frame size.

Step s13 of deciding the levels of each control factor: each control factor has three levels. The EAR threshold varies over 0.21, 0.22 and 0.23; the infrared intensity of the infrared light source over 4, 5 and 6 mW/cm²; the image color over grayscale, color and black-and-white; and the frame size over 520, 620 and 720 pixels.

Orthogonal-array selection step s14: an orthogonal array is built from the control factors and their levels determined in step s13; the selected array is L9(3^4). The distance from the infrared night vision camera 2 and the infrared lamp 3 to the eyes is the input, wearing or not wearing glasses is the interference factor, the accuracy rate of the corresponding measurements obtained under these inputs and interference factors is the output, and the accuracy rates are recorded for each run of the orthogonal-array experiment.
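As an illustration of this step, the sketch below writes out the nine runs of the L9(3^4) array with the same column assignments as Table 2 of the description, mapped onto the factor levels of Table 1. It is an illustrative Python sketch; the dictionary keys and the helper name are choices of this example, not of the patent.

```python
# Minimal sketch: the L9(3^4) orthogonal array used in the experiments (Table 2)
# and its mapping onto the control-factor levels of Table 1.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

levels = {
    "A_ear_threshold":      {1: 0.21, 2: 0.22, 3: 0.23},
    "B_ir_intensity_mWcm2": {1: 4, 2: 5, 3: 6},
    "C_image_color":        {1: "grayscale", 2: "color", 3: "black-and-white"},
    "D_frame_size_px":      {1: 520, 2: 620, 3: 720},
}

def run_settings(run_index: int) -> dict:
    """Return the concrete factor settings for run 1..9 of the L9 array."""
    a, b, c, d = L9[run_index - 1]
    return {
        "A_ear_threshold": levels["A_ear_threshold"][a],
        "B_ir_intensity_mWcm2": levels["B_ir_intensity_mWcm2"][b],
        "C_image_color": levels["C_image_color"][c],
        "D_frame_size_px": levels["D_frame_size_px"][d],
    }

if __name__ == "__main__":
    for i in range(1, 10):
        print(i, run_settings(i))
```

Calling run_settings(2), for example, returns the settings of the second run: threshold 0.21, 5 mW/cm², color image, and a 620-pixel frame.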

Experimental data and S/N ratio calculation step s15: using the L9(3^4) Taguchi orthogonal array, the orthogonal experiment is carried out, the mean and standard deviation of the measured values (accuracy rates) of each run are computed, and the signal-to-noise ratio (S/N) is then computed from this mean and standard deviation to quantify the variation of the measured values.

Factor response analysis step s16: the S/N ratios or quality characteristics are entered into a factor response table, grouped by level, and averaged per level to obtain a new factor response table of the S/N ratio or of the quality characteristic. From this table the four control factors (the EAR threshold, the infrared light source, the image color and the frame size) are classified so as to maximize the S/N ratio, and the maximum is taken by analysis of means to obtain a new optimal control factor combination: an EAR threshold of 0.22, an infrared intensity of 4 mW/cm², grayscale image color and a frame size of 620 pixels.

Design result confirmation step s17: the new optimal control factor combination is tested again in the night-time driver drowsiness detection experiment; compared with the original design it yields a higher S/N ratio, confirming that the new combination is the optimal and robust control factor combination. The process disclosed above thus constitutes a new method for detecting car driver drowsiness at night using the Taguchi method.

In experimental step s11 described above, the present invention uses the Raspberry Pi NoIR Camera V2 8MP infrared night vision camera 2 together with the external 48-bulb 850 nm infrared lamp 3 to capture images at night, and the micro single-board computer 1 further includes a touch-screen display module 11 with an application library 12. The facial images are processed by the application library 12 of the Raspberry Pi 4 Model B micro single-board computer 1, which continuously and automatically detects the face and its related features; through facial feature-point localization it finds the positions of the left eye, right eye, mouth, chin, nose, left eyebrow and right eyebrow in the image to compute the eye aspect ratio. The experimental architecture is shown in Figure 1. The present invention uses the frontal face detection and facial landmark extraction functions provided by the Dlib-ml program module 121 of the application library 12 to locate the eye coordinates and compute the eye aspect ratio, then uses the convex hull function provided by the OpenCV program module 122 to draw the eye contour, and finally flashes these program modules 121, 122 onto the Raspberry Pi 4 Model B micro single-board computer 1 for real-time detection, as shown in Figures 3 and 4.
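A real-time pipeline of the kind described above is commonly assembled as in the sketch below, using dlib's frontal face detector and 68-point landmark predictor together with OpenCV's convex hull drawing. This is only an illustrative sketch, not the patented implementation: the landmark model file name, camera index 0, and the 68-point eye index ranges are assumptions of this example and are not specified in the patent text.

```python
# Illustrative dlib + OpenCV eye-contour pipeline (not the patented code).
# Assumed: "shape_predictor_68_face_landmarks.dat" and camera index 0.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
LEFT_EYE, RIGHT_EYE = slice(42, 48), slice(36, 42)  # 68-point landmark index ranges

def eye_points(shape, idx):
    # Collect the six eye landmarks as int32 points for OpenCV.
    return np.array([(shape.part(i).x, shape.part(i).y)
                     for i in range(idx.start, idx.stop)], dtype=np.int32)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # grayscale, the optimal C level
    for face in detector(gray, 0):
        shape = predictor(gray, face)
        for idx in (LEFT_EYE, RIGHT_EYE):
            hull = cv2.convexHull(eye_points(shape, idx))  # convex hull of the eye
            cv2.drawContours(frame, [hull], -1, (0, 255, 0), 1)
    cv2.imshow("drowsiness", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```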

From Figure 3, the eye aspect ratio is given by

EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)

where p1 is the leftmost point of the eye (white of the eye), p2 the upper-left point of the iris, p3 the upper-right point of the iris, p4 the rightmost point of the eye (white of the eye), p5 the lower-right point of the iris, and p6 the lower-left point of the iris.
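The formula translates directly into a few lines of code. The sketch below is illustrative only; the function name and the toy point coordinates are assumptions of this example.

```python
# Minimal sketch: eye aspect ratio from the six eye points p1..p6 defined above.
import numpy as np

def eye_aspect_ratio(pts: np.ndarray) -> float:
    """pts is a (6, 2) array ordered p1..p6 as described in the text."""
    p1, p2, p3, p4, p5, p6 = pts
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return float(vertical / horizontal)

# Toy example: a wide-open eye gives a larger EAR than a nearly closed one.
open_eye   = np.array([[0, 3], [2, 0], [4, 0], [6, 3], [4, 6], [2, 6]], dtype=float)
closed_eye = np.array([[0, 3], [2, 2.5], [4, 2.5], [6, 3], [4, 3.5], [2, 3.5]], dtype=float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```

With these toy points the wide-open eye returns an EAR of 1.0 and the nearly closed eye about 0.17, which is why a threshold around 0.21 to 0.23 separates the two states.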

For the implementation of the experiments, the orthogonal array is first selected; the selected array is L9(3^4). The input is the distance from the eyes to the infrared night vision camera and to the infrared lamp (the two distances are equal), set to 55, 60 and 65 cm. The interference factor is wearing glasses versus not wearing glasses, and the output is the accuracy rate (a percentage, denoted y). After the experiment, the mean ȳ of the n measured values (accuracy rates) is computed as in equation (1), where yᵢ is the i-th accuracy rate:

ȳ = (1/n) Σᵢ₌₁ⁿ yᵢ  (1)

Next, S, the standard deviation of the n measured values, is computed as in equation (2):

S = sqrt( (1/(n−1)) Σᵢ₌₁ⁿ (yᵢ − ȳ)² )  (2)

Finally, the S/N ratio, which expresses the variation of the measured values, is computed as in equation (3):

S/N = 10·log₁₀( ȳ² / S² )  (3)

The mean accuracy rate, standard deviation and S/N ratio of each run are shown in Table 2.

Table 2
| Exp. | A | B | C | D | 55 cm, glasses | 55 cm, no glasses | 60 cm, glasses | 60 cm, no glasses | 65 cm, glasses | 65 cm, no glasses | ȳ | S | S/N |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 0.82 | 0.93 | 0.91 | 0.99 | 0.96 | 0.92 | 0.922 | 0.058 | 24.06 |
| 2 | 1 | 2 | 2 | 2 | 0.96 | 0.97 | 0.96 | 0.98 | 0.93 | 0.91 | 0.952 | 0.026 | 31.14 |
| 3 | 1 | 3 | 3 | 3 | 0.81 | 0.81 | 0.84 | 0.99 | 0.88 | 0.92 | 0.875 | 0.071 | 21.86 |
| 4 | 2 | 1 | 2 | 3 | 0.91 | 0.92 | 0.92 | 0.92 | 0.87 | 0.94 | 0.913 | 0.023 | 31.84 |
| 5 | 2 | 2 | 3 | 1 | 0.91 | 0.89 | 0.95 | 0.95 | 0.85 | 0.92 | 0.912 | 0.038 | 27.56 |
| 6 | 2 | 3 | 1 | 2 | 0.92 | 0.95 | 0.98 | 0.94 | 0.92 | 0.90 | 0.935 | 0.028 | 30.44 |
| 7 | 3 | 1 | 3 | 2 | 0.89 | 0.88 | 0.91 | 0.91 | 0.92 | 0.86 | 0.895 | 0.023 | 31.96 |
| 8 | 3 | 2 | 1 | 3 | 0.93 | 0.92 | 0.84 | 0.89 | 0.93 | 0.90 | 0.902 | 0.034 | 28.39 |
| 9 | 3 | 3 | 2 | 1 | 0 | 1 | 0 | 0.98 | 0 | 0.95 | 0.488 | 0.535 | -0.80 |

Mean S/N: 25.1617
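For illustration, the short sketch below implements equations (1) to (3) in the nominal-the-best form given above and checks them against run 1 of Table 2; the helper names are arbitrary choices of this example.

```python
# Minimal sketch of equations (1)-(3): mean, standard deviation and S/N ratio
# for the six accuracy measurements of one run of the orthogonal experiment.
import math

def mean(y):
    return sum(y) / len(y)

def stdev(y):
    m = mean(y)
    return math.sqrt(sum((v - m) ** 2 for v in y) / (len(y) - 1))

def sn_ratio(y):
    return 10.0 * math.log10(mean(y) ** 2 / stdev(y) ** 2)

# Run 1 of Table 2: distances 55/60/65 cm, with and without glasses.
run1 = [0.82, 0.93, 0.91, 0.99, 0.96, 0.92]
print(round(mean(run1), 3), round(stdev(run1), 3), round(sn_ratio(run1), 2))
# -> 0.922 0.058 24.06, matching the first row of Table 2.
```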

The factor response is the effect that a change in a control factor has on the S/N ratio or on the quality characteristic. For example, when factor A moves from Level 1 to Level 2, that is, when the threshold changes from 0.21 to 0.22, the average change in the S/N ratio (or quality characteristic) is called the factor response of A and is written E_A(1→2); likewise, the average change when factor B moves from Level 2 to Level 3 is written E_B(2→3). Here Ā₁ is the average of the S/N ratios of all runs with factor A at Level 1, and Ā₂ the average of all runs with factor A at Level 2, computed as follows:

Ā₁ = (24.06 + 31.14 + 21.86) / 3 = 25.686
Ā₂ = (31.84 + 27.56 + 30.44) / 3 = 29.946
E_A(1→2) = Ā₂ − Ā₁ = 29.946 − 25.686 = 4.260

Repeating the calculation for all combinations gives the S/N factor response table shown in Table 3.

Table 3
| | A | B | C | D |
|---|---|---|---|---|
| Level 1 | 25.69 | 29.28 | 27.63 | 16.94 |
| Level 2 | 29.95 | 29.03 | 20.73 | 31.18 |
| Level 3 | 19.85 | 17.17 | 27.13 | 27.36 |
| E(1→2) | 4.26 | -0.25 | -6.90 | 14.24 |
| E(2→3) | -10.09 | -11.86 | 6.40 | -3.82 |
| Range | 10.09 | 12.12 | 6.90 | 14.24 |
| Rank | 3 | 2 | 4 | 1 |
| Significant | NO | YES | NO | YES |
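The per-level averages and ranges behind Table 3 can be reproduced as in the sketch below, which reuses the L9 column assignments and the S/N values of Table 2; it is given only as an illustration of the analysis of means, and the variable names are arbitrary.

```python
# Minimal sketch: per-level S/N averages (factor response) underlying Table 3.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]
sn = [24.06, 31.14, 21.86, 31.84, 27.56, 30.44, 31.96, 28.39, -0.80]

response = {}
for col, name in enumerate("ABCD"):
    level_means = []
    for level in (1, 2, 3):
        vals = [sn[i] for i, run in enumerate(L9) if run[col] == level]
        level_means.append(sum(vals) / len(vals))
    response[name] = {
        "levels": [round(m, 2) for m in level_means],
        "range": round(max(level_means) - min(level_means), 2),
        "best_level": level_means.index(max(level_means)) + 1,
    }

for name, r in response.items():
    print(name, r)
```

The level with the highest average S/N for each factor comes out as A2, B1, C1 and D2, and ranking the ranges reproduces, up to rounding, the Range and Rank rows of Table 3.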

From the above, when a change in a factor has a significant effect on the S/N ratio (or on the quality characteristic), that factor is called an important factor. In statistics, analysis of variance is used to determine the importance of factors; here the present invention uses the one-half rule to decide which factors are important.

Next is the factor response table of the quality characteristic, shown in Table 4.

Table 4
| | A | B | C | D |
|---|---|---|---|---|
| Level 1 | 0.916 | 0.910 | 0.919 | 0.774 |
| Level 2 | 0.920 | 0.922 | 0.784 | 0.927 |
| Level 3 | 0.762 | 0.766 | 0.894 | 0.897 |
| E(1→2) | 0.004 | 0.012 | -0.135 | 0.153 |
| E(2→3) | -0.158 | -0.156 | 0.109 | -0.031 |
| Range | 0.158 | 0.156 | 0.135 | 0.153 |
| Rank | 1 | 2 | 4 | 3 |
| Significant | YES | YES | NO | NO |

Factors of the first category influence the S/N ratio; they are used to maximize the S/N ratio, that is, to reduce variation. Factors of the second category influence the quality characteristic; they are used to adjust the mean of the quality characteristic to the target value without changing its variation, and are called adjustment factors. Factors of the third category influence neither the S/N ratio nor the quality characteristic; they are used to reduce cost. The classification of the control factors is shown in Table 5.

Table 5
| Factor category | Affects S/N? | Affects quality characteristic? | Control factors | Use |
|---|---|---|---|---|
| 1 | YES | YES/NO | B, D | Used to reduce variation |
| 2 | NO | YES | A | Used to adjust the quality characteristic to the target value |
| 3 | NO | NO | C | Used to reduce cost |

The process is then optimized. First, the category-1 control factors (B, D) are set so as to maximize the S/N ratio:

A?  B1  C?  D2

Next, the category-2 control factor (A) is set so that the quality characteristic reaches the target value:

A2  B1  C?  D2

Finally, the category-3 control factor (C) is set; since it has no effect on either response, it is used to reduce cost, giving:

A2  B1  C1  D2

That is, the optimal robust design parameters are an eye aspect ratio threshold (control factor A) of 0.22 (Level 2), an infrared light source intensity (control factor B) of 4 mW/cm² (Level 1), grayscale image color (control factor C, Level 1), and a frame size (control factor D) of 620 pixels (Level 2). Experiment 10 in Table 6 is the final accuracy confirmation experiment for the optimal robust design.

Table 6
| Exp. | A | B | C | D | 55 cm, glasses | 55 cm, no glasses | 60 cm, glasses | 60 cm, no glasses | 65 cm, glasses | 65 cm, no glasses | ȳ | S | S/N |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 2 | 1 | 1 | 2 | 0.94 | 0.93 | 0.94 | 0.92 | 0.94 | 0.91 | 0.93 | 0.012 | 37.32 |
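As a quick check, the confirmation run of Table 6 can be recomputed with the same statistics as equations (1) to (3); the small differences from the tabulated S and S/N come from rounding.

```python
# Self-contained check of the confirmation run (experiment 10, Table 6),
# using the same nominal-the-best S/N form as equations (1)-(3).
import math
import statistics

run10 = [0.94, 0.93, 0.94, 0.92, 0.94, 0.91]
m = statistics.mean(run10)
s = statistics.stdev(run10)              # sample standard deviation (n - 1)
sn = 10 * math.log10(m ** 2 / s ** 2)
print(round(m, 2), round(s, 3), round(sn, 2))
# -> 0.93 0.013 37.33; Table 6 lists 0.012 and 37.32 (rounding), and this S/N
#    is higher than every S/N value of the nine runs in Table 2.
```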

Experiment 10 in Table 6 has a higher S/N ratio than any of the nine experiments in Table 2, so experiment 10 is the optimal and robust control factor combination.

As described above, the present invention first detects the car driver's face and then, through facial feature-point localization, finds the positions of the left eye, right eye, mouth, chin, nose, left eyebrow and right eyebrow in the image, so that the eye aspect ratio can be used to determine whether the driver is dozing off. Because recognition accuracy at night is low, the present invention conducts the night-time experiments with the Taguchi method to achieve a robust design under an uncertain environment. In the Taguchi experiment, the aspect ratio threshold, the infrared intensity of the infrared light source, the image color and the frame size are set as control factors, the distance from the infrared night vision camera and the infrared lamp to the eyes is the input, wearing or not wearing glasses is the interference factor, and the accuracy rate is the output. After the resulting device design was implemented on the embedded system, the accuracy rate was confirmed to be greatly improved.

The present invention can thus be applied to the design of a night-time drowsiness detection device for car drivers. A Taguchi orthogonal experiment is proposed to determine the control factors of the drowsiness detection device and their possible levels; the resulting optimal robust combination of control factors (design parameters), namely the eye aspect ratio threshold, the infrared intensity of the infrared light source, the image color and the frame size, keeps the device's recognition accuracy high in uncertain environments. These optimal robust design parameters are used together with a micro single-board computer, an infrared night vision camera, an infrared lamp and an application library that detects the eye aspect ratio.

In summary, the present invention is a method that applies the Taguchi method to night-time drowsiness detection of car drivers. It effectively remedies the shortcomings of conventional approaches and, even in uncertain environments, obtains the best combination of robust control factors (design parameters) while maximizing the S/N ratio and taking quality characteristics into account. Because Taguchi orthogonal experiments are used, a new optimal combination of control factors for night-time driver drowsiness detection can be designed quickly to increase detection accuracy, which shortens development time and makes the invention more advanced, more practical and better suited to users' needs. It therefore meets the requirements for an invention patent, and a patent application is filed in accordance with the law.

However, the above is only a preferred embodiment of the present invention and should not be used to limit the scope of its implementation; any simple equivalent change or modification made according to the claims and the description of the present invention shall still fall within the scope of the patent of the present invention.

100: drowsiness detection device; 1: micro single-board computer; 11: touch-screen display module; 12: application library; 121: Dlib-ml program module; 122: OpenCV program module; 2: infrared night vision camera; 3: infrared lamp; 4: vehicle; s11~s17: steps

Figure 1 is a schematic diagram of the architecture of the drowsiness detection device of the present invention. Figure 2 is a schematic diagram of the detection flow of the proposed drowsiness detection method. Figure 3 is a photograph of open-eye and closed-eye detection of the present invention. Figure 4 is a photograph of the eye aspect ratio of the present invention.

s11~s17: steps

Claims (8)

1. A method for detecting car driver drowsiness at night using the Taguchi method, applied to a drowsiness detection device and performing night-time nodding-off detection of the car driver with a micro single-board computer, an infrared night vision camera and an infrared lamp of the drowsiness detection device, the face and facial features being detected automatically, in real time and continuously, and the eye aspect ratio being computed to determine whether the car driver is dozing off, the method comprising at least the following steps:
a detection environment establishment step: using the infrared night vision camera together with the infrared lamp to photograph the driver's face at night, wherein the infrared night vision camera and the infrared lamp are located in front of the driver's seat and the infrared night vision camera is connected to the micro single-board computer;
a step of selecting factors that affect the quality characteristic: selecting several control factors that affect the quality characteristic, the control factors being controllable parameters including an eye aspect ratio (EAR) threshold, an infrared light source intensity (mW/cm²), an image color, and a frame size;
a step of deciding the levels of each control factor: each control factor having three levels, wherein the EAR threshold varies over 0.21, 0.22 and 0.23, the infrared intensity of the infrared light source over 4, 5 and 6 mW/cm², the image color over grayscale, color and black-and-white, and the frame size over 520, 620 and 720 pixels;
an orthogonal-array selection step: building an orthogonal array from each control factor and its levels determined in the preceding step, the selected array being L9(3^4), with the distance from the infrared night vision camera and the infrared lamp to the eyes as the input and wearing or not wearing glasses as the interference factor, obtaining an accuracy rate of the corresponding measured values as the output according to the inputs and the interference factors, and recording the accuracy rates according to the orthogonal-array experiment;
an experimental data and S/N ratio calculation step: carrying out the orthogonal experiment with the L9(3^4) Taguchi orthogonal array, computing the mean and standard deviation of the measured values (accuracy rates) obtained in each run of the array, and then computing the signal-to-noise ratio (S/N) from the mean and the standard deviation of the measured values to obtain the degree of variation of the measured values;
a factor response analysis step: entering the S/N ratios or quality characteristics into a factor response table, grouping them by level and averaging per level to obtain a new factor response table of the S/N ratio or of the quality characteristic, classifying the four control factors (the EAR threshold, the infrared light source, the image color and the frame size) from this table so as to maximize the S/N ratio, and taking the maximum by analysis of means to obtain a new optimal control factor combination, wherein an EAR threshold of 0.22, an infrared intensity of the infrared light source of 4 mW/cm², a grayscale image color and a frame size of 620 pixels constitute the new optimal control factor combination; and
a design result confirmation step: repeating the night-time driver drowsiness detection experiment with the new optimal control factor combination and, compared with the original design, obtaining a higher S/N ratio, thereby confirming that the new optimal control factor combination is the optimal and robust control factor combination.

2. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1, wherein the micro single-board computer further includes a touch-screen display module.

3. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1, wherein the distance from the infrared night vision camera to the eyes is equal to the distance from the infrared lamp to the eyes.

4. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1 or 3, wherein the distance from the infrared night vision camera and the infrared lamp to the eyes is 55 to 65 cm.

5. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1, wherein the micro single-board computer contains an application library that continuously and automatically detects the face and eyes in the captured facial images and, through facial feature-point localization, finds the positions of the left eye, right eye, mouth, chin, nose, left eyebrow and right eyebrow in the image to compute the eye aspect ratio.

6. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 5, wherein the micro single-board computer uses the frontal face detection and facial landmark extraction functions provided by the Dlib-ml program module of the application library to locate the eye coordinates and compute the eye aspect ratio, and then uses the convex hull function provided by the OpenCV program module to draw the eye contour.

7. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1, 5 or 6, wherein the eye aspect ratio is given by EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖), where p1 is the leftmost point of the eye (white of the eye), p2 the upper-left point of the iris, p3 the upper-right point of the iris, p4 the rightmost point of the eye (white of the eye), p5 the lower-right point of the iris, and p6 the lower-left point of the iris.

8. The method for detecting car driver drowsiness at night using the Taguchi method as claimed in claim 1, wherein the mean ȳ of the measured values (accuracy rates), their standard deviation S, and the S/N ratio are given by ȳ = (1/n) Σᵢ yᵢ, S = sqrt((1/(n−1)) Σᵢ (yᵢ − ȳ)²) and S/N = 10·log₁₀(ȳ²/S²), where yᵢ is the i-th measured value.
TW111132351A 2022-08-26 2022-08-26 Application of Taguchi method to the detection method of car driver's sleepiness at night TWI799343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111132351A TWI799343B (en) 2022-08-26 2022-08-26 Application of Taguchi method to the detection method of car driver's sleepiness at night

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111132351A TWI799343B (en) 2022-08-26 2022-08-26 Application of Taguchi method to the detection method of car driver's sleepiness at night

Publications (2)

Publication Number Publication Date
TWI799343B TWI799343B (en) 2023-04-11
TW202409905A true TW202409905A (en) 2024-03-01

Family

ID=86948759

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111132351A TWI799343B (en) 2022-08-26 2022-08-26 Application of Taguchi method to the detection method of car driver's sleepiness at night

Country Status (1)

Country Link
TW (1) TWI799343B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI518644B (en) * 2014-08-26 2016-01-21 國立臺南大學 Method and device for warning driver
TWI536279B (en) * 2015-04-22 2016-06-01 緯創資通股份有限公司 Eye detection method and system
CN106529421A (en) * 2016-10-21 2017-03-22 燕山大学 Emotion and fatigue detecting auxiliary driving system based on hybrid brain computer interface technology
TWI647666B (en) * 2017-08-28 2019-01-11 緯創資通股份有限公司 Drowsiness detection apparatus and drowsiness detection method thereof

Also Published As

Publication number Publication date
TWI799343B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN106585629B (en) A kind of control method for vehicle and device
WO2021129059A1 (en) Sun visor control method, sun visor control system and apparatus
JP2541688B2 (en) Eye position detection device
JP5737400B2 (en) Red eye detector
CN103888680A (en) Method for adjusting exposure time of camera
CN107292251A (en) A kind of Driver Fatigue Detection and system based on human eye state
CN205388828U (en) Motor vehicle far -reaching headlamp is violating regulations to be detected and snapshot system
JP2020145724A (en) Apparatus and method for detecting eye position of operator, imaging apparatus with image sensor of rolling shutter driving method, and illumination control method of same
KR100617777B1 (en) Apparatus and method for detecting driver's eye image in drowsy driving warning apparatus
JP4770218B2 (en) Visual behavior determination device
CN103465825A (en) Vehicle-mounted system and control method thereof
CN113140093A (en) Fatigue driving detection method based on AdaBoost algorithm
Du et al. Driver fatigue detection based on eye state analysis
CN115171024A (en) Face multi-feature fusion fatigue detection method and system based on video sequence
JP4895874B2 (en) Eye state determination device, eye state determination method, and eye state determination program
TW202409905A (en) Method for detecting car driver drowsiness at night using Taguchi method capable of quickly designing new combinations to increase the accuracy of detection and shortening time for development
JP2019046141A (en) Device to monitor drivers, method to monitor drivers, and program
Jimenez-Pinto et al. Driver alert state and fatigue detection by salient points analysis
CN108256397A (en) Localization of iris circle method based on projecting integral
CN106652353A (en) Traffic tool control method and device
JP2000142164A (en) Eye condition sensing device and driving-asleep alarm device
US11272086B2 (en) Camera system, vehicle and method for configuring light source of camera system
CN108801599B (en) Matrix type LED car lamp detection method and device
KR20190044818A (en) Apparatus for monitoring driver and method thereof
CN113807126B (en) Fatigue driving detection method and system, computer equipment and storage medium thereof