WO2019169896A1 - Fatigue state detection method based on facial feature point positioning - Google Patents

Fatigue state detection method based on facial feature point positioning

Info

Publication number
WO2019169896A1
WO2019169896A1 (PCT/CN2018/115772, CN2018115772W)
Authority
WO
WIPO (PCT)
Prior art keywords
face
eye
user
frame
detection
Prior art date
Application number
PCT/CN2018/115772
Other languages
English (en)
French (fr)
Inventor
黄翰
李子龙
郝志峰
Original Assignee
华南理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华南理工大学 filed Critical 华南理工大学
Priority to EP18908483.3A priority Critical patent/EP3742328A4/en
Priority to PCT/CN2018/115772 priority patent/WO2019169896A1/zh
Publication of WO2019169896A1 publication Critical patent/WO2019169896A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B 5/7207 Signal processing for noise prevention, reduction or removal of noise induced by motion artifacts
    • A61B 5/721 Signal processing for removal of noise induced by motion artifacts using a separate sensor to detect motion or using motion information derived from signals other than the physiological signal to be measured
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G06V 40/176 Dynamic expression
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction

Definitions

  • the invention belongs to the technical field of fatigue detection, and relates to a fatigue state detecting method based on facial feature point positioning.
  • Common methods for detecting fatigue state include: brain wave detection, facial state detection, and physiological index detection.
  • existing methods have the following defects: first, methods based on physiological indicators such as brain waves require complicated, non-portable equipment; second, the monitoring instruments are expensive and unsuitable for practical application.
  • the invention has the advantages of high processing speed and high detection accuracy, and can promptly alert the user when fatigue occurs, safeguarding life and property.
  • the invention aims at the deficiencies of the current fatigue driving detection method, and provides a fatigue detection algorithm based on facial feature point positioning.
  • the object of the present invention is to automatically detect the degree of fatigue of a user in real time.
  • the detection is divided into two parts, one is to initialize the fatigue detection algorithm, and the other is to detect the fatigue level according to each frame of the video input.
  • the invention discloses a fatigue state detecting method based on facial feature point positioning, which comprises the following steps:
  • closed-eye detection is performed on each frame of the image input by the camera by means of feature point positioning and binarization processing;
  • step (a) includes the following steps:
  • step (b) includes the following steps:
  • (b-1) traversing the M frames and, based on the prior knowledge that the user is closest to the camera, taking the user's face detection rectangle to be the largest face rectangle detected in each frame; computing the average width W_0, average height H_0 and average gray value GRAY_FACE_0; detecting the facial feature points of the user's face in the first M frames, determining the positions of the left and right pupils, and obtaining the average pupil distance from them;
  • (b-2) from the user's average pupil distance and the size of the face rectangle, determining the radius R_0 of the circular eye approximation region and computing the eye gray values within it; binarizing the circular eye regions of the first M frames with threshold GRAY_FACE_0; the average gray value of the binarized eye regions over the first M frames is the user's average eye gray value GRAY_EYE_0;
  • (b-3) setting the minimum detection width of the face detection box to 0.7W_0 and the minimum detection height to 0.7H_0; initializing the face-detection search range SEARCH_RECT to be centered on the center of the user's face detection box, with width 2W_0 and height 2H_0; updating the eye gray value GRAY_EYE' to GRAY_EYE_0.
  • step (c) includes the following steps:
  • (c-1) face detection is performed within the rectangle SEARCH_RECT; when no face is detected, a face-detection anomaly is reported; when a face is detected, the largest face is selected and recorded as the user's current face rectangle R_f(t);
  • step (d) includes the following steps:
  • step (e) includes the following steps:
  • (e-2) from the user's average pupil distance and the size of the face rectangle, determining the updated circular eye-region radius R_u and computing the eye gray values within it; binarizing the circular eye regions of the M update frames with threshold GRAY_FACE_u; the average gray value of the binarized eye regions is the user's average eye gray value GRAY_EYE_u;
  • (e-4) the updated average eye gray value GRAY_EYE' is GRAY_EYE_u.
  • preferably, M is 25 and N is 5.
  • the present invention has the following advantages and beneficial effects:
  • the face and eye gray-value initialization of the present invention, together with its update strategy, ensures that the invention adapts to changes in illumination with greater robustness and accuracy.
  • the face search range initialization work of the present invention ensures that the present invention can avoid unnecessary calculations, greatly shortens the calculation time, and improves the operation efficiency.
  • the fatigue-value weighting strategy of the present invention, which accumulates the user's eye-closure time over a period, avoids interference caused by the user blinking rapidly over a short time and is therefore more robust.
  • compared with judging blinks by the aspect ratio of an ellipse fitted to the eye, the present invention achieves higher accuracy with lower computational complexity.
  • FIG. 1 is an overall flow chart of a face detection method based on facial feature points according to the present invention.
  • FIG. 2 is a view showing an example of an eye approximation area obtained by positioning a facial feature point according to the present invention.
  • FIG. 3 is a diagram of a situation in which the current frame is detected as open eyes according to the present invention.
  • FIG. 4 is a diagram of a situation in which the current frame is detected as closed eyes according to the present invention.
  • the face detection method based on deep learning includes the following steps:
  • Step (a) requires the user to perform initial data entry work in a quiet and stable situation.
  • the face detection and facial feature point localization algorithms are not restricted, but face detection should meet real-time requirements, with a detection speed of 25 frames/s or more, and facial feature point localization should be accurate and likewise meet real-time requirements.
  • Step (a) mainly comprises the following steps:
  • for all frames in the initialization frame set, the algorithm gathers feature statistics of the face and eyes and performs its initialization, determining the face position search range and the face size range.
  • the main purpose of step (b) is to perform the initialization of the algorithm.
  • the purpose of determining the reference gray value is to provide a criterion for the determination of closed eye detection for subsequent steps.
  • the main purpose of determining the face search range is to reduce the time consumed by face detection: in a practical application scenario such as driving, both the size and the position of the face change relatively little, and these two properties can be exploited to reduce detection time and speed up face detection and facial feature point localization.
  • Step (b) includes the following steps:
  • (b-1) Traverse the 25 frames and, based on the prior knowledge that the user is closest to the camera, take the user's face detection rectangle to be the largest face rectangle detected in each frame; compute the average width W_0, average height H_0 and average gray value GRAY_FACE_0; detect the facial feature points of the user's face in the first 25 frames, record the positions of the left and right pupils, and obtain the average pupil distance from them.
  • (b-2) From the average pupil distance and the size of the face rectangle, determine the radius R_0 of the circular eye approximation region, as shown in FIG. 2; binarize the circular eye regions of the first 25 frames with threshold GRAY_FACE_0; the average gray value of the binarized eye regions over the first 25 frames is the user's average eye gray value GRAY_EYE_0.
  • (b-3) Set the algorithm's minimum face-detection width to 0.7W_0 and the minimum detection height to 0.7H_0; initialize the face-detection search range SEARCH_RECT to be centered on the center of the user's face detection box, with width 2W_0 and height 2H_0; update the eye gray value GRAY_EYE' to GRAY_EYE_0.
  • the algorithm performs closed-eye detection on each frame of the image input by the camera through feature point positioning and binarization processing.
  • the main purpose of step (c) is to perform face detection with the specified minimum size within the face detection area specified above, mark the facial feature points, and judge from the change in gray value of the approximately circular eye region whether a closed-eye action occurs.
  • Step (c) includes the following steps:
  • (c-1) Face detection is performed within the rectangle SEARCH_RECT; when no face is detected, a face-detection anomaly is reported; when a face is detected, the largest face is selected and recorded as the user's current face rectangle R_f(t).
  • (c-3) Perform facial feature point localization on the user's current face F_t, locating the positions of the left and right pupils, the nose tip, and the left and right mouth corners, as shown in FIGS. 3 and 4.
  • the main purpose of step (d) is to judge the user's degree of fatigue from the eye-closure state over the neighboring time period.
  • Step (d) consists of the following steps:
  • the algorithm tracks the elapsed time and, in 5-minute intervals, saves the images at seconds 0, 12, 24, 36 and 48 of each minute; at the end of each 5-minute period the weights are updated from the 25 frames of that period.
  • the main purpose of step (e) is to update the weights so as to suppress the effect of illumination changes on fatigue detection.
  • Step (e) consists of the following steps:
  • (e-1) Traverse the 25 update frames and compute the average width W_u, average height H_u and average gray value GRAY_FACE_u of the user's face.
  • the facial feature points of the user's face in the first 25 frames are detected, the position of the left and right eye pupils is determined, and the average pupil distance is obtained by averaging.
  • the updated circular eye-region radius R_u is determined and the eye gray values within the region are computed; the region is binarized with threshold GRAY_FACE_u, and the average gray value of the binarized eye regions over the 25 frames is the user's average eye gray value GRAY_EYE_u.
  • (e-3) the minimum face-detection width of the algorithm is updated to 0.7W_u and the minimum detection height to 0.7H_u; the face-detection search range is updated to be centered on the center of the user's face detection box, with width 2W_u and height 2H_u.
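The search-range initialization described in the bullets above amounts to a single cheap operation at run time: crop each frame to SEARCH_RECT before invoking any face detector, so only a region of width 2W_0 and height 2H_0 is scanned. A minimal NumPy sketch; the (x, y, w, h) rectangle layout is an assumed representation, not one fixed by the patent:

```python
import numpy as np

def crop_to_search_rect(frame, search_rect):
    """Restrict face detection to SEARCH_RECT = (x, y, w, h), clipped to
    the frame bounds; returns the sub-image plus its top-left offset so
    detected boxes can be mapped back to full-frame coordinates."""
    x, y, w, h = (int(round(v)) for v in search_rect)
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, frame.shape[1]), min(y + h, frame.shape[0])
    return frame[y0:y1, x0:x1], (x0, y0)

frame = np.zeros((480, 640), dtype=np.uint8)
roi, offset = crop_to_search_rect(frame, (500, 400, 200, 200))
print(roi.shape, offset)  # (80, 140) (500, 400)
```

Only the cropped region is handed to the detector; together with the 0.7W_0 x 0.7H_0 minimum face size, this restriction is where the claimed reduction in detection time comes from.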

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A fatigue detection method based on facial feature point positioning. The user points the camera at the face; after an initialization stage, during detection, a face detection and facial feature point localization algorithm is applied to every frame of the monitoring video input to obtain the facial feature points. Eye-related regions are determined from the user's facial feature points, the user's eye-closure state is judged from the degree of change of the gray values in those regions, and a fatigue warning is issued according to the eye-closure state of neighboring frames. The method is simple to use, computationally light and accurate in its judgments; it can be applied in real-time environments and effectively reminds the user to rest, safeguarding the user's life and property.

Description

Fatigue state detection method based on facial feature point positioning
Technical field
The present invention belongs to the technical field of fatigue detection and relates to a fatigue state detection method based on facial feature point localization.
Background art
Common existing methods for detecting a fatigue state include brain-wave detection, facial-state detection and physiological-indicator detection. These methods have the following defects: first, methods based on physiological indicators such as brain waves require complicated, non-portable equipment; second, the monitoring instruments are expensive and unsuitable for practical application. The present invention, by contrast, processes quickly, detects accurately, and can promptly alert the user when fatigue occurs, safeguarding life and property.
Summary of the invention
Addressing the shortcomings of current fatigued-driving detection methods, the present invention provides a fatigue detection algorithm based on facial feature point localization. The object of the invention is to detect the user's degree of fatigue automatically and in real time. Detection consists of two parts: initializing the fatigue detection algorithm, and detecting the fatigue level for each frame of the video input.
To achieve the above object, the present invention adopts the following technical solution:
The fatigue state detection method based on facial feature point localization of the present invention comprises the following steps:
(a) a data entry stage: following the prompts, the user performs the initial data entry for fatigue detection initialization; once the face is stable, the first M frames of the stabilized video are read in as the initialization frame set;
(b) an initialization stage: for all frames in the initialization frame set, feature statistics of the face and eyes are gathered and initialization is performed, determining the face position search range and the face size range;
(c) a closed-eye detection stage: closed-eye detection is performed on every frame input from the camera by means of feature point localization and binarization;
(d) a fatigue detection stage: a fatigue judgment is made from the eye-closure state of the current frame and of the preceding frames, and a reminder is issued;
(e) an update stage: the elapsed time is tracked; at intervals of N minutes, 5 images per minute are saved at set times, and at the end of each N-minute period the weights are updated from the M frames of that period.
As a preferred technical solution, step (a) comprises the following steps:
(a-1) following the prompt, placing the camera at the position closest to the user, so that the largest face detected by the camera is the user's face, and recording video while the user's expression is natural;
(a-2) sampling the input video once every 10 frames; when the change of the face-box center between adjacent samples is less than 20% of the face-box side length, the face is judged to be moving within a small range, and at that point 25 consecutive frames are recorded as the initialization frame set.
As a preferred technical solution, step (b) comprises the following steps:
(b-1) traversing the M frames and, based on the prior knowledge that the user is closest to the camera, taking the user's face detection rectangle to be the largest face rectangle detected in each frame; computing the average width W_0, average height H_0 and average gray value GRAY_FACE_0; detecting the facial feature points of the user's face in the first M frames, determining the positions of the left and right pupils, and obtaining the average pupil distance from them;
(b-2) from the average pupil distance and the size of the face rectangle, determining the radius R_0 of the circular eye approximation region and computing the eye gray values within it; binarizing the circular eye regions of the first M frames with threshold GRAY_FACE_0; the average gray value of the binarized eye regions over the first M frames is the user's average eye gray value GRAY_EYE_0;
(b-3) setting the minimum detection width of the face detection box to 0.7W_0 and the minimum detection height to 0.7H_0; initializing the face-detection search range SEARCH_RECT to be centered on the center of the user's face detection box, with width 2W_0 and height 2H_0; updating the eye gray value GRAY_EYE' to GRAY_EYE_0.
As a preferred technical solution, step (c) comprises the following steps:
(c-1) performing face detection within the rectangle SEARCH_RECT; when no face is detected, reporting a face-detection anomaly; when a face is detected, selecting the largest face and recording it as the user's current face rectangle R_f(t);
(c-2) computing the average gray value of the user's current face rectangle F_t, denoted G_f(t);
(c-3) performing facial feature point localization on the user's current face F_t, locating the positions of the left and right pupils, the nose tip, and the left and right mouth corners;
(c-4) collecting the gray values inside the two circular regions of radius R centered on the left and right pupils and binarizing them with threshold G_f(t); the average gray value of the binarized eye region is denoted G_e(t);
(c-5) when
[formula: see image PCTCN2018115772-appb-000001]
holds, the frame is judged as closed-eye: the closed-eye function C(t)=1, otherwise C(t)=0.
As a preferred technical solution, step (d) comprises the following steps:
(d-1) letting t frames be the time elapsed since algorithm initialization, and Tired(t) the user's fatigue degree at the current time;
(d-2) determining the user's fatigue degree at the current time by one-dimensional weighting:
[formula: see image PCTCN2018115772-appb-000002]
As a preferred technical solution, step (e) comprises the following steps:
(e-1) traversing the M update frames and computing the average width W_u, average height H_u and average gray value GRAY_FACE_u of the user's face; detecting the facial feature points of the user's face in the M frames, determining the positions of the left and right pupils, and averaging to obtain the mean pupil distance;
(e-2) from the average pupil distance and the size of the face rectangle, determining the updated circular eye-region radius R_u and computing the eye gray values within it; binarizing the circular eye regions of the M frames with threshold GRAY_FACE_u; the average gray value of the binarized eye regions over the M frames is the user's average eye gray value GRAY_EYE_u;
(e-3) updating the minimum detection width of the face detection box to 0.7W_u and the minimum detection height to 0.7H_u; updating the face-detection search range to be centered on the center of the user's face detection box, with width 2W_u and height 2H_u;
(e-4) updating the average eye gray value GRAY_EYE' to GRAY_EYE_u.
As a preferred technical solution, M is 25 and N is 5.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The face and eye gray-value initialization of the invention, together with its update strategy, ensures that the invention adapts to changes in illumination with stronger robustness and accuracy.
2. The initialization of the face search range ensures that the invention avoids unnecessary computation, greatly shortening the computation time and improving run-time efficiency.
3. The fatigue-value weighting strategy of the invention accumulates the user's eye-closure time over a period, avoiding interference caused by the user blinking rapidly over a short time, and is therefore more robust.
4. Compared with judging blinks by the aspect ratio of an ellipse fitted to the eye, the invention achieves higher accuracy with lower computational complexity.
Brief description of the drawings
FIG. 1 is the overall flow chart of the face detection method based on facial feature points according to the present invention.
FIG. 2 is an example of the approximate eye region obtained by facial feature point localization according to the present invention.
FIG. 3 shows a case in which the current frame is detected as open eyes.
FIG. 4 shows a case in which the current frame is detected as closed eyes.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the drawings, but the implementation of the invention is not limited to them.
Embodiment
As shown in FIG. 1, the main flow of the face detection method based on deep learning comprises the following steps:
(a) Data entry stage: following the prompts, the user performs the initial data entry for fatigue detection initialization. Once the face is stable, the algorithm reads in the first 25 frames of the stabilized video as the initialization frame set.
Step (a) requires the user to perform the initial data entry in a quiet, stable setting. Note that in a concrete implementation the face detection and facial feature point localization algorithms are not restricted, but face detection should meet real-time requirements, with a detection speed of 25 frames/s or more, and facial feature point localization should be accurate and likewise real-time.
Step (a) mainly comprises the following steps:
(a-1) Following the algorithm's prompt, place the camera at the position closest to the user, so that the largest face detected by the camera is the user's face, and record video while the user's expression is natural.
(a-2) Sample the input video once every 10 frames; when the change of the face-box center between adjacent samples is less than 20% of the face-box side length, the face is judged to be moving within a small range and initialization can proceed. At that point, 25 consecutive frames are recorded.
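The stability test of step (a-2) can be written down directly. In this sketch the face-box centers are assumed to come from whatever face detector the implementation uses, sampled once every 10 frames:

```python
def is_stable(centers, side_len, tol=0.20):
    """Step (a-2): the face is considered stable when the face-box center
    moves by less than 20% of the box side length between adjacent
    samples (one sample every 10 frames)."""
    return all(
        abs(x2 - x1) < tol * side_len and abs(y2 - y1) < tol * side_len
        for (x1, y1), (x2, y2) in zip(centers, centers[1:])
    )

# Centers sampled from a face box with side length 100 px:
print(is_stable([(50, 60), (55, 62), (52, 61)], side_len=100))  # True
print(is_stable([(50, 60), (80, 60)], side_len=100))            # False
```

Once this test passes, 25 consecutive frames are recorded as the initialization frame set.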
(b) Algorithm initialization stage: for all frames in the initialization frame set, the algorithm gathers feature statistics of the face and eyes and performs its initialization, determining the face position search range and the face size range.
The main purpose of step (b) is to initialize the algorithm, in two respects: determining the reference gray values, and determining the face search range. The reference gray values provide the criterion for the closed-eye decision in subsequent steps. Restricting the face search range mainly reduces the time consumed by face detection: in a practical application scenario such as driving, both the size and the position of the face change relatively little, and these two properties can be exploited to reduce detection time and speed up face detection and facial feature point localization.
Step (b) comprises the following steps:
(b-1) Traverse the 25 frames and, based on the prior knowledge that the user is closest to the camera, take the user's face detection rectangle to be the largest face rectangle detected in each frame. Compute the average width W_0, average height H_0 and average gray value GRAY_FACE_0. Detect the facial feature points of the user's face in the first 25 frames and record the positions of the left and right pupils as
[see image PCTCN2018115772-appb-000003]
from which the average pupil distance is obtained.
(b-2) From the average pupil distance and the size of the face rectangle, determine the radius R_0 of the circular eye approximation region, as shown in FIG. 2. In our experiments a good value was found to be
[formula: see image PCTCN2018115772-appb-000004]
and the eye gray values are computed within the approximate regions (circles of radius R_0 centered at the pupil positions
[see images PCTCN2018115772-appb-000005 and PCTCN2018115772-appb-000006]
). Binarize the circular eye regions of the first 25 frames with threshold GRAY_FACE_0; the average gray value of the binarized eye regions over the first 25 frames is the user's average eye gray value GRAY_EYE_0.
(b-3) Set the algorithm's minimum face-detection width to 0.7W_0 and the minimum detection height to 0.7H_0. Initialize the face-detection search range SEARCH_RECT to be centered on the center of the user's face detection box, with width 2W_0 and height 2H_0. Update the eye gray value GRAY_EYE' to GRAY_EYE_0.
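The initialization statistics of steps (b-1) to (b-3) can be sketched as follows. The face boxes and pupil positions are assumed to come from an external face detector and landmark localizer (which the patent deliberately leaves open), and since the ratio that derives R_0 from the pupil distance is published only as an image in the source, the 1/4 used here is an assumed placeholder:

```python
import numpy as np

def initialize(frames, face_rects, pupil_pairs):
    """Steps (b-1)-(b-3): build the reference statistics over the M
    initialization frames.

    frames[i]      -- 2-D grayscale image
    face_rects[i]  -- (x, y, w, h) of the largest detected face
    pupil_pairs[i] -- ((lx, ly), (rx, ry)) pupil positions
    All three are assumed to come from an external detector/localizer.
    """
    # (b-1) average face width W_0, height H_0 and gray value GRAY_FACE_0
    w0 = float(np.mean([r[2] for r in face_rects]))
    h0 = float(np.mean([r[3] for r in face_rects]))
    gray_face0 = float(np.mean([f[y:y + h, x:x + w].mean()
                                for f, (x, y, w, h) in zip(frames, face_rects)]))
    # average pupil distance; the ratio giving R_0 is published only as
    # an image in the source, so dividing by 4 is an assumed placeholder
    dist = np.mean([np.hypot(rx - lx, ry - ly)
                    for (lx, ly), (rx, ry) in pupil_pairs])
    r0 = float(dist / 4.0)
    # (b-2) binarize the two circular eye regions at threshold GRAY_FACE_0
    vals = []
    for f, pupils in zip(frames, pupil_pairs):
        ys, xs = np.ogrid[:f.shape[0], :f.shape[1]]
        for cx, cy in pupils:
            mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= r0 ** 2
            vals.append(255.0 * (f[mask] > gray_face0).mean())
    gray_eye0 = float(np.mean(vals))  # GRAY_EYE_0
    # (b-3) minimum face size and search range SEARCH_RECT
    cx0 = np.mean([x + w / 2 for x, y, w, h in face_rects])
    cy0 = np.mean([y + h / 2 for x, y, w, h in face_rects])
    return {"W0": w0, "H0": h0, "GRAY_FACE": gray_face0, "R0": r0,
            "GRAY_EYE": gray_eye0, "MIN_SIZE": (0.7 * w0, 0.7 * h0),
            "SEARCH_RECT": (cx0 - w0, cy0 - h0, 2 * w0, 2 * h0)}
```

The returned dictionary holds everything the later stages need: the binarization threshold GRAY_FACE_0, the reference eye gray value GRAY_EYE', the eye-region radius R_0, the minimum face size 0.7W_0 x 0.7H_0, and SEARCH_RECT of size 2W_0 x 2H_0 centered on the average face center.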
(c) Closed-eye detection stage: the algorithm performs closed-eye detection on every frame input from the camera by means of feature point localization and binarization.
The main purpose of step (c) is to run face detection with the specified minimum size inside the face detection area specified above, mark the facial feature points, and judge from the change in gray value of the approximately circular eye region whether a closed-eye action occurs.
Step (c) comprises the following steps:
(c-1) Perform face detection within the rectangle SEARCH_RECT; when no face is detected, report a face-detection anomaly; when a face is detected, select the largest face and record it as the user's current face rectangle R_f(t).
(c-2) Compute the average gray value of the user's current face rectangle F_t, denoted G_f(t).
(c-3) Perform facial feature point localization on the user's current face F_t, locating the positions of the left and right pupils, the nose tip, and the left and right mouth corners, as shown in FIGS. 3 and 4.
(c-4) Collect the gray values inside the two circular regions of radius R centered on the left and right pupils, and binarize them with threshold G_f(t); the average gray value of the binarized eye region is denoted G_e(t).
(c-5) When
[formula: see image PCTCN2018115772-appb-000007]
holds, the frame is judged as closed-eye: the closed-eye function C(t)=1, otherwise C(t)=0.
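Steps (c-1) to (c-5) reduce to the per-frame routine below. Because the decision formula of step (c-5) is published only as an image, the comparison used here (G_e(t) deviating from the reference GRAY_EYE' by more than half the gray range) is an assumed stand-in, not the patented criterion:

```python
import numpy as np

def closed_eye(frame, face_rect, pupils, model, tol=0.5):
    """Steps (c-1)-(c-5): compute the closed-eye flag C(t) for one frame.

    model["R0"] is the eye-region radius and model["GRAY_EYE"] the
    reference eye gray value GRAY_EYE' from initialization. The exact
    decision formula of (c-5) is available only as an image in the
    source, so the tol * 255 deviation test below is an assumption.
    """
    x, y, w, h = face_rect
    g_f = frame[y:y + h, x:x + w].mean()       # (c-2): face mean gray G_f(t)
    ys, xs = np.ogrid[:frame.shape[0], :frame.shape[1]]
    vals = []
    for cx, cy in pupils:                       # (c-4): two circles of radius R
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= model["R0"] ** 2
        vals.append(255.0 * (frame[mask] > g_f).mean())
    g_e = float(np.mean(vals))                  # binarized eye mean G_e(t)
    return 1 if abs(g_e - model["GRAY_EYE"]) > tol * 255 else 0  # (c-5): C(t)
```

`closed_eye` returns C(t); a face-detection failure inside SEARCH_RECT (step c-1) would be reported as an anomaly before this routine is reached.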
(d) Neighboring-frame fatigue detection stage: the algorithm makes a fatigue judgment from the eye-closure state of the current frame and of the preceding frames, and issues a reminder.
The main purpose of step (d) is to judge the user's degree of fatigue from the open/closed-eye state over the neighboring time period.
Step (d) comprises the following steps:
(d-1) Let t frames be the time elapsed since algorithm initialization, and let Tired(t) be the user's fatigue degree at the current time.
(d-2) Determine the user's fatigue degree at the current time by one-dimensional weighting:
[formula: see image PCTCN2018115772-appb-000008]
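The weighting formula of step (d-2) is likewise published only as an image. A common substitute with the same intent, counting eye closure over a recent window rather than reacting to single blinks, is the PERCLOS-style fraction:

```python
def fatigue_level(closed_flags, window=125):
    """Step (d-2), assumed substitute for the patent's weighted Tired(t):
    the fraction of closed-eye frames C(t) over the last `window` frames.
    125 frames is 5 s at 25 fps; the window size is an assumption."""
    recent = closed_flags[-window:]
    return sum(recent) / len(recent) if recent else 0.0

flags = [0] * 100 + [1] * 25      # eyes closed for the last second
print(fatigue_level(flags))       # 0.2
```

Averaging over a window is what gives the scheme its robustness to isolated quick blinks: a single short blink barely moves the statistic, while sustained closure drives it toward 1.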
(e) Weight update stage: the algorithm tracks the elapsed time and, in 5-minute intervals, saves the images at seconds 0, 12, 24, 36 and 48 of each minute; at the end of each 5-minute period the weights are updated from the 25 frames of that period.
The main purpose of step (e) is to update the weights so as to suppress the effect of illumination changes on fatigue detection.
Step (e) comprises the following steps:
(e-1) Traverse the 25 update frames and compute the average width W_u, average height H_u and average gray value GRAY_FACE_u of the user's face. Detect the facial feature points of the user's face in the 25 frames, determine the positions of the left and right pupils, and average to obtain the mean pupil distance.
(e-2) From the average pupil distance and the size of the face rectangle, determine the updated circular eye-region radius R_u, and compute the eye gray values within it. Binarize the circular eye regions of the 25 frames with threshold GRAY_FACE_u; the average gray value of the binarized eye regions over the 25 frames is the user's average eye gray value GRAY_EYE_u.
(e-3) Update the algorithm's minimum face-detection width to 0.7W_u and the minimum detection height to 0.7H_u. Update the face-detection search range to be centered on the center of the user's face detection box, with width 2W_u and height 2H_u.
(e-4) Update the average eye gray value GRAY_EYE' to GRAY_EYE_u.
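The update schedule of step (e) is easy to make concrete: five frames per minute at seconds 0, 12, 24, 36 and 48 give exactly 25 frames per 5-minute window, the same M = 25 used at initialization, so the update can simply rerun the initialization statistics on them. A small sketch:

```python
def update_frame_indices(fps=25, minutes=5):
    """Step (e): indices of the frames kept for the periodic update.
    Five images per minute, at seconds 0, 12, 24, 36 and 48, give
    exactly 25 frames per 5-minute window, matching the M = 25 frames
    used at initialization."""
    seconds = [m * 60 + s for m in range(minutes) for s in (0, 12, 24, 36, 48)]
    return [s * fps for s in seconds]

idx = update_frame_indices()
print(len(idx), idx[:3])  # 25 [0, 300, 600]
```

At the end of each window, the update (e-1) to (e-4) reruns the same statistics as initialization over these 25 frames and overwrites W, H, GRAY_FACE, GRAY_EYE' and SEARCH_RECT, which is how the method tracks gradual illumination changes.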
The above embodiment is a preferred implementation of the present invention, but the implementation of the invention is not limited to it; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the invention shall be an equivalent replacement and fall within the protection scope of the invention.

Claims (7)

  1. A fatigue state detection method based on facial feature point positioning, characterized by comprising the following steps:
    (a) a data entry stage: following the prompts, the user performs the initial data entry for fatigue detection initialization; once the face is stable, the first M frames of the stabilized video are read in as the initialization frame set;
    (b) an initialization stage: for all frames in the initialization frame set, feature statistics of the face and eyes are gathered and initialization is performed, determining the face position search range and the face size range;
    (c) a closed-eye detection stage: closed-eye detection is performed on every frame input from the camera by means of feature point localization and binarization;
    (d) a fatigue detection stage: a fatigue judgment is made from the eye-closure state of the current frame and of the preceding frames, and a reminder is issued;
    (e) an update stage: the elapsed time is tracked; at intervals of N minutes, 5 images per minute are saved at set times, and at the end of each N-minute period the weights are updated from the M frames of that period.
  2. The fatigue state detection method based on facial feature point positioning according to claim 1, characterized in that step (a) comprises the following steps:
    (a-1) following the prompt, placing the camera at the position closest to the user, so that the largest face detected by the camera is the user's face, and recording video while the user's expression is natural;
    (a-2) sampling the input video once every 10 frames; when the change of the face-box center between adjacent samples is less than 20% of the face-box side length, judging that the face is moving within a small range, and at that point recording 25 consecutive frames as the initialization frame set.
  3. The fatigue state detection method based on facial feature point positioning according to claim 1, characterized in that step (b) comprises the following steps:
    (b-1) traversing the M frames and, based on the prior knowledge that the user is closest to the camera, taking the user's face detection rectangle to be the largest face rectangle detected in each frame; computing the average width W_0, average height H_0 and average gray value GRAY_FACE_0; detecting the facial feature points of the user's face in the first M frames, determining the positions of the left and right pupils, and obtaining the average pupil distance from them;
    (b-2) from the average pupil distance and the size of the face rectangle, determining the radius R_0 of the circular eye approximation region and computing the eye gray values within it; binarizing the circular eye regions of the first M frames with threshold GRAY_FACE_0; the average gray value of the binarized eye regions over the first M frames is the user's average eye gray value GRAY_EYE_0;
    (b-3) setting the minimum detection width of the face detection box to 0.7W_0 and the minimum detection height to 0.7H_0; initializing the face-detection search range SEARCH_RECT to be centered on the center of the user's face detection box, with width 2W_0 and height 2H_0; updating the eye gray value GRAY_EYE' to GRAY_EYE_0.
  4. The fatigue state detection method based on facial feature point positioning according to claim 1, characterized in that step (c) comprises the following steps:
    (c-1) performing face detection within the rectangle SEARCH_RECT; when no face is detected, reporting a face-detection anomaly; when a face is detected, selecting the largest face and recording it as the user's current face rectangle R_f(t);
    (c-2) computing the average gray value of the user's current face rectangle F_t, denoted G_f(t);
    (c-3) performing facial feature point localization on the user's current face F_t, locating the positions of the left and right pupils, the nose tip, and the left and right mouth corners;
    (c-4) collecting the gray values inside the two circular regions of radius R centered on the left and right pupils and binarizing them with threshold G_f(t); the average gray value of the binarized eye region is denoted G_e(t);
    (c-5) when
    [formula: see image PCTCN2018115772-appb-100001]
    holds, the frame is judged as closed-eye: the closed-eye function C(t)=1, otherwise C(t)=0.
  5. The fatigue state detection method based on facial feature point positioning according to claim 1, characterized in that step (d) comprises the following steps:
    (d-1) letting t frames be the time elapsed since algorithm initialization, and Tired(t) the user's fatigue degree at the current time;
    (d-2) determining the user's fatigue degree at the current time by one-dimensional weighting:
    [formula: see image PCTCN2018115772-appb-100002]
  6. The fatigue state detection method based on facial feature point positioning according to claim 1, characterized in that step (e) comprises the following steps:
    (e-1) traversing the M update frames and computing the average width W_u, average height H_u and average gray value GRAY_FACE_u of the user's face; detecting the facial feature points of the user's face in the M frames, determining the positions of the left and right pupils, and averaging to obtain the mean pupil distance;
    (e-2) from the average pupil distance and the size of the face rectangle, determining the updated circular eye-region radius R_u and computing the eye gray values within it; binarizing the circular eye regions of the M frames with threshold GRAY_FACE_u; the average gray value of the binarized eye regions over the M frames is the user's average eye gray value GRAY_EYE_u;
    (e-3) updating the minimum detection width of the face detection box to 0.7W_u and the minimum detection height to 0.7H_u; updating the face-detection search range to be centered on the center of the user's face detection box, with width 2W_u and height 2H_u;
    (e-4) updating the average eye gray value GRAY_EYE' to GRAY_EYE_u.
  7. The fatigue state detection method based on facial feature point positioning according to any one of claims 1-6, characterized in that M is 25 and N is 5.
PCT/CN2018/115772 2018-03-09 2018-11-16 Fatigue state detection method based on facial feature point positioning WO2019169896A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18908483.3A EP3742328A4 (en) 2018-03-09 2018-11-16 METHOD OF FATIGUE DETECTION BASED ON THE POSITIONING OF A FACIAL FEATURE POINT
PCT/CN2018/115772 WO2019169896A1 (zh) 2018-03-09 2018-11-16 基于脸部特征点定位的疲劳状态检测方法

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201810194220.0A CN108742656B (zh) 2018-03-09 2018-03-09 Fatigue state detection method based on facial feature point positioning
CN201810194220.0 2018-03-09
PCT/CN2018/115772 WO2019169896A1 (zh) 2018-03-09 2018-11-16 Fatigue state detection method based on facial feature point positioning

Publications (1)

Publication Number Publication Date
WO2019169896A1 true WO2019169896A1 (zh) 2019-09-12

Family

ID=63980181

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115772 WO2019169896A1 (zh) 2018-03-09 2018-11-16 Fatigue state detection method based on facial feature point positioning

Country Status (3)

Country Link
EP (1) EP3742328A4 (zh)
CN (1) CN108742656B (zh)
WO (1) WO2019169896A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191231A (zh) * 2021-04-21 2021-07-30 栈鹿(上海)教育科技有限公司 Safe driving method in a fatigue state

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108742656B (zh) 2018-03-09 2021-06-08 华南理工大学 Fatigue state detection method based on facial feature point positioning
CN111012353A (zh) * 2019-12-06 2020-04-17 西南交通大学 Height detection method based on facial key point recognition
CN112633084B (zh) * 2020-12-07 2024-06-11 深圳云天励飞技术股份有限公司 Face frame determination method and apparatus, terminal device, and storage medium
CN112541433B (zh) * 2020-12-11 2024-04-19 中国电子技术标准化研究院 Two-stage precise pupil localization method based on an attention mechanism
CN113397544B (zh) * 2021-06-08 2022-06-07 山东第一医科大学附属肿瘤医院(山东省肿瘤防治研究院、山东省肿瘤医院) Patient emotion monitoring method and system
CN116467739A (zh) * 2023-03-30 2023-07-21 江苏途途网络技术有限公司 Computer big data storage system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002009025A1 (en) * 2000-07-24 2002-01-31 Seeing Machines Pty Ltd Facial image processing system
CN104809482A (zh) * 2015-03-31 2015-07-29 南京大学 Fatigue detection method based on individual learning
CN106250801A (zh) * 2015-11-20 2016-12-21 北汽银翔汽车有限公司 Fatigue detection method based on face detection and eye-state recognition
CN106846734A (zh) * 2017-04-12 2017-06-13 南京理工大学 Fatigued-driving detection device and method
CN107563346A (zh) * 2017-09-20 2018-01-09 南京栎树交通互联科技有限公司 Method for discriminating driver fatigue based on eye image processing
CN108742656A (zh) * 2018-03-09 2018-11-06 华南理工大学 Fatigue state detection method based on facial feature point positioning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6927694B1 (en) * 2001-08-20 2005-08-09 Research Foundation Of The University Of Central Florida Algorithm for monitoring head/eye motion for driver alertness with one camera
CN101692980B (zh) * 2009-10-30 2011-06-08 深圳市汉华安道科技有限责任公司 Fatigued-driving detection method and device
CN104504856A (zh) * 2014-12-30 2015-04-08 天津大学 Fatigued-driving detection method based on Kinect and face recognition
WO2018113680A1 (en) * 2016-12-23 2018-06-28 Hong Kong Baptist University Method and apparatus for eye gaze tracking and detection of fatigue


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3742328A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191231A (zh) * 2021-04-21 2021-07-30 栈鹿(上海)教育科技有限公司 Safe driving method in a fatigue state

Also Published As

Publication number Publication date
CN108742656A (zh) 2018-11-06
EP3742328A1 (en) 2020-11-25
CN108742656B (zh) 2021-06-08
EP3742328A4 (en) 2021-12-08

Similar Documents

Publication Publication Date Title
WO2019169896A1 (zh) Fatigue state detection method based on facial feature point positioning
CN108615051B (zh) Diabetic retina image classification method and system based on deep learning
CN109308445B (zh) Fatigue detection method for fixed-post personnel based on information fusion
WO2020125319A1 (zh) Glaucoma image recognition method and device, and screening system
WO2017152649A1 (zh) Method and system for automatically prompting the distance between human eyes and a screen
Dey et al. FCM based blood vessel segmentation method for retinal images
CN107506770A (zh) Method for generating standard fundus-photograph images of diabetic retinopathy
Xu et al. Automated anterior chamber angle localization and glaucoma type classification in OCT images
BR112019006165B1 (pt) Process and device for determining a representation of a spectacle lens edge
CN110279391B (zh) Vision testing algorithm for a portable infrared camera
WO2020125318A1 (zh) Glaucoma image recognition method and device, and diagnosis system
CN107895157B (zh) Method for precisely locating the iris center in low-resolution images
CN105224285A (zh) Device and method for detecting the open/closed state of eyes
WO2021190656A1 (zh) Method and apparatus for locating the macular center in fundus images, server, and storage medium
CN110210357B (zh) Ptosis image measurement method based on facial recognition of static photographs
CN110348385B (zh) Living-body face recognition method and device
Luo et al. The driver fatigue monitoring system based on face recognition technology
CN112989939A (zh) Vision-based strabismus detection system
CN115456967A (zh) Method and device for detecting arteriovenous fistula thrombosis
CN108921010A (zh) Pupil detection method and detection device
JP5139470B2 (ja) Drowsiness level estimation device and drowsiness level estimation method
Datta et al. A new contrast enhancement method of retinal images in diabetic screening system
CN116749988A (zh) Driver fatigue early-warning method and apparatus, electronic device, and storage medium
CN104408409A (zh) Pupil localization method suitable for astigmatic-lens environments
Zhao et al. Fast localization algorithm of eye centers based on improved hough transform

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18908483

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018908483

Country of ref document: EP

Effective date: 20200819

NENP Non-entry into the national phase

Ref country code: DE