CN111062292B - Fatigue driving detection device and method - Google Patents
- Publication number
- CN111062292B (application CN201911258187.4A)
- Authority
- CN
- China
- Prior art keywords
- fatigue
- face
- detection
- cache
- angle
- Prior art date
- Legal status
- Expired - Fee Related
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
Abstract
The invention discloses a fatigue driving detection device and method. A main control module is electrically connected to an image acquisition module, a storage module, and an alarm module respectively. The alarm module comprises an LED lamp and an alarm; the image acquisition module comprises a camera and four infrared LED light sources. The main control module outputs control signals to the camera, which captures images of the driver and feeds them to the main control module. The storage module stores hash fingerprint information, the corresponding face bounding boxes, and the standard fatigue detection parameters. The main control module processes the received data to obtain fatigue feature parameters in real time, reads the standard fatigue detection parameters from the storage module, determines the fatigue level by comparing the two, and outputs control signals to the alarm module that make the LED flash and the alarm buzz. The invention enables all-weather detection, gives early warning at the onset of fatigue through distraction detection, and judges the fatigue level from multiple features, achieving good results in both real-time performance and accuracy.
Description
Technical Field
The invention belongs to the technical field of safe driving, and specifically relates to a fatigue driving detection device and method.
Background
As quality of life improves, cars have become increasingly widespread. Public awareness of road safety, however, has not kept pace; in particular, there is a general lack of awareness of hidden accident causes such as fatigue driving. The consequences of fatigue driving can be extremely serious, so detecting it by technical means and issuing timely warnings can effectively reduce the harm caused by traffic accidents.
Most current research takes a visual-feature approach: the driver's face is captured and fatigue is detected by analyzing facial features such as the eyes and mouth. This process is far from simple. On the one hand, facial features are easily affected by individual differences and lighting conditions, and head pose angles during driving substantially degrade detection accuracy. On the other hand, some methods fail to balance accuracy and real-time performance: some improve accuracy but ignore speed, while others detect quickly but with poor precision. The technology therefore still needs further refinement and study for practical use.
Summary of the Invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide a fatigue driving detection device and method that balances real-time performance with accuracy and is not easily affected by illumination changes and head pose angles.
To solve the above technical problem, the fatigue driving detection device of the present invention comprises an image acquisition module, a main control module, a storage module, and an alarm module; the main control module is electrically connected to the image acquisition module, the storage module, and the alarm module respectively. The alarm module comprises an LED lamp and an alarm; the image acquisition module comprises a camera and four infrared LED light sources. The main control module outputs control signals to the camera, which captures images of the driver and feeds them to the main control module. The storage module stores hash fingerprint information, the face bounding box corresponding to each hash fingerprint, and the standard fatigue detection parameters. The main control module processes the received data to obtain fatigue feature parameters in real time, reads the standard fatigue detection parameters from the storage module, determines the fatigue level by comparing the two, and outputs the corresponding control signal to the alarm module, which then emits the configured alarm.
A detection method using the above fatigue driving detection device comprises the following steps:
S1: Capture driving images of the driver with the image acquisition device, including images taken while the driver is awake; compute the initial eye width-to-height ratio from these and use it as the standard fatigue detection parameter.
S2: Locate the face using the improved fast face detection algorithm.
S3: When a face is detected, apply the facial landmark localization algorithm to obtain the positions of 68 facial landmarks.
S4: Extract eye information from the landmark positions, compute the eye width-to-height ratio, and from it compute the PERCLOS value T (the proportion of closed-eye frames in the total frames per unit time) and the blink frequency. Compare T with the given thresholds: when T ≥ T_Z (the severe-fatigue threshold), severe fatigue is declared and the alarm module emits the configured alarm; when T_Q < T < T_Z (T_Q being the mild-fatigue threshold), go to S5; when T ≤ T_Q, go to S6.
S5: Check whether the blink frequency is abnormally high. If it exceeds the given blink-frequency threshold, the driver is judged to be driving normally and step S6 is executed; otherwise, mild fatigue driving is declared and the alarm module emits the configured alarm.
S6: Compute the head pose angles. When the vertical head pose angle exceeds the given angle threshold, the head is judged to be lowered; when it stays above that threshold for longer than the given time threshold, severe fatigue is declared and the alarm module emits the configured alarm. Otherwise, go to S7.
S7: Judge from the eye width-to-height ratio whether the eyes are closed. If closed, distraction detection is impossible and normal driving is assumed; if open, go to S8.
S8: Perform distraction detection. Compute the horizontal gaze angle θl and the vertical gaze angle θv, divide the area in front of the driver into a normal gaze region and gaze-deviation regions, and locate the driver's gaze point from θl and θv. When the gaze point stays in a gaze-deviation region longer than the given time threshold, mild fatigue is declared and the alarm module emits the configured alarm; otherwise, normal driving is assumed.
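The S4 to S8 branching can be condensed into a single decision function. The sketch below is illustrative, not part of the patent: the function name and the pre-computed inputs are assumptions, and the threshold defaults are taken from the embodiment described later (T_Q = 0.3, T_Z = 0.5, 5 blinks per 5 seconds, 20 degrees for 3 seconds, 3 seconds of gaze deviation).

```python
def classify_fatigue(perclos, blink_freq, head_pitch_deg, pitch_over_time_s,
                     eyes_closed, gaze_off_region_s,
                     T_Q=0.3, T_Z=0.5, blink_limit=1.0,
                     pitch_limit=20.0, pitch_time_limit=3.0, gaze_time_limit=3.0):
    """Return 'severe', 'mild', or 'normal' following steps S4-S8."""
    # S4: compare PERCLOS against the severe and mild thresholds
    if perclos >= T_Z:
        return "severe"
    if T_Q < perclos < T_Z:
        # S5: a very fast blink rate indicates a startle response, not fatigue,
        # so only a normal-or-slow blink rate here means mild fatigue
        if blink_freq <= blink_limit:
            return "mild"
    # S6: a head kept down past the angle threshold for too long is severe fatigue
    if head_pitch_deg > pitch_limit and pitch_over_time_s > pitch_time_limit:
        return "severe"
    # S7: distraction detection needs open eyes
    if eyes_closed:
        return "normal"
    # S8: prolonged gaze outside the normal region counts as mild fatigue
    if gaze_off_region_s > gaze_time_limit:
        return "mild"
    return "normal"
```

The ordering matters: eye-closure statistics are checked first, head pose only when PERCLOS clears the driver, and gaze deviation last, mirroring the flowchart of the method.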
The present invention also includes:
1. Preprocessing the images sampled from the subsequent video stream in S1, comprising adaptive median filtering and Laplacian-based image enhancement.
2. Locating the face in S2 with the improved fast face detection algorithm, specifically:

First compute the mean-hash fingerprint of the driver image to be detected. If the cache module holds a fingerprint differing from it by no more than 2 bits, directly output the face bounding box stored with that fingerprint in the cache and increment that fingerprint's call count by 1.

If the cache module holds a fingerprint differing from the mean-hash fingerprint by more than 2 but fewer than 5 bits, run the AdaBoost face detection algorithm to obtain the bounding box and check whether the cache is full: if not, store the mean-hash fingerprint in the cache database; if full, delete the cached fingerprint with the fewest calls and then store the current mean-hash fingerprint.

If every fingerprint in the cache module differs from the mean-hash fingerprint by 5 bits or more, run the AdaBoost face detection algorithm to obtain the bounding box.
3. Obtaining the 68 landmark coordinates in S3 with the facial landmark localization algorithm, specifically:
First estimate the face orientation with the HOG-SVM algorithm: several face orientations are selected, HOG features are extracted from the image to be detected to obtain HOG feature vectors, and these are used as SVM input to train a face-orientation classifier.
A facial-landmark initialization model is then selected according to the estimated face orientation.
For each landmark position, LBF features are extracted with the trained random forests, and the landmark positions are updated with the regression result of the trained linear regressor until the given maximum number of iterations is reached; the landmark positions are then output.
4. Extracting LBF features with the trained random forests uses, as the classification feature, the normalized difference of the gray levels of two pixels near the landmark.
5. Computing the eye width-to-height ratio in S4, specifically:
The eye width-to-height ratio WHR satisfies

WHR = (W / H) / WHR_init

where W is the eye width, H is the eye height, and WHR_init is the initial eye width-to-height ratio measured while the driver is awake.
The PERCLOS value T in S4, the proportion of closed-eye frames in the total frames per unit time, is computed as follows:
When WHR exceeds the given threshold, the eye is judged closed. With Z the total number of frames per unit time and z the number of closed-eye frames, T satisfies

T = z / Z
The blink frequency in S4 is computed as follows:
If the previous frame has WHR ≤ 3 and the current frame has WHR > 3, one blink is counted; the number of blinks within the given unit time is the blink frequency.
6. Computing the head pose angles in step S6, specifically:
After the facial landmark positions are obtained, the rotation and translation matrices are solved to map the N two-dimensional landmark positions in the image to the corresponding N landmark positions on a standard three-dimensional face model; the head pose angles α, β, and γ are then computed from the rotation matrix R, where β is the vertical (pitch) angle of the head pose, α is the horizontal (yaw) angle, and γ is the front-back (roll) angle.
7. The distraction detection in S8, computing the horizontal gaze angle θl and the vertical gaze angle θv, specifically:
For a point inside the pupil, consider the displacement vector from that point to a pupil boundary point together with the gradient vector at that boundary point; the pupil point that maximizes the inner product of the two is the pupil center. After the pupil center is located, the Purkinje spot positions are searched in its neighborhood. The Purkinje spots are formed by the four infrared LED light sources distributed around the area in front of the driver. The relative position of the pupil center with respect to the center of the rectangle enclosed by the four Purkinje spots yields the horizontal offset angle ε and the vertical offset angle η; the gaze direction is then corrected with the head pose angles α and β, giving the final horizontal gaze angle θl = ε + α and the vertical gaze angle θv = η + β.
Beneficial effects of the present invention:
1. The infrared hardware solves the poor-accuracy problem under weak light such as at night, enabling all-weather fatigue detection. With the improved face detection algorithm, the face position can be determined quickly and accurately.
2. The improved LBF initialization strategy reduces the impact of face orientation on landmark localization and increases robustness to glasses, illumination changes, and similar problems.
3. Driver distraction detection is incorporated into the fatigue detection algorithm so that the system can warn the driver before deep fatigue sets in, further safeguarding the driver.
4. Multiple features are combined to judge the fatigue level; the algorithm performs well in both real-time response and accuracy, successfully balancing the two.
Description of Drawings
FIG. 1 is a schematic flowchart of the driver fatigue detection method according to an embodiment of the present invention.
FIG. 2 is a flowchart of the driver face detection algorithm according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of the facial-landmark initialization models for different face orientations according to an embodiment of the present invention.
FIG. 4 is a flowchart of the facial landmark localization algorithm according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of the driver gaze region division according to an embodiment of the present invention.
FIG. 6 is a structural diagram of the hardware device for fatigue driving detection according to an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flowchart of the driver fatigue detection method according to an embodiment of the present invention, which comprises the following steps:
S1. Capture driver images with the camera and preprocess them.
Specifically, the camera is an infrared camera, and the four infrared LED light sources are arranged around the car's front windshield; together they capture images of the driver. When the system starts, the driver is awake, and images captured at this point are used to obtain the initial eye width-to-height ratio. The video stream is then sampled to obtain the driver input images.
In a preferred embodiment, images captured from the video stream are preprocessed. The preprocessing comprises adaptive median filtering and Laplacian-based image enhancement. Adaptive median filtering replaces the center pixel with the median gray level of all pixels in a template whose size can be adjusted, removing impulse noise while limiting the smoothing of non-impulse content. Laplacian enhancement traverses the whole image and applies the Laplacian operator to each pixel's neighborhood gray values to obtain the pixel's new value.
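The two preprocessing steps can be sketched in NumPy as follows. This is a simplified illustration, not the patent's implementation: the patent's median filter is adaptive (the window grows as needed), whereas the sketch fixes a 3x3 window, and the 4-neighbour Laplacian kernel is an assumption.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication (fixed-window simplification)."""
    padded = np.pad(img, 1, mode="edge")
    # nine shifted views of the image, one per position in the 3x3 window
    windows = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

def laplacian_sharpen(img):
    """Sharpen with the 4-neighbour Laplacian: out = img - lap(img)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * padded[1:-1, 1:-1])
    return np.clip(img - lap, 0, 255)
```

On a frame with salt noise, the median filter removes the impulses, and the Laplacian step then restores edge contrast that smoothing would otherwise soften.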
S2. Mark the face position with the improved face detection algorithm.
Specifically, as shown in FIG. 2, the mean-hash fingerprint of the driver image to be detected is computed first. A mean-hash fingerprint is a string of binary digits computed from the image, and it can be used to characterize how similar two images are. During fatigue detection the driver's movements are generally small, so for nearby frames the similarity can be judged by comparing mean-hash fingerprints, and a caching mechanism can fetch the face bounding box directly from the storage module, speeding up detection.
The improved fast face detection algorithm proceeds as follows: shrink the input image to 8*8 and compute its mean gray level. Compare each pixel's gray value with the mean, setting 1 where it is greater and 0 where it is smaller, yielding a mean-hash fingerprint string hash_finger of 0s and 1s. If the cache module holds fingerprint data differing from hash_finger by at most 2 bits, output the face bounding box stored with that fingerprint directly; otherwise, output the bounding box produced by the AdaBoost algorithm and decide from the bit difference whether to save it: if the difference is 5 bits or more, do not save; if it is greater than 2 and less than 5, store hash_finger and the bounding box in the cache module. To manage cache capacity, the number of calls is counted for each hash fingerprint so that, when the cache is full, the least-called fingerprint and its bounding box can be deleted.
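The fingerprint and cache logic above can be sketched as follows. This is an illustrative sketch: the class and function names are inventions for this example, the 8x8 resize and the AdaBoost detector are stubbed out (the caller passes a `detect` callable), and the handling of an empty cache (store the first fingerprint) is an assumption the patent does not spell out.

```python
import numpy as np

def mean_hash(img8x8):
    """64-bit fingerprint: 1 where a pixel is at or above the image mean."""
    bits = (img8x8 >= img8x8.mean()).astype(int).ravel()
    return "".join(map(str, bits))

def hamming(a, b):
    """Number of differing bits between two fingerprint strings."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

class FaceBoxCache:
    def __init__(self, capacity=32):
        self.capacity = capacity
        self.entries = {}  # fingerprint -> [face_box, call_count]

    def lookup(self, finger, detect):
        """Return a face box, consulting the cache before running `detect`."""
        for stored, entry in self.entries.items():
            if hamming(stored, finger) <= 2:      # near-duplicate frame: reuse
                entry[1] += 1
                return entry[0]
        box = detect()                            # run the full detector
        # store when some cached fingerprint is within 5 bits (or the cache
        # is empty, an assumption); all >= 5 bits means do not store
        if not self.entries or any(hamming(s, finger) < 5 for s in self.entries):
            if len(self.entries) >= self.capacity:
                least = min(self.entries, key=lambda k: self.entries[k][1])
                del self.entries[least]           # evict least-called entry
            self.entries[finger] = [box, 1]
        return box
```

Consecutive frames of a nearly motionless driver hash to fingerprints a bit or two apart, so after the first detection the cache answers most lookups without running AdaBoost at all.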
S3. Obtain the positions of 68 facial landmarks with the improved landmark localization algorithm.
The traditional local binary features (LBF) algorithm takes an initial landmark model as input and drives it ever closer to the true positions through repeated regression. To improve detection accuracy and speed, the present invention optimizes the LBF algorithm with a different initialization strategy: instead of simply initializing with the standard mean face, the HOG-SVM algorithm first estimates the face orientation, and a matching landmark initialization model is then used.
Specifically, the HOG feature divides the target image into small cells, and the gradient histograms of the pixels in each cell are concatenated into a vector representing the image. HOG has no rotation or scale invariance, so it is much faster to compute; it is also sensitive to shape changes and represents the face contour well, making it particularly suited to face-orientation classification. The extracted HOG feature vectors are fed to the subsequent SVM, finally yielding the face-orientation classifier.
In this embodiment, five face orientations are used for training: frontal (no deflection), 22 degrees left, 45 degrees left, 22 degrees right, and 45 degrees right. Five folders named 0, -1, -2, 1, and 2 are created in the training dataset directory and the corresponding images are stored in them. During SVM training the folder name serves as the class label and the HOG feature vector as the SVM input, producing the face-orientation classifier.
As shown in FIG. 3, a landmark initialization model is selected according to the estimated face orientation. Then, as shown in FIG. 4, the trained random forests extract LBF features and the trained linear regressor's output updates the landmark positions until the maximum number of iterations is reached. Specifically, the random forest training proceeds as follows. The training set is split into several subsets, and each image in a subset carries 68 landmarks. Within a circle centered on each landmark, 500 pixels are sampled and the differences between pairs of them are computed. Using these differences as features, a threshold is chosen to split all images in the current subset into left and right subtrees, and the variance reduction before and after the split is computed; the threshold that maximizes this reduction becomes the final threshold of the split, and the corresponding feature the final feature. Splitting continues until the tree's maximum depth is reached. Once one landmark's decision tree is built, the next subset is taken and the steps repeated; in this way each landmark gets several trees forming a random forest. With 68 facial landmarks, there are 68 random forests.
To improve classification, the present invention no longer uses the raw pixel difference as the feature but a normalized value NOL(x, y), where x and y are two pixel values. After this normalization the feature is less sensitive to illumination changes; the computation cost is essentially unchanged, yet the classification result improves markedly.
The trained random forests are then used to extract local binary features. For each landmark of each image, the image necessarily falls into one leaf node, which is encoded as 1; all other leaves are encoded as 0. This yields the binary code of one tree, and concatenating the binary codes of all trees in that landmark's forest gives the LBF feature. Finally, all LBF features are combined into the feature map Φt.
During regression the position increment is the learning target for training the linear regressor Wt. At each cascade stage the LBF algorithm multiplies the linear regression matrix Wt by the feature mapping function Φt; from the landmark initialization model and the current landmark positions it obtains a position increment ΔS that corrects the current stage's positions St, i.e. St = St-1 + WtΦt(I, St-1). Regression and iteration then continue, learning by minimizing the objective function min_{Wt} Σi ||ΔSi - WtΦt(Ii, Si,t-1)||², where ΔSi is the ground-truth increment for training sample i. As the cascade deepens, the regressed landmarks approach the true positions ever more closely.
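One cascade stage, St = St-1 + Wt Φt(I, St-1), can be sketched numerically. The regressor and the sparse binary feature vector below are random stand-ins, not trained values; only the shapes and the update rule reflect the algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_feats = 68, 512            # 68 landmarks, assumed LBF length

S = rng.standard_normal(n_points * 2)                  # current shape (x,y flattened)
phi = (rng.random(n_feats) < 0.05).astype(float)       # sparse binary LBF vector
W = rng.standard_normal((n_points * 2, n_feats)) * 0.01  # stand-in stage regressor

delta = W @ phi                        # predicted shape increment Wt * Phi_t
S_next = S + delta                     # St = St-1 + Wt Phi_t(I, St-1)
```

Because phi is binary and sparse, the matrix-vector product reduces to summing a handful of columns of Wt, which is what makes LBF regression fast at run time.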
S4. Extract eye information and compute the head pose from the landmark positions, and use them as feature parameters for fatigue detection.
Specifically, the invention first computes the eye width-to-height ratio WHR from the eye width W and height H, based on the shape of the eye landmarks and edge detection.
WHR_init is the driver's initial eye width-to-height ratio while awake. Preferably, this embodiment regards the eye as closed when it is more than 80% shut, i.e. the eye is closed when WHR exceeds 3. The PERCLOS value is obtained as the ratio of closed-eye frames to the total frames per unit time. With a unit time of 30 seconds, the mild-fatigue threshold is set to 0.3 and the severe threshold to 0.5. A PERCLOS value above 0.5 indicates severe fatigue; below 0.3, the driver is normal; in between, the blink frequency is computed. If a preceding frame has WHR ≤ 3 while the current frame exceeds 3, one blink has occurred. The blink-frequency threshold is set to 5 blinks per 5 seconds: above this, blinking is too fast to be fatigue-induced and instead reflects reflexive rapid blinking under special conditions (such as strong wind or glare) that inflates the PERCLOS value, so the driver is judged normal; otherwise, mild fatigue driving is declared.
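Given a per-frame WHR series for one unit time, the PERCLOS value and blink count follow directly from the rules above (closed when WHR > 3; a blink is an open-to-closed transition). The function name and example series are illustrative.

```python
def perclos_and_blinks(whr_series, closed_thresh=3.0):
    """Return (PERCLOS value T = z / Z, blink count) for one unit time."""
    Z = len(whr_series)                                        # total frames
    z = sum(1 for w in whr_series if w > closed_thresh)        # closed frames
    blinks = sum(1 for prev, cur in zip(whr_series, whr_series[1:])
                 if prev <= closed_thresh < cur)               # open -> closed
    return z / Z, blinks
```

With T in hand, the embodiment compares it against 0.3 and 0.5, and in the intermediate band falls back on the 5-blinks-per-5-seconds rule to separate genuine fatigue from reflexive blinking.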
If these feature parameters show that the driver is not fatigued, the head pose angles are computed. After the landmark positions are obtained, the rotation and translation matrices are solved to map the N two-dimensional landmarks in the image to the corresponding N landmarks of a standard three-dimensional face model; this step involves regularizing the landmarks. Finally, the head pose angles α, β, and γ are computed from the rotation matrix R.
Here β is the vertical (pitch) angle of the head pose. Since a fatigued driver tends to keep the head lowered for a long time, the head-pose angle threshold is set to 20°; if β stays above the threshold for 3 consecutive seconds, severe fatigue is declared. α is the horizontal (yaw) angle and γ the front-back (roll) angle of the head pose.
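Recovering the angles from R can be sketched as below. The patent's exact decomposition formula is not legible in the source, so the standard Z-Y-X Euler extraction is used here as an assumption; the 20-degree / 3-second nod rule from the text is then applied to β.

```python
import numpy as np

def head_angles(R):
    """Euler angles (degrees) from a 3x3 rotation matrix, Z-Y-X convention
    (an assumed convention: alpha = yaw, beta = pitch, gamma = roll)."""
    alpha = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # horizontal (yaw)
    beta = np.degrees(np.arcsin(-R[2, 0]))             # vertical (pitch)
    gamma = np.degrees(np.arctan2(R[2, 1], R[2, 2]))   # front-back (roll)
    return alpha, beta, gamma

def is_nodding(betas, angle_limit=20.0, fps=30, seconds=3):
    """True if |beta| exceeds angle_limit for `seconds` of consecutive frames."""
    need = fps * seconds
    run = 0
    for b in betas:
        run = run + 1 if abs(b) > angle_limit else 0
        if run >= need:
            return True
    return False
```

In practice R would come from solving the 2D-to-3D correspondence (e.g. a PnP solver against the standard face model); here only the decomposition and the sustained-angle check are shown.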
S5. Determine the gaze direction of the human eye from the relative positions of the pupil and the Purkinje spots formed by the infrared light sources, and perform driver distraction detection.
Fatigue deepens gradually, and distraction is often the beginning of the fatigue process. The driver's eye features and head pose may both be normal while attention is nevertheless distracted, which likewise affects driving safety. The present invention treats distraction as a mild-fatigue indicator: if no fatigue is detected in step S4, distraction detection is performed. Specifically, for the displacement vector from a candidate point inside the pupil to a pupil boundary point and the gradient vector at that boundary point, the pupil point that maximizes the inner product of the two is the pupil center. After the pupil center is located, the positions of the Purkinje spots are searched for in its vicinity. The Purkinje spots are formed by the four infrared LED light sources distributed around the area in front of the driver and therefore have the properties of a projective space. The relative position of the pupil center with respect to the center of the rectangle enclosed by the four Purkinje spots is calculated, yielding the horizontal offset angle and the vertical offset angle η. The gaze direction is then corrected with the head-pose angles α and β, giving the final horizontal gaze angle θ_l and the final vertical gaze angle θ_v = η + β.
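The gradient inner-product pupil localization described above resembles the Timm-Barth objective. Below is a brute-force sketch under that interpretation; the edge-magnitude threshold and the dark-pupil-on-bright-sclera assumption are illustrative, not taken from the text.

```python
import numpy as np

def pupil_center(gray):
    """Find the point c maximising the mean squared inner product between the
    normalised displacement vectors (x_i - c) and the normalised image
    gradients g_i at strong-edge (pupil boundary) pixels. Brute-force over
    all pixels, which is fine for a small eye patch."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > 0.3 * mag.max()              # keep only strong edges
    ys, xs = np.nonzero(mask)
    g = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    h, w = gray.shape
    best, best_c = -1.0, (0, 0)
    for cy in range(h):
        for cx in range(w):
            d = np.stack([xs - cx, ys - cy], axis=1).astype(float)
            n = np.linalg.norm(d, axis=1)
            valid = n > 0
            dots = (d[valid] * g[valid]).sum(axis=1) / n[valid]
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best:
                best, best_c = score, (cx, cy)
    return best_c
```

For a dark pupil on a bright background the boundary gradients point radially outward, so they align with the displacement vectors only when c is at the true center.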
As shown in FIG. 5, the embodiment of the present invention divides the area in front of the driver into nine 3×3 regions, with the angular boundary of each region marked in the figure; the driver's gaze region is obtained from the angles θ_l and θ_v. During normal driving the line of sight is almost always directed straight ahead, and a gaze point falling on any region other than regions 1, 4 and 5 counts as gaze deviation. Prolonged gaze deviation indicates distraction, so the distraction detection threshold is defined as follows: if the gaze point stays outside regions 1, 4 and 5 for more than 3 seconds, the driver is distracted and is judged to be mildly fatigued.
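The region test can be sketched as below. The figure's actual angular boundaries are not reproduced in the text, so the ±10° grid lines, the row-major 1-9 numbering, and the 10 fps rate are assumptions; the safe set {1, 4, 5} and the 3-second rule follow the text.

```python
BOUND = 10.0              # assumed grid line at +/-10 degrees
SAFE_REGIONS = {1, 4, 5}  # gaze outside these regions counts as deviation

def gaze_region(theta_l, theta_v):
    """Map horizontal/vertical gaze angles (degrees) to a region 1..9,
    numbered left-to-right, top-to-bottom."""
    col = 0 if theta_l < -BOUND else (1 if theta_l <= BOUND else 2)
    row = 0 if theta_v > BOUND else (1 if theta_v >= -BOUND else 2)
    return row * 3 + col + 1

def is_distracted(regions, fps=10, duration_s=3.0):
    """Mild fatigue if the gaze stays outside the safe regions for more
    than duration_s seconds."""
    need = int(fps * duration_s)
    run = 0
    for r in regions:
        run = 0 if r in SAFE_REGIONS else run + 1
        if run > need:
            return True
    return False
```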
The embodiment of the present invention further provides a fatigue driving detection device, as shown in FIG. 6. It comprises an image acquisition module, a main control module, a storage module and an alarm module. Preferably, the image acquisition module uses an infrared CCD camera together with four infrared LED light sources; the main control module uses a Raspberry Pi 3B board; the storage module uses 1 GB of LPDDR2 SDRAM; and the alarm module includes an LED lamp, an alarm and a loudspeaker.
The present invention covers the entire detection flow of image acquisition, image preprocessing, face detection, facial feature point localization, eye feature extraction, head pose estimation, distraction detection and fatigue state judgment. According to the different values of the driver fatigue characteristic parameters, the fatigue state is divided into three levels: normal driving, mild fatigue and severe fatigue. From the detected multi-feature parameters (the PERCLOS value, blink frequency, head pose angles and gaze deviation), the driver's current fatigue level is determined and a corresponding warning level is applied, reducing traffic accidents caused by fatigued driving. The above are only preferred embodiments of the present invention and do not specifically limit its patent scope. Although the preferred embodiments are described in detail, those skilled in the art should understand that, within the inventive concept of the present invention, various changes in detail or structure may be made without departing from the scope defined by the claims.
The specific embodiments of the present invention further include:
In view of the fact that fatigue driving detection technology cannot achieve a good balance between accuracy and real-time performance in practical applications, and that the accuracy of facial features is affected by illumination changes and head pose angles, the present invention provides a fatigue driving detection method and device. The technical solution of the present invention achieves all-weather detection and enhanced robustness to glasses, face orientation and illumination changes. Distraction detection provides early warning in the initial stage of fatigue. Multiple features are combined to judge the degree of fatigue, yielding good results in both real-time performance and accuracy and successfully balancing the two.
The technical solution of the present invention to solve the above problems is a multi-feature fatigue driving detection method comprising the following steps:
S1. Acquire driving images of the driver with an infrared device.
S2. Locate the face position using an improved fast face detection algorithm.
S3. Obtain the coordinates of 68 facial feature points using a facial feature point localization algorithm.
S4. Extract eye information and calculate the head pose from the positions of the facial feature points, and use them as parameter features for fatigue driving detection.
S5. Determine the gaze direction of the human eye from the relative positions of the pupil and the Purkinje spots produced by the infrared LEDs, and perform driver distraction detection.
S6. Perform fatigue state detection by comparing the multiple fatigue features detected in real time in steps S4 and S5 with thresholds, the fatigue features including the driver's PERCLOS value, blink frequency, head pose angles and gaze region.
In step S1, the infrared device uses four infrared LED light sources together with a camera. One purpose is to acquire driver images while reducing interference caused by illumination changes; the other, in step S5, is that the four infrared light sources are used to determine the driver's gaze direction. The four infrared LED light sources are arranged around the front windshield of the vehicle.
When performing face detection in step S2, Haar features are extracted from positive and negative face samples and input to the AdaBoost algorithm to obtain a face detection classifier. To address the limited real-time performance of the traditional AdaBoost algorithm, an improved algorithm is proposed: a mean-hash algorithm is introduced to convert face feature information into hash fingerprint information, and a caching mechanism is added to store the hash fingerprints of similar images, thereby reducing detection time. First, the mean-hash fingerprint of the current input image is calculated; if the cache contains a hash fingerprint similar to it, the cached face bounding box is output directly. Otherwise, the AdaBoost-based face detection algorithm is used, and the current fingerprint and face bounding box are stored in the cache for similarity checks on the next frame.
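A minimal sketch of the mean-hash caching scheme, with a stand-in detector callback in place of the trained AdaBoost classifier. The 8×8 hash size is the usual average-hash choice and the Hamming-distance threshold is an assumption; the cache-hit/cache-miss flow follows the text.

```python
import numpy as np

HASH_SIDE = 8
MAX_HAMMING = 5   # assumed similarity threshold on the 64-bit fingerprints

def mean_hash(gray):
    """64-bit average hash: subsample to 8x8, threshold at the patch mean."""
    h, w = gray.shape
    ys = (np.arange(HASH_SIDE) * h) // HASH_SIDE
    xs = (np.arange(HASH_SIDE) * w) // HASH_SIDE
    small = gray[np.ix_(ys, xs)].astype(float)
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing fingerprint bits."""
    return int(np.count_nonzero(a != b))

class CachedFaceDetector:
    def __init__(self, detect_fn):
        self.detect_fn = detect_fn   # fallback: the expensive AdaBoost detector
        self.cache = []              # list of (fingerprint, face_box)

    def detect(self, gray):
        fp = mean_hash(gray)
        for bits, box in self.cache:
            if hamming(bits, fp) <= MAX_HAMMING:
                return box           # similar frame seen before: reuse its box
        box = self.detect_fn(gray)   # cache miss: run the full detector
        self.cache.append((fp, box))
        return box
```

Usage: wrap any per-frame detector; repeated near-identical frames then skip the expensive detection path entirely.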
In step S3, before the LBF algorithm is used for facial feature point localization, an extra step is added to increase the detection rate: the HOG-SVM algorithm is used to judge the face orientation, a different facial feature point initialization model is selected according to the orientation result, and the improved LBF algorithm then performs localization to obtain the positions of the 68 points. Specifically, the HOG features of the face image are extracted first, an SVM face orientation classifier is then applied, and a different initialization model is chosen for each orientation result. The LBF optimization algorithm then brings the initialization model ever closer to the accurate positions through repeated regression and iteration. The LBF optimization refers to the fact that, when training the LBF random forests, the difference of two pixel values near a feature point is no longer used directly as the random forest split feature; instead, the grey levels of the two pixels are normalized before being used as the split feature.
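The normalized pixel-difference split feature can be illustrated as follows. The text does not give the exact normalization formula, so the sum-of-magnitudes normalization used here is one plausible reading, shown against the classic raw difference; its benefit is invariance to a uniform brightness scaling of the eye patch.

```python
import numpy as np

def raw_diff_feature(img, p, q):
    """Classic LBF split feature: grey-level difference of two pixels
    sampled near a landmark (p, q are (row, col) index tuples)."""
    return float(img[p]) - float(img[q])

def normalized_diff_feature(img, p, q, eps=1e-6):
    """Assumed variant of the patent's tweak: normalise the pixel pair
    before differencing, making the split invariant to multiplying the
    patch by a constant (e.g. a global illumination change)."""
    a, b = float(img[p]), float(img[q])
    return (a - b) / (abs(a) + abs(b) + eps)
```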
The parameter features for fatigue driving detection in step S4 include eye closure, blink frequency, the PERCLOS parameter and the head pose angles. The eye image is obtained from the positions of the facial feature points; the parameter used to evaluate eye closure and blink frequency is the eye width/height ratio. Whether the eye is closed is judged by setting a ratio threshold, from which the blink frequency is counted. The PERCLOS parameter is obtained from the proportion of eye-closure time per unit time. A gradient-based algorithm is used to locate the pupil center. The head pose angles are obtained through the mapping between the 68 two-dimensional facial feature points and the three-dimensional standard face model.
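A sketch of the eye width/height ratio computed from the landmarks. The six-points-per-eye indices (36-41 for the left eye) follow the widely used 68-point layout, which the text does not spell out, so they are an assumption.

```python
import numpy as np

LEFT_EYE = [36, 37, 38, 39, 40, 41]   # assumed 68-point indices: corners at 36/39

def eye_whr(landmarks, eye=LEFT_EYE):
    """Eye width/height ratio from six eye landmarks given as a mapping
    of point index -> (x, y). Larger ratio means a more closed eye."""
    pts = np.asarray([landmarks[i] for i in eye], dtype=float)
    width = np.linalg.norm(pts[0] - pts[3])                 # corner to corner
    height = (np.linalg.norm(pts[1] - pts[5]) +
              np.linalg.norm(pts[2] - pts[4])) / 2.0        # mean lid opening
    return width / max(height, 1e-6)
```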
Step S5 first obtains the pupil position with the improved gradient-based pupil center localization algorithm and, using the properties of the mapping space, divides the area in front of the driver into nine parts. The relative position between the Purkinje spots formed by the four infrared light sources and the pupil is calculated and, combined with the head-pose correction angles, yields the gaze point; gaze deviation statistics then determine whether the driver's attention is distracted.
Step S6 detects fatigue from the multiple features obtained above. First, eye closure is judged from the eye width/height ratio, the PERCLOS value and blink frequency are updated, both are compared with their thresholds, and a corresponding warning is issued as appropriate. If no fatigue is detected, the head pose angles are calculated to judge whether the head has been lowered for a long time; if so, the state is judged to be severe fatigue. When all the above features indicate normal driving and the eyes are open, distraction is judged from gaze deviation: if the driver is distracted, the state is judged to be mild fatigue; otherwise it is normal driving. The fatigue standard parameters include the driver's initial eye width/height ratio, the first PERCLOS threshold, the second PERCLOS threshold, the blink frequency threshold, the head pose threshold and the gaze region distribution statistics. The fatigue level is divided into three grades; the driver's fatigue level is judged from the different manifestations of the above features and a corresponding warning is issued.
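The decision order of step S6 can be condensed into a single function. The threshold defaults mirror the values given earlier in the text (PERCLOS 0.3/0.5, one blink per second); the function signature and the boolean feature arguments are illustrative assumptions.

```python
def fatigue_level(perclos, blink_rate, head_down_3s, distracted_3s,
                  perclos_mild=0.3, perclos_severe=0.5, blink_max=1.0):
    """Combine the multi-feature checks in the order the text describes:
    eye features first, then head pose, then distraction."""
    if perclos > perclos_severe:
        return "severe"
    if perclos >= perclos_mild:
        # Intermediate PERCLOS: very fast blinking points to an external
        # stimulus (wind, glare) rather than fatigue.
        return "normal" if blink_rate > blink_max else "mild"
    if head_down_3s:                 # head lowered for 3 consecutive seconds
        return "severe"
    if distracted_3s:                # gaze off the safe regions for 3 seconds
        return "mild"
    return "normal"
```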
The present invention further provides a fatigue driving detection device.
A fatigue driving detection device comprises an image acquisition module, a main control module, a storage module and an alarm module, the main control module being electrically connected to the image acquisition module, the storage module and the alarm module respectively. The image acquisition module comprises a CCD camera mounted above the instrument panel housing and four infrared LED light sources arranged around the windshield. The main control module is a Raspberry Pi board with peripheral circuitry, the peripheral circuitry comprising a power supply module and a communication module. The alarm module comprises an LED lamp and an alarm. The storage module is an LPDDR2 SDRAM memory used to store the hash fingerprint information, the face bounding boxes corresponding to the hash fingerprints, and the fatigue detection standard parameters. The main control module outputs a control signal to the camera; the camera acquires driver images and feeds them to the main control module through a USB data interface; the main control module executes the fatigue driving detection program and obtains the aforementioned fatigue characteristic parameters in real time. It also reads the fatigue detection standard parameters from the storage module, derives the fatigue degree from the comparison of the two, and outputs control signals to the alarm module to make the LED lamp flash and the alarm buzz.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911258187.4A (CN111062292B) | 2019-12-10 | 2019-12-10 | Fatigue driving detection device and method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111062292A | 2020-04-24 |
| CN111062292B | 2022-07-29 |
Family

ID=70300366

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911258187.4A (CN111062292B, Expired - Fee Related) | Fatigue driving detection device and method | 2019-12-10 | 2019-12-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN111062292B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583585B (en) * | 2020-05-26 | 2021-12-31 | 苏州智华汽车电子有限公司 | Information fusion fatigue driving early warning method, system, device and medium |
CN111845736A (en) * | 2020-06-16 | 2020-10-30 | 江苏大学 | A vehicle collision warning system and control method triggered by distraction monitoring |
CN112163470B (en) * | 2020-09-11 | 2024-12-10 | 高新兴科技集团股份有限公司 | Fatigue state recognition method, system, and storage medium based on deep learning |
CN112733772B (en) * | 2021-01-18 | 2024-01-09 | 浙江大学 | Method and system for detecting real-time cognitive load and fatigue degree in warehouse picking task |
CN113034851A (en) * | 2021-03-11 | 2021-06-25 | 中铁工程装备集团有限公司 | Tunnel boring machine driver fatigue driving monitoring device and method |
CN113569785A (en) * | 2021-08-04 | 2021-10-29 | 上海汽车集团股份有限公司 | Driving state sensing method and device |
CN114155513A (en) * | 2021-12-07 | 2022-03-08 | 西安建筑科技大学 | A driving risk behavior monitoring method, system, device and readable storage medium based on face multi-parameter detection |
CN115272645A (en) * | 2022-09-29 | 2022-11-01 | 北京鹰瞳科技发展股份有限公司 | Multimodal data acquisition device and method for training central fatigue detection model |
CN116012822B (en) * | 2022-12-26 | 2024-01-30 | 无锡车联天下信息技术有限公司 | Fatigue driving identification method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106530623A (en) * | 2016-12-30 | 2017-03-22 | 南京理工大学 | Fatigue driving detection device and method |
CN109299709A (en) * | 2018-12-04 | 2019-02-01 | 中山大学 | Data recommendation method, device, server and client based on face recognition |
CN109325964A (en) * | 2018-08-17 | 2019-02-12 | 深圳市中电数通智慧安全科技股份有限公司 | A kind of face tracking methods, device and terminal |
CN110516734A (en) * | 2019-08-23 | 2019-11-29 | 腾讯科技(深圳)有限公司 | A kind of image matching method, device, equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8655029B2 (en) * | 2012-04-10 | 2014-02-18 | Seiko Epson Corporation | Hash-based face recognition system |
CN108423006A (en) * | 2018-02-02 | 2018-08-21 | 辽宁友邦网络科技有限公司 | A kind of auxiliary driving warning method and system |
CN109254654B (en) * | 2018-08-20 | 2022-02-01 | 杭州电子科技大学 | Driving fatigue feature extraction method combining PCA and PCANet |
2019-12-10: Application CN201911258187.4A filed in CN; granted as CN111062292B; current status: Expired - Fee Related.
Non-Patent Citations (2)

- Qi Dai et al., "A Bayesian Hashing approach and its application to face recognition", Neurocomputing, vol. 213, 2016-11-12, pp. 5-13. *
- 高树静 et al., "基于ZYNQ的优化Adaboost人脸检测" (Optimized AdaBoost face detection based on ZYNQ), 计算机工程与应用 (Computer Engineering and Applications), 2019-05-17, pp. 201-206. *
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220729 |