CN114332829A - Driver fatigue detection method based on multiple strategies - Google Patents

Driver fatigue detection method based on multiple strategies

Info

Publication number
CN114332829A
Authority
CN
China
Prior art keywords
fatigue
eyes
mouth
driver
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111479261.2A
Other languages
Chinese (zh)
Inventor
仲从建
李征
付本刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Aerospace Dawei Technology Co Ltd
Original Assignee
Jiangsu Aerospace Dawei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Aerospace Dawei Technology Co Ltd filed Critical Jiangsu Aerospace Dawei Technology Co Ltd
Priority to CN202111479261.2A priority Critical patent/CN114332829A/en
Priority to PCT/CN2022/081162 priority patent/WO2023103206A1/en
Publication of CN114332829A publication Critical patent/CN114332829A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of artificial intelligence and discloses a multi-strategy driver fatigue detection method comprising the steps of: calibrating a target region and inputting the relevant thresholds; segmenting the target-region image with an improved DeepLabV3+ algorithm to obtain the contours of the eyes and mouth; computing eye and mouth metrics from the contours and judging the eye and mouth states; and deciding, from an adaptive fatigue-judgment rule and the eye and mouth states, whether the driver is driving while fatigued. The invention captures eye and mouth information better and yields more accurate contours. Rectangle fitting of the segmented contours rejects interfering target information; misjudgments of closed eyes or yawning caused by factors such as a driver's small eyes or large mouth are effectively avoided; and the adaptive fatigue-judgment rule prevents misjudgments caused by incomplete information or underuse of the computed information.

Description

A Multi-Strategy Driver Fatigue Detection Method

Technical Field

The invention belongs to the technical field of artificial intelligence and in particular relates to a multi-strategy driver fatigue detection method that can effectively detect the driver's state and promote safe driving.

Background Art

With economic development and rising quality of life, the automobile has gradually become one of the main means of travel, and road safety has become a serious problem worldwide. A driver's skill and driving state directly affect the safety of the driver and of others: long trips, lack of sleep, and a monotonous environment can cause a dangerous drop in vigilance and even microsleep episodes, so that fatigued driving occurs unconsciously and becomes one of the main causes of traffic accidents. Detecting the driver's state and issuing early warnings therefore helps reduce the accident rate and make roads safer, and is of real practical significance.

With the rapid development of computer and artificial-intelligence technology, deep-learning algorithms have multiplied and their fields of application have kept expanding, driving rapid progress in intelligent driving and driver fatigue detection. Current driver-face detection methods can be divided into traditional algorithms and deep-learning algorithms; the latter have gradually become mainstream and achieve fairly good results. Among the deep-learning approaches, segmentation, detection, and recognition algorithms are increasingly applied to driver fatigue detection, for example judging whether the driver is fatigued from the degree of eye closure and of mouth opening, and then issuing a warning so that dangerous driving behavior is avoided.

Although existing algorithms achieve certain results, the complex cab environment and the special nature of the detection targets make false alarms easy to trigger, which disturbs the driver. The problems fall into three areas. First, when a deep-learning algorithm segments the eye and mouth contours, it is easily disturbed by the cab environment, and extraneous targets contaminate the segmentation result. Second, few segmentation algorithms target the cab scene, making it difficult to extract the features of the driver's eyes and mouth. Third, the fatigue-judgment rules adapt poorly: applied to drivers with small eyes or drivers wearing masks, they readily misjudge. A fatigue detection method that solves all three problems at once is therefore needed, to raise detection accuracy and promote road safety.

Summary of the Invention

To address the problems mentioned in the background art, the invention proposes a multi-strategy driver fatigue detection method. For the first problem, a first strategy is proposed: when the device is installed, the driver's head-movement range is calibrated, with the upper part of the cab seat as reference and the range over which the seat can vary as the calibrated region, and only this region is analyzed as input, reducing the interference of the complex environment with the target. For the second problem, the DeepLabV3+ algorithm is improved in light of the characteristics of eye and mouth segmentation so that it captures more target information and yields more accurate eye and mouth contours. For the third problem, first, drawing on the advantages of human-computer interaction, the judgment thresholds for closed eyes and yawning are input; then the proposed adaptive judgment rules are applied: the eye-closure or mouth-opening degree is computed according to whether a mask is worn, closed eyes and yawning are judged against the input thresholds, and at the same time the fatigue-judgment rules decide whether the driver is driving while fatigued and the corresponding result is output.

Specifically, the multi-strategy driver fatigue detection method proposed by the invention comprises the following steps:

calibrating a target region and inputting the relevant thresholds;

segmenting the target-region image with an improved DeepLabV3+ algorithm to obtain the contours of the eyes and mouth;

computing eye and mouth metrics from the contours and judging the eye and mouth states;

deciding, from an adaptive fatigue-judgment rule and the eye and mouth states, whether the driver is driving while fatigued.

Further, calibrating the target region and inputting the relevant thresholds comprises:

when the device is installed, calibrating, through human-computer interaction, the cab seat and the driver's head-movement area, and using this area as the input for all subsequent analysis;

computing, from the driver's facial characteristics, an eye-closure threshold and a yawn threshold used to judge the states of the driver's eyes and mouth.
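
As an illustration only, a minimal sketch of this calibration step follows, assuming an OpenCV-based installation tool; the window name, prompts, and helper names are hypothetical and not part of the patent.

```python
import cv2

def calibrate_target_region(frame):
    # The installer drags a box over the seat back / head-movement area;
    # all later analysis is restricted to this region (cf. Fig. 2).
    x, y, w, h = cv2.selectROI("calibrate head region", frame, showCrosshair=True)
    cv2.destroyWindow("calibrate head region")
    return x, y, w, h

def input_thresholds():
    # Per-driver thresholds entered at installation time; how the patent
    # derives them from the driver's face is shown only in Figs. 6 and 7.
    alpha = float(input("eye-closure threshold alpha: "))
    beta = float(input("mouth-opening threshold beta: "))
    return alpha, beta
```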

Further, segmenting the target-region image with the improved DeepLabV3+ algorithm comprises the following steps:

analyzing the input image and extracting features with a pre-trained ResNet-101 network;

improving the original encoding module in light of the characteristics of the eyes and mouth: convolutional layers are added and parameter values modified to construct a new atrous-convolution structure, so that the new encoding module captures richer target information;

obtaining the specific target contours, i.e., the eye and mouth contours, through the decoding module.
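
The patent's modified network itself is not public; purely as a stand-in, the sketch below uses torchvision's stock deeplabv3_resnet101 to show where the three stages (backbone feature extraction, encoding, and decoding) sit in one inference call. The three-class labeling (background, eye, mouth) is an assumption.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# Stock DeepLabV3/ResNet-101 as a stand-in for the patent's improved
# DeepLabV3+; num_classes=3 assumes labels background=0, eye=1, mouth=2.
model = deeplabv3_resnet101(weights=None, num_classes=3).eval()

@torch.no_grad()
def segment(roi):
    """roi: normalized float tensor of shape (1, 3, H, W), cropped to the
    calibrated region. Returns an (H, W) integer label map."""
    logits = model(roi)["out"]              # backbone + encoder + decoder
    return logits.argmax(dim=1).squeeze(0)  # per-pixel class labels
```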

Further, computing the eye and mouth metrics and judging the eye and mouth states comprises:

first fitting a rectangle to each obtained contour to get its height and width;

then computing the eye-closure degree and mouth-opening degree, tuned against practical use and driver feedback, as:

$$b=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the eye contours)},\qquad o=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the mouth contours)}$$

where b is the eye-closure degree, o is the mouth-opening degree, h is the contour height, w is the contour width, and i is the number of contours of the same type;

finally, judging the eye and mouth states against the input thresholds according to the rule:

$$\text{eye state}=\begin{cases}\text{closed},&b<\alpha\\\text{open},&b\ge\alpha\end{cases}\qquad\text{mouth state}=\begin{cases}\text{yawning},&o>\beta\\\text{normal},&o\le\beta\end{cases}$$

where α is the driver's eye-closure threshold (a closure degree below α counts as closed eyes) and β is the driver's mouth-opening threshold (an opening degree above β counts as a yawn).
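
A minimal sketch of the rectangle fitting and state judgment, assuming OpenCV and a NumPy label mask with one class per feature; the class ids and the small-area filter for rejecting interfering fragments are assumptions.

```python
import cv2
import numpy as np

def openness(mask, class_id, min_area=20):
    # Mean height/width ratio over the fitted rectangles of one class
    # (i contours of the same type: i = 2 for eyes, i = 1 for the mouth).
    binary = (mask == class_id).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ratios = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)    # rectangle fitting (cf. Fig. 5)
        if w > 0 and w * h >= min_area:     # drop interfering fragments
            ratios.append(h / w)
    return sum(ratios) / len(ratios) if ratios else None

def judge_states(mask, alpha, beta, eye_id=1, mouth_id=2):
    b = openness(mask, eye_id)      # eye-closure degree
    o = openness(mask, mouth_id)    # mouth-opening degree (None if masked)
    eye_closed = b is not None and b < alpha
    yawning = o is not None and o > beta
    return eye_closed, yawning, o is not None
```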

Further, the adaptive fatigue-judgment rule is:

when a mask is worn, the fatigue state is judged from the eye fatigue degree alone; when no mask is worn, the fatigue state is judged jointly from the eye fatigue degree and the fatigue degree computed from the number of yawns.

Further, the eye fatigue degree is computed as:

$$p_e=\frac{T_e}{T_{all}}$$

where p_e is the fatigue degree computed from the eye state, T_e is the number of frames within the detection period in which the eyes are closed, and T_all is the total number of frames in the detection period.

Further, the fatigue degree computed from the number of yawns is:

$$p_m=\frac{T_m}{T_{all}}$$

where p_m is the fatigue degree computed from yawning, T_m is the number of yawning frames within the detection period, and T_all is the total number of frames in the detection period.

Further, when no mask is worn, the fatigue state judged jointly from the eye fatigue degree and the yawn-based fatigue degree is:

$$\text{state}=\begin{cases}\text{fatigued},&p_e\ge\theta_e\ \text{or}\ p_m\ge\theta_m\\\text{normal},&\text{otherwise}\end{cases}$$

where p_e is the eye fatigue degree, p_m is the fatigue degree computed from the number of yawns, and θ_e and θ_m are the corresponding fatigue thresholds.
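
A sketch of the two fatigue degrees and the adaptive choice between them; θ_e and θ_m are passed in as parameters here, since the patent fixes concrete values only in the embodiment below.

```python
def fatigue_degrees(closed_frames, yawn_frames, total_frames):
    # p_e = T_e / T_all and p_m = T_m / T_all over one detection period.
    return closed_frames / total_frames, yawn_frames / total_frames

def is_fatigued(p_e, p_m, mouth_visible, theta_e, theta_m):
    # Adaptive rule: eye fatigue alone when a mask hides the mouth,
    # otherwise eye fatigue OR yawn fatigue.
    if not mouth_visible:                    # mask worn: single factor
        return p_e >= theta_e
    return p_e >= theta_e or p_m >= theta_m  # no mask: multi-factor
```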

Compared with the prior art, the invention has the following advantages:

First, the invention calibrates the driver's head-movement area and inputs the judgment thresholds through human-computer interaction, effectively reducing the impact of the complex cab environment on algorithm performance; at the same time, thresholds entered for the individual driver ensure that the judgment rules suit different drivers.

Second, by analyzing eye and mouth features, the invention improves the DeepLabV3+ algorithm so that it captures eye and mouth information better, effectively raising segmentation accuracy and yielding more accurate contours.

Third, the invention fits rectangles to the segmented contours, rejects interfering target information, computes the contour height and width, and proposes a new way of computing the eye-closure and mouth-opening degrees; combined with the input judgment thresholds, this effectively avoids misjudging closed eyes or yawns caused by factors such as a driver's small eyes or large mouth.

Fourth, the invention proposes an adaptive fatigue-judgment rule based on whether a mask is worn: a different rule is selected according to whether mouth information is detected, allowing both single-factor and combined multi-factor judgment and avoiding misjudgments caused by incomplete information or underuse of the computed information.

Brief Description of the Drawings

Fig. 1 is the flow chart of the invention;

Fig. 2(a) shows the real-time imaging area of the DSM camera; Fig. 2(b) shows the calibrated region, with the upper part of the cab seat as reference, which serves as the input image for the next processing step;

Fig. 3(a) shows the encoding module of DeepLabV3+; Fig. 3(b) shows the encoding module of the DeepLabV3+ algorithm as improved after analyzing the eye and mouth features;

Fig. 4 compares segmentation performance: Fig. 4(a) is the original image, Fig. 4(b) the UNet segmentation result, Fig. 4(c) the DeepLabV3+ segmentation result, and Fig. 4(d) the segmentation result of the improved algorithm;

Fig. 5 illustrates rectangle fitting of the segmented contours and the determination of height and width: the height of the fitted rectangle is taken as the contour height and its width as the contour width;

Fig. 6 illustrates selection of the eye-closure threshold: Fig. 6(a) shows the eye in its normal state, Fig. 6(b) the segmentation of the normal state, Fig. 6(c) the state at the closure threshold, and Fig. 6(d) the segmentation at the closure threshold;

Fig. 7 illustrates selection of the mouth yawn threshold: Fig. 7(a) shows the mouth yawning, Fig. 7(b) the segmentation of the yawning state, Fig. 7(c) the state at the yawn threshold, and Fig. 7(d) the segmentation at the yawn threshold.

Detailed Description of the Embodiments

The invention is further described below with reference to the drawings, but is not limited by them in any way; any transformation or substitution made on the basis of the teachings of the invention falls within its scope of protection.

Referring to Fig. 1, the specific steps of an embodiment of the invention are as follows:

(1) A DSM camera is installed in the vehicle cab, and the target region is calibrated and the judgment thresholds computed through human-computer interaction, e.g. the eye-closure threshold α and the mouth-opening threshold β; see Figs. 2, 6, and 7.

(2) The improved DeepLabV3+ algorithm is applied to segment the input image and extract the eye and mouth contours, in three stages:

First, the ResNet-101 network is used to extract image features; the encoding module then gathers rich target information; finally, the decoding module produces the corresponding target-contour image, and the segmented contour image is output. Combining the feature information of the segmented eyes and mouth, the encoder module is improved on the basis of DeepLabV3+: as shown in Fig. 3, two feature-fusion layers, i.e., two 1x1 convolutional layers, are added, which uses the target information more effectively and raises the final segmentation accuracy. ResNet is short for Residual Network, widely used in object classification and as part of the classic backbone networks of computer-vision tasks, with ResNet-101 a classic instance; ResNet is common knowledge in the field and is not described further here. DeepLabV3+ is an influential branch of semantic segmentation and is likewise not described further.
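
The exact layer configuration of the improved encoder appears only in Fig. 3(b); the sketch below is one plausible PyTorch reading of it, a standard ASPP-style branch bank followed by the two added 1x1 feature-fusion layers. The channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class ImprovedEncoder(nn.Module):
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # DeepLabV3+-style branches: one 1x1 conv plus parallel atrous
        # 3x3 convs at several dilation rates.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates])
        n = len(self.branches) * out_ch
        self.fuse1 = nn.Conv2d(n, out_ch, 1)       # added fusion layer 1
        self.fuse2 = nn.Conv2d(out_ch, out_ch, 1)  # added fusion layer 2

    def forward(self, x):  # x: ResNet-101 feature map, e.g. (N, 2048, H, W)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse2(torch.relu(self.fuse1(feats)))
```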

(3) Closed eyes and yawning are judged as follows:

(3a) A rectangle is fitted to each output contour; see Fig. 5.

(3b) The eye-closure and mouth-opening degrees are computed and compared with the input thresholds to decide whether the eyes are closed or the driver is yawning, using:

$$b=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the eye contours)},\qquad o=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the mouth contour)}$$

where b is the eye-closure degree, o the mouth-opening degree, h the contour height, w the contour width, and i the number of contours of the same type, e.g. 2 for the eyes and 1 for the mouth.

(3c) Closed eyes and yawning are judged from the input eye-closure threshold α and yawn threshold β according to:

$$\text{eye state}=\begin{cases}\text{closed},&b<\alpha\\\text{open},&b\ge\alpha\end{cases}\qquad\text{mouth state}=\begin{cases}\text{yawning},&o>\beta\\\text{normal},&o\le\beta\end{cases}$$

where α is the driver's eye-closure threshold (a closure degree below α counts as closed eyes) and β is the driver's mouth-opening threshold (an opening degree above β counts as a yawn). Both thresholds are determined experimentally.

(4) The applicable judgment rule is selected to decide whether the driver is driving while fatigued, as follows:

(4a) When a mask is worn, only eye-closure information is available, and the fatigue degree is computed from the eyes alone:

$$p_e=\frac{T_e}{T_{all}}$$

where p_e is the fatigue degree computed from the eye state, T_e is the number of frames within the detection period in which the eyes are closed, and T_all is the total number of frames in the detection period.

(4b) When no mask is worn, closed eyes and yawning are judged jointly, with the yawn-based fatigue degree computed as:

$$p_m=\frac{T_m}{T_{all}}$$

where p_m is the fatigue degree computed from yawning, T_m is the number of yawning frames within the detection period, and T_all is the total number of frames in the detection period.

(4c) Preferably, when only the eyes are available, the detection period is set to 60 frames, and 18 closed-eye frames count as fatigue, i.e., p_e ≥ 0.3.

When no mask is worn, the detection period is set to 180 frames, and 48 yawning frames or 60 closed-eye frames count as fatigue. The fatigue-judgment rule without a mask is therefore:

$$\text{state}=\begin{cases}\text{fatigued},&p_e\ge\tfrac{60}{180}\ \text{or}\ p_m\ge\tfrac{48}{180}\\\text{normal},&\text{otherwise}\end{cases}$$

The thresholds in the fatigue-judgment rules of this embodiment may also take other values; the invention does not restrict them.
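
Putting the embodiment's numbers together, a sliding-window monitor might look like the sketch below; the deque-based window and per-frame boolean inputs are implementation choices, not part of the patent.

```python
from collections import deque

class FatigueMonitor:
    # With a mask: 18 closed-eye frames in a 60-frame period (p_e >= 0.3).
    # Without: 60 closed or 48 yawning frames in a 180-frame period.
    def __init__(self, masked):
        self.masked = masked
        self.window = deque(maxlen=60 if masked else 180)

    def update(self, eye_closed, yawning):
        self.window.append((eye_closed, yawning))
        if len(self.window) < self.window.maxlen:
            return False                # detection period not yet complete
        closed = sum(e for e, _ in self.window)
        yawns = sum(y for _, y in self.window)
        if self.masked:
            return closed >= 18
        return closed >= 60 or yawns >= 48
```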

(5) If the state is fatigued, a warning is issued; otherwise the method returns to step (2) and detection continues.

Example

The effect of the invention is further illustrated by simulation of the following example.

1. Simulation conditions and source of the examples:

To avoid disturbing road traffic for the demonstration, the required material was captured in the laboratory or on an internal closed road, and all steps were completed on the same computing device in the same environment; the training-parameter settings and training data were likewise identical for all algorithms.

2. Simulation content

First, the target region was calibrated and the judgment thresholds input as in Fig. 2. Then UNet, DeepLabV3+, and the improved algorithm were used to segment the example images and obtain the eye and mouth contours; the results are shown in Fig. 4, where Fig. 4(a) is the input image, Fig. 4(b) the UNet result, Fig. 4(c) the DeepLabV3+ result, and Fig. 4(d) the result of the improved algorithm. The segmentation results comprise the eye-region and mouth-region contours, while the UNet result also includes eyebrows, glasses, and other interference. Finally, rectangles were fitted to the contours, the closure and opening degrees were computed, and fatigue was judged according to the selected rule.

3. Analysis of results

To measure the performance of the improved algorithm more directly, the Mean Intersection over Union (MIoU) is used to analyze the segmentation results quantitatively:

$$MIoU=\frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k}p_{ij}+\sum_{j=0}^{k}p_{ji}-p_{ii}}$$

where p_ij is the number of pixels whose true class is i and whose predicted class is j, k+1 is the number of classes, and p_ii is the number of pixels whose true and predicted class are both i; the larger the value, the better the segmentation.
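
For reference, MIoU can be computed from a confusion matrix as in this short sketch (NumPy; integer label maps assumed):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Accumulate the confusion matrix: cm[i, j] = p_ij, the number of
    # pixels with true class i predicted as class j.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        cm[t, p] += 1
    inter = np.diag(cm)                              # p_ii
    union = cm.sum(axis=1) + cm.sum(axis=0) - inter
    valid = union > 0                                # skip absent classes
    return float((inter[valid] / union[valid]).mean())
```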

As Fig. 4 shows, the improved algorithm clearly outperforms UNet, particularly in segmenting the eye region, where it avoids interference from non-target regions. The segmentation of DeepLabV3+ and that of the improved algorithm are close, but in its details the improved result is closer to the input image. For a better quantitative comparison, under identical conditions the MIoU of the DeepLabV3+ algorithm is 0.864 while that of the improved algorithm is 0.8706, so the improved algorithm achieves the best result. Referring to Fig. 5 and the judgment rules, the segmentation result directly affects the computation of the closure and opening degrees and in turn the judgment itself; an excellent segmentation result therefore helps produce an accurate fatigue judgment.

As used herein, the word "preferred" means serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as more advantageous than other aspects or designs; rather, the word is intended to present a concept in a concrete way. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or"; that is, unless specified otherwise or clear from context, "X uses A or B" naturally includes any of the permutations: if X uses A, X uses B, or X uses both A and B, then "X uses A or B" is satisfied in any of the foregoing cases.

Furthermore, although the disclosure has been shown and described with respect to one or more implementations, equivalent variations and modifications will occur to those skilled in the art upon reading and understanding this specification and the drawings. The disclosure includes all such modifications and variations and is limited only by the scope of the appended claims. In particular, with regard to the various functions performed by the components described above (e.g., elements), the terms used to describe such components are intended to correspond to any component that performs the specified function of the described component (i.e., that is functionally equivalent), even if not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations shown herein, unless indicated otherwise. In addition, although a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more features of the other implementations as may be desired and advantageous for a given or particular application. Moreover, to the extent that the terms "includes", "has", "contains", or variants thereof are used in the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".

The functional units in the embodiments of the invention may be integrated into one processing module, may exist physically as separate units, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like. Each of the above apparatuses or systems may execute the storage method of the corresponding method embodiment.

In summary, the above embodiment is one implementation of the invention, but the implementations of the invention are not limited by it; any change, modification, substitution, combination, or simplification that departs from the spirit and principle of the invention shall be an equivalent replacement and falls within the scope of protection of the invention.

Claims (8)

1. A multi-strategy driver fatigue detection method, characterized by comprising the following steps:
calibrating a target region and inputting the relevant thresholds;
segmenting the target-region image with an improved DeepLabV3+ algorithm to obtain the contours of the eyes and mouth;
computing eye and mouth metrics from the contours and judging the eye and mouth states;
deciding, from an adaptive fatigue-judgment rule and the eye and mouth states, whether the driver is driving while fatigued.

2. The multi-strategy driver fatigue detection method of claim 1, characterized in that calibrating the target region and inputting the relevant thresholds comprises:
when the device is installed, calibrating, through human-computer interaction, the cab seat and the driver's head-movement area, and using this area as the input for subsequent analysis;
computing, from the driver's facial characteristics, an eye-closure threshold and a yawn threshold used to judge the states of the driver's eyes and mouth.

3. The multi-strategy driver fatigue detection method of claim 1, characterized in that segmenting the target-region image with the improved DeepLabV3+ algorithm comprises the following steps:
analyzing the input image and extracting features with a pre-trained ResNet-101 network;
improving the original encoding module in light of the characteristics of the eyes and mouth: convolutional layers are added and parameter values modified to construct a new atrous-convolution structure, so that the new encoding module captures richer target information;
obtaining the specific target contours, i.e., the eye and mouth contours, through the decoding module.

4. The multi-strategy driver fatigue detection method of claim 1, characterized in that computing the eye and mouth metrics and judging the eye and mouth states comprises:
first fitting a rectangle to each obtained contour to get its height and width;
then computing the eye-closure degree and mouth-opening degree, tuned against practical use and driver feedback, as

$$b=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the eye contours)},\qquad o=\frac{1}{i}\sum_{j=1}^{i}\frac{h_j}{w_j}\quad\text{(over the mouth contour)}$$

where b is the eye-closure degree, o is the mouth-opening degree, h is the contour height, w is the contour width, and i is the number of contours of the same type;
finally, judging the eye and mouth states against the input thresholds according to the rule

$$\text{eye state}=\begin{cases}\text{closed},&b<\alpha\\\text{open},&b\ge\alpha\end{cases}\qquad\text{mouth state}=\begin{cases}\text{yawning},&o>\beta\\\text{normal},&o\le\beta\end{cases}$$

where α is the driver's eye-closure threshold (a closure degree below α counts as closed eyes) and β is the driver's mouth-opening threshold (an opening degree above β counts as a yawn).

5. The multi-strategy driver fatigue detection method of claim 1, characterized in that the adaptive fatigue-judgment rule is:
when a mask is worn, the fatigue state is judged from the eye fatigue degree alone; when no mask is worn, the fatigue state is judged jointly from the eye fatigue degree and the fatigue degree computed from the number of yawns.

6. The multi-strategy driver fatigue detection method of claim 5, characterized in that the eye fatigue degree is computed as

$$p_e=\frac{T_e}{T_{all}}$$

where p_e is the fatigue degree computed from the eye state, T_e is the number of frames within the detection period in which the eyes are closed, and T_all is the total number of frames in the detection period.

7. The multi-strategy driver fatigue detection method of claim 5, characterized in that the fatigue degree computed from the number of yawns is

$$p_m=\frac{T_m}{T_{all}}$$

where p_m is the fatigue degree computed from yawning, T_m is the number of yawning frames within the detection period, and T_all is the total number of frames in the detection period.

8. The multi-strategy driver fatigue detection method of claim 5, characterized in that, when no mask is worn, the fatigue state judged jointly from the eye fatigue degree and the yawn-based fatigue degree is

$$\text{state}=\begin{cases}\text{fatigued},&p_e\ge\theta_e\ \text{or}\ p_m\ge\theta_m\\\text{normal},&\text{otherwise}\end{cases}$$

where p_e is the eye fatigue degree, p_m is the fatigue degree computed from the number of yawns, and θ_e and θ_m are the corresponding fatigue thresholds.
CN202111479261.2A 2021-12-06 2021-12-06 Driver fatigue detection method based on multiple strategies Pending CN114332829A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111479261.2A CN114332829A (en) 2021-12-06 2021-12-06 Driver fatigue detection method based on multiple strategies
PCT/CN2022/081162 WO2023103206A1 (en) 2021-12-06 2022-03-16 Driver fatigue detection method based on multiple strategies

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111479261.2A CN114332829A (en) 2021-12-06 2021-12-06 Driver fatigue detection method based on multiple strategies

Publications (1)

Publication Number Publication Date
CN114332829A true CN114332829A (en) 2022-04-12

Family

ID=81049306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111479261.2A Pending CN114332829A (en) 2021-12-06 2021-12-06 Driver fatigue detection method based on multiple strategies

Country Status (2)

Country Link
CN (1) CN114332829A (en)
WO (1) WO2023103206A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118279878A (en) * 2024-03-01 2024-07-02 杭州圆点科技有限公司 Multi-mode physiological information fusion vehicle-mounted driver fatigue state intelligent recognition method and system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115894A (en) * 2023-10-24 2023-11-24 吉林省田车科技有限公司 Non-contact driver fatigue state analysis method, device and equipment
CN117341715B (en) * 2023-12-05 2024-02-09 山东航天九通车联网有限公司 Vehicle driving safety early warning method based on joint self-checking
CN118230296B (en) * 2024-03-21 2024-11-05 无锡学院 A lightweight method for detecting and tracking fatigue driving
CN118494499B (en) * 2024-07-17 2024-10-22 武汉车凌智联科技有限公司 Fatigue driving detection reminding system based on camera
CN119169596B (en) * 2024-11-18 2025-02-11 武汉展为物联科技有限公司 A method for detecting driver fatigue in logistics transportation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN107679468A (en) * 2017-09-19 2018-02-09 浙江师范大学 A kind of embedded computer vision detects fatigue driving method and device
CN109934199A (en) * 2019-03-22 2019-06-25 扬州大学 A method and system for driver fatigue detection based on computer vision
CN111368580A (en) * 2018-12-25 2020-07-03 北京入思技术有限公司 Fatigue state detection method and device based on video analysis
CN112967271A (en) * 2021-03-25 2021-06-15 湖南大学 Casting surface defect identification method based on improved DeepLabv3+ network model

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106781282A (en) * 2016-12-29 2017-05-31 天津中科智能识别产业技术研究院有限公司 A kind of intelligent travelling crane driver fatigue early warning system
CN108294759A (en) * 2017-01-13 2018-07-20 天津工业大学 A kind of Driver Fatigue Detection based on CNN Eye state recognitions
CN111753674A (en) * 2020-06-05 2020-10-09 广东海洋大学 A detection and recognition method of fatigue driving based on deep learning
CN113343926A (en) * 2021-07-01 2021-09-03 南京信息工程大学 Driver fatigue detection method based on convolutional neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN107679468A (en) * 2017-09-19 2018-02-09 浙江师范大学 A kind of embedded computer vision detects fatigue driving method and device
CN111368580A (en) * 2018-12-25 2020-07-03 北京入思技术有限公司 Fatigue state detection method and device based on video analysis
CN109934199A (en) * 2019-03-22 2019-06-25 扬州大学 A method and system for driver fatigue detection based on computer vision
CN112967271A (en) * 2021-03-25 2021-06-15 湖南大学 Casting surface defect identification method based on improved DeepLabv3+ network model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIANG-CHIEH CHEN et al.: "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation", Proceedings of the European Conference on Computer Vision (ECCV), 24 August 2018, page 3 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118279878A (en) * 2024-03-01 2024-07-02 杭州圆点科技有限公司 Multi-mode physiological information fusion vehicle-mounted driver fatigue state intelligent recognition method and system

Also Published As

Publication number Publication date
WO2023103206A1 (en) 2023-06-15

Similar Documents

Publication Publication Date Title
CN114332829A (en) Driver fatigue detection method based on multiple strategies
CN112633280B (en) A method and system for generating an adversarial sample
CN105286802B (en) Driver Fatigue Detection based on video information
Yang et al. All in one network for driver attention monitoring
CN111753674A (en) A detection and recognition method of fatigue driving based on deep learning
CN112793576B (en) Lane change decision method and system based on rule and machine learning fusion
US20210303732A1 (en) Measuring the sensitivity of neural network image classifiers against adversarial attacks
CN118238847B (en) Autonomous lane change decision planning method and system adaptive to different driving styles and road surface environments
CN111626272A (en) Driver fatigue monitoring system based on deep learning
JP2009096365A (en) Risk recognition system
CN114168940A (en) Patch attack resisting method for vehicle target detection model
CN117325865A (en) Intelligent vehicle lane change decision method and system for LSTM track prediction
CN115690750A (en) Driver distraction detection method and device
Islam et al. Enhancing longitudinal velocity control with attention mechanism-based deep deterministic policy gradient (DDPG) for safety and comfort
Vinoth et al. Multi-sensor fusion and segmentation for autonomous vehicle multi-object tracking using deep Q networks
Ilievski Wisebench: A motion planning benchmarking framework for autonomous vehicles
CN119600671A (en) Display method and system for vehicle-mounted LED display screen
CN119418583A (en) Intelligent driving skill training method and system based on behavior cloning and reinforcement learning
KR20210022891A (en) Lane keeping method and apparatus thereof
Caballero et al. Some statistical challenges in automated driving systems
CN118279878B (en) Multi-mode physiological information fusion vehicle-mounted driver fatigue state intelligent recognition method
Balal et al. Comparative evaluation of fuzzy inference system, support vector machine and multilayer feed-forward neural network in making discretionary lane changing decisions
Rasidescu et al. Socially Intelligent Path-planning for Autonomous Vehicles Using Type-2 Fuzzy Estimated Social Psychology Models
CN117416364A (en) Driving-related augmented virtual fields
Fasanmade et al. Context-aware quantitative risk assessment machine learning model for drivers distraction

Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination