CN115937830A - Special vehicle-oriented driver fatigue detection method - Google Patents

Special vehicle-oriented driver fatigue detection method

Info

Publication number
CN115937830A
Authority
CN
China
Prior art keywords
driver
fatigue
mouth
model
abnormal event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211494466.2A
Other languages
Chinese (zh)
Inventor
胡海苗
叶灵枫
龚轩
李明竹
郑彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Innovation Research Institute of Beihang University
Original Assignee
Hangzhou Innovation Research Institute of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Innovation Research Institute of Beihang University filed Critical Hangzhou Innovation Research Institute of Beihang University
Priority to CN202211494466.2A
Publication of CN115937830A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a driver fatigue detection method oriented to special vehicles, which realizes both early warning and recognition of the driver's fatigue state and comprises the following steps: a behavior response time prediction model is built from the driver's response periods to abnormal events and the response period for the next abnormal event is estimated, realizing early warning of the fatigue state; and a face attribute recognition model identifies whether the driver's face, eye and mouth regions are occluded, with the corresponding recognition method executed according to the recognition result, realizing recognition of the fatigue state. The invention advances the detection time window of fatigue events, warns of the driver's fatigue state at an early stage, and can perform fatigue detection for drivers wearing special equipment, thereby reducing the accident rate.

Description

Special vehicle-oriented driver fatigue detection method
Technical Field
The invention relates to the field of safe driving of automobiles, in particular to a driver fatigue detection method for special vehicles.
Background
Fatigue driving impairs driving safety and is one of the main causes of traffic accidents. A driver who drives for a long time or rests insufficiently enters a fatigue state in which physical reaction ability declines and control over the vehicle weakens; outwardly this appears as inattention and an inability to judge and handle sudden abnormal situations accurately, creating driving hazards and endangering traffic safety.
Driver fatigue detection technology is mainly studied along two lines, contact and non-contact. Contact methods require the driver to wear additional sensing equipment that acquires physiological data, and judge the fatigue state by analyzing parameters such as heart rate, blood oxygen concentration, pulse, respiratory rate, electromyography and brain waves. Non-contact methods judge the fatigue state either by analyzing behavioral features from camera images of the driver or by collecting vehicle running information: the behavioral features are mainly eye-closing and yawning behaviors judged from facial images of the driver, while the vehicle running information mainly includes running speed, brake-pedal force, steering-wheel grip force and total accumulated driving time. In general, contact techniques detect fatigue more accurately than non-contact ones, but the required physiological acquisition equipment is usually expensive and interferes with the driver's driving habits, so they are ill-suited to practical application.
At present, because vehicle running information is difficult to obtain, non-contact fatigue detection based on the driver's facial image features is the most widely used approach in ordinary vehicles, with the driver's eye-closing and yawning behaviors taken as the key fatigue indicators. This approach, however, has obvious shortcomings, especially when applied to special vehicles. On the one hand, the nature of special vehicles requires drivers to wear helmets, masks and similar equipment, so the face is heavily occluded and eye and mouth features cannot be obtained. On the other hand, yawning indicates a deep fatigue state and cannot satisfy the application scenario of special vehicles, so the fatigue detection time window for special-vehicle drivers must be advanced to give early warning of the fatigue state. Current fatigue early warning mainly relies on accumulated driving time, but the time to fatigue varies with road conditions, individual differences among drivers and mental state, which easily causes false alarms.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a method and a device for detecting the fatigue of special-vehicle drivers, so as to realize early warning and detection of driver fatigue in special vehicles.
In a first aspect, an embodiment of the present invention provides a driver fatigue early warning method, including:
detecting an abnormal event during the running of the vehicle, and recording the full interval from the appearance to the disappearance of the abnormal event as a time period;
constructing a behavior response time prediction model according to the recorded time period;
predicting the response time of the next abnormal event with the behavior response time prediction model, determining the current fatigue state of the driver, and issuing a voice reminder;
and updating the behavior response time prediction model according to the difference between the response time of the predicted next abnormal event and the response time of the actual next abnormal event.
Optionally, in this embodiment, detecting an abnormal event during the running of the vehicle and recording the full interval from the appearance to the disappearance of the abnormal event as a time period includes the following steps:
detecting abnormal events in real time according to a pre-trained abnormal event detection model based on deep learning, and simultaneously recording the working duration of the abnormal event detection model;
starting timing when the abnormal event is detected, stopping timing when the abnormal event is detected to have disappeared, with the time difference between the two forming a response time period;
and repeatedly recording response time periods of abnormal events until a preset number is reached or the working duration of the abnormal event detection model reaches a preset duration.
Optionally, in this embodiment, the constructing a behavior response time prediction model according to the recorded time period includes the following steps:
generating a driver response period table from the multiple recorded abnormal event response periods;
clustering the data in the driver response period table with a density clustering algorithm, and extracting the maximum boundary of the largest cluster as the fatigue early-warning threshold;
and fitting the driver response period table with a neural network to construct the behavior response time prediction model.
Optionally, in this embodiment, the predicting the response time of the next abnormal event according to the behavior response time prediction model to determine the fatigue state of the current driver includes the following steps:
predicting the response time period of the next abnormal event according to the obtained behavior response time prediction model;
judging the fatigue state of the driver according to the relationship between the response time of the next abnormal event obtained by prediction and a fatigue early warning threshold value;
and triggering voice prompt according to the fatigue state of the driver.
Optionally, in this embodiment, updating the behavior response time prediction model according to the difference between the predicted response time of the next abnormal event and the actual response time of the next abnormal event includes the following steps:
acquiring the actual response time of the next abnormal event;
judging the difference value between the response time of the abnormal event obtained by prediction and the actual response time;
and if the difference is larger than the preset threshold, adding the actual response time to the driver response period table and updating the behavior response time prediction model.
In a second aspect, an embodiment of the present invention provides a driver fatigue identification method, including:
acquiring a face image, a posture image and voice of a driver;
detecting the attribute of the human face by using a facial attribute model according to the facial image of the driver;
executing a corresponding driver fatigue identification method according to the detected face attribute result;
and carrying out fatigue reminding according to the identification result.
Optionally, in this embodiment, the acquiring the face image, the posture image, and the voice of the driver includes the following steps:
acquiring a driver face image of each frame of video by using a front camera;
acquiring a time-sequence posture image of the driver by using a side camera;
and acquiring the voice information of the driver by using the voice sensing module.
Optionally, in this embodiment, the detecting, according to the facial image of the driver, attributes of the human face by using a facial attribute model includes the following steps:
performing attribute recognition of whether the mouth and eyes of the face are occluded according to a pre-trained deep-learning face attribute model;
Optionally, in this embodiment, performing recognition with different fatigue recognition models according to the face attribute result includes the following steps:
according to the face attribute recognition result, executing a corresponding driver fatigue recognition method, which specifically comprises the following steps:
if neither the eyes nor the mouth are occluded, detecting eye and mouth key points with a face key point detection model, recognizing open/closed eyes and open/closed mouth in the key point regions with a classification model, and judging whether the driver is fatigued by counting the number of eye closures and the number of mouth openings within a preset time period;
if the eyes are occluded and the mouth is not occluded, detecting mouth key points with the face key point detection model, recognizing mouth opening and closing with the classification model, and judging whether the driver is fatigued by combining the counted mouth-open duration with a speech recognition model;
if the mouth is occluded and the eyes are not occluded, detecting eye key points with the face key point detection model, recognizing open/closed eyes with the classification model, and judging whether the driver is fatigued by counting the duration of eye closure;
if both the eyes and the mouth are occluded, recognizing the driver's behavior with a driver behavior recognition model, and judging whether the driver is fatigued from head and hand actions.
Optionally, in this embodiment, the fatigue reminding according to the identification result is specifically as follows:
judging fatigue according to the recognition result;
and performing sound and light reminding when the fatigue state is judged.
In a third aspect, an embodiment of the present invention provides a driver fatigue detection apparatus, including:
a sensing module; a processing module; and a reminding module.
Optionally, in this embodiment, the sensing module includes:
the voice acquisition unit in the cab is used for acquiring the voice of a driver;
an in-cab image acquisition unit for acquiring the driver's face image and posture image;
and a cab-exterior image acquisition unit for acquiring road images and images of the environment around the vehicle body.
Optionally, in this embodiment, the processing module includes:
a memory for storing a computer program executable in the processor;
a processor, communicatively connected to the memory and equipped with a computation accelerator card, for executing the computer program implementing the driver fatigue early warning method according to the first aspect and the driver fatigue identification method according to the second aspect.
Optionally, in this embodiment, the reminding module includes:
a fatigue early-warning stage that uses sound reminders;
and a fatigue identification stage that uses sound and light alarms.
As can be seen from the above description, the embodiments of the present invention have the following advantageous effects:
the invention provides a special vehicle-oriented driver fatigue detection method and device, which are used for detecting the fatigue state of a driver from two aspects of early warning and recognition, early warning the fatigue accumulation condition of the driver by recording the response time of the driver to an abnormal event, and simultaneously establishing a fatigue recognition model by utilizing the facial, head, voice and posture characteristics of the driver to judge whether the driver is in the fatigue state and carrying out corresponding acousto-optic reminding. Therefore, the time window for detecting the fatigue event can be advanced, and the driver is reminded when the driver is in the fatigue state, so that the safety of the special vehicle in the driving process is improved.
Drawings
FIG. 1 is a component diagram of a driver fatigue detection method according to one embodiment of the present invention;
FIG. 2 is a flow chart of a driver fatigue warning method according to one embodiment of the invention;
FIG. 3 is a flow diagram of a driver fatigue identification method according to one embodiment of the invention;
FIG. 4 is a flow chart of a first type of identification method in a driver fatigue identification method according to one embodiment of the present invention;
FIG. 5 is a flow chart of a second type of recognition method in a driver fatigue recognition method according to one embodiment of the present invention;
FIG. 6 is a flow chart of a third type identification method in a driver fatigue identification method according to an embodiment of the present invention;
FIG. 7 is a flow chart of a fourth type of recognition method in a driver fatigue recognition method according to one embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a driver fatigue detection apparatus according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention are described below with reference to the accompanying drawings; the description is intended to aid understanding only and does not limit the scope of the present invention.
It should be noted that, in the description of the present invention, "a plurality" means two or more. The terms "first", "second" and the like in the description, claims and drawings of the present application are used solely to distinguish one item from another and do not imply a sequential or chronological order. The embodiments mentioned in the present disclosure are exemplary, and features, structures and characteristics described in connection with an embodiment may be included in at least one embodiment of the present disclosure.
Fig. 1 shows the composition of a driver fatigue detection method according to an embodiment of the present invention, comprising:
the driver fatigue early warning method 101 is used for early warning the fatigue state of a driver by estimating the response time of the driver under the abnormal condition at the next moment.
The driver fatigue recognition method 102 recognizes the fatigue state of the driver by analyzing the face, posture and voice information of the driver at the present time.
Fig. 2 is a flowchart of a driver fatigue early warning method according to an embodiment of the invention; the method comprises the following steps:
step S201, acquiring a front road image and a vehicle body surrounding environment image;
the method comprises the steps of collecting front road images by using a camera arranged at the middle position of the upper side of a cab windshield, and collecting surrounding environment images by using cameras arranged on the left side, the right side and the tail of a vehicle body. Optionally, the camera for collecting the front road image is a wide-angle camera with a resolution of 1080P and with infrared light supplement, and the camera for collecting the vehicle body surrounding image is a fisheye camera with a field angle larger than 180 degrees.
Step S202, abnormal event detection is carried out according to an abnormal event detection model trained in advance based on deep learning;
The method comprises the following steps: the deep-learning abnormal event detection model extracts semantic information from the captured images through a convolutional neural network and detects abnormal events such as obstacles and lane departure. In particular, the image of the environment around the vehicle body is obtained by pre-calibrating and combining the images captured at different positions on the body, and the abnormal event detection model is applied to it. The abnormal event detection model is started synchronously with the processor, and its working time is timed and recorded as t_AEW.
Step S203, cyclically recording the response time periods of abnormal event handling;
The method comprises the following steps: timing starts at t_AEC1 when an abnormal event is detected and ends at t_AEC2 when the abnormal event ends; this interval is recorded as an abnormal-event handling response time period T_AEC = t_AEC2 - t_AEC1. Response time periods are recorded cyclically until their number T_n reaches a preset value N_AEC or the continuous working time of the abnormal event detection model reaches the threshold T_AEW. In one embodiment of the invention, N_AEC is set to 40 and T_AEW is set to 1 hour.
step S204, generating a driver response periodic table, constructing a behavior response time prediction model and obtaining a fatigue early warning threshold;
the method comprises the following steps: a plurality of recorded abnormal event processing response time periods are tabulated to generate a driver response period table S AE And fitting a driver response curve on the table by using a BP back propagation neural network to construct a behavior response time prediction model, and clustering the table by using a density clustering algorithm DBSCAN, wherein the maximum boundary of the most clusters is used as a fatigue early warning threshold Th. Wherein the BP neural network is defined as follows:
Figure BDA0003965022170000051
wherein, BP neural netThe network adopts a three-layer model, x i For network input, O k For the network output, g (x) is the excitation function, w ij For input of layer-to-hidden layer weights, w jk For hidden layer to output layer weights, a j For input layer to hidden layer biasing, b k For the hidden layer to output layer bias, l is the number of hidden layer nodes, n is the number of input layer nodes, i is the input layer node label, and j is the hidden layer node label. Inventive example i was set to 10 and n was set to 1.
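A sketch of this step under stated assumptions: scikit-learn's DBSCAN stands in for the density clustering, and a small MLP regressor (10 hidden nodes, 1 input, matching l = 10 and n = 1) stands in for the three-layer BP network; the eps and min_samples values are illustrative assumptions, not taken from the patent:
```python
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN
from sklearn.neural_network import MLPRegressor

def build_prediction_model(periods):
    y = np.asarray(periods, dtype=float)               # response period table S_AE
    x = np.arange(len(y), dtype=float).reshape(-1, 1)  # event index as the input

    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(y.reshape(-1, 1))
    clustered = [lab for lab in labels if lab != -1]
    if clustered:
        biggest = Counter(clustered).most_common(1)[0][0]
        th = float(y[labels == biggest].max())         # Th: max boundary of largest cluster
    else:
        th = float(y.max())                            # fallback: no cluster found

    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000)
    model.fit(x, y)                                    # fit the driver response curve
    return model, th
```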
Step S205, predicting the response period of the next abnormal event and carrying out fatigue early warning;
The method comprises the following steps: the response period T_AEP of the next abnormal event is estimated from the behavior response time prediction model, and fatigue early warning is performed by comparison with the fatigue early-warning threshold Th according to the following formula:
|T_AEP - Th| > ξ
where ξ is a critical fluctuation value, set to 2 seconds in the embodiment of the invention; if the absolute difference between T_AEP and the fatigue early-warning threshold Th of step S204 exceeds the critical fluctuation value, the driver is considered to be approaching fatigue, and a voice reminder is issued;
step S206, updating the behavior response time prediction model.
The method comprises the following steps: detecting to obtain the next abnormal event processing response time period T AER Judging and predicting the response period T of the abnormal event AEP If the difference between the two values is greater than a preset difference threshold Td, the following formula is satisfied:
|T AER -T AEP |<Td
wherein, if T AER And T AEP If the difference is greater than Td, the behavior response time prediction model is considered to be wrong, and T is used AER Added to the driver response cycle table S AE And updating the behavior response time prediction model. In one embodiment of the present invention, td is set to 1 second.
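Continuing the sketch, steps S205 and S206 reduce to two comparisons with the embodiment's constants ξ = 2 s and Td = 1 s; build_prediction_model is the helper sketched under step S204, and the print call stands in for the voice reminder:
```python
XI = 2.0   # critical fluctuation value, seconds
TD = 1.0   # preset difference threshold, seconds

def warn_and_update(model, th, periods, next_index, t_aer):
    t_aep = float(model.predict([[next_index]])[0])   # predicted next period T_AEP
    if abs(t_aep - th) > XI:                          # |T_AEP - Th| > xi
        print("voice reminder: driver is approaching fatigue")
    if abs(t_aer - t_aep) > TD:                       # prediction deemed in error
        periods.append(t_aer)                         # add T_AER to the table S_AE
        model, th = build_prediction_model(periods)   # refit the model
    return model, th
```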
Fig. 3 is a flowchart of a driver fatigue recognition method according to an embodiment of the present invention, the method including the steps of:
step S301, acquiring a face image, a posture image and voice information of a driver;
the method comprises the following steps: the method comprises the steps of collecting facial images of a driver in real time through a camera arranged right in front of the driver cab, collecting posture images of the driver through a camera arranged on the side of the driver cab, and collecting voice information of the driver through a voice collector arranged right above the driver. Optionally, the face image capturing camera is a wide dynamic camera with a field angle of 30 degrees, the posture image capturing camera is a wide dynamic camera with a field angle of 45 degrees, and the voice capturing device is a voice device with a sensitivity of-37.5 dB (± 2 dB).
Step S302, recognizing face attributes according to a pre-trained deep-learning face attribute recognition model;
The method comprises the following steps: the deep-learning face attribute model extracts feature information from the captured image with a convolutional neural network and recognizes whether the driver's mouth and eyes are occluded. The model structure of the convolutional neural network is built in a multi-label learning manner, and the model evaluation index is:
mAcc = (1/L) · Σ_{i=1}^{L} (TP_i + TN_i) / (P_i + N_i)
where L is the number of attributes, N is the number of samples, i is the attribute index, TP_i and TN_i are the numbers of correctly classified positive and negative samples respectively, and P_i and N_i are the total numbers of positive and negative samples respectively; L is set to 2 in the present embodiment. Because the background in the usage scenario of the invention is fixed and simple and the number of face attribute categories is small, the model in the embodiment of the invention is optimized from a MobileNetV2 base model.
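Reading the evaluation index as the mean per-attribute accuracy, a small sketch follows; the (N, L) binary array layout is an assumption, with L = 2 here (mouth occluded, eyes occluded):
```python
import numpy as np

def mean_attribute_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_pred == 1) & (y_true == 1)).sum(axis=0)   # TP_i per attribute
    tn = ((y_pred == 0) & (y_true == 0)).sum(axis=0)   # TN_i per attribute
    p = (y_true == 1).sum(axis=0)                      # P_i: positives per attribute
    n = (y_true == 0).sum(axis=0)                      # N_i: negatives per attribute
    return float(((tp + tn) / (p + n)).mean())         # average over the L attributes
```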
Step S303, executing a corresponding recognition method according to the face attribute recognition result;
the method comprises the following steps: the face attribute recognition results are divided into the following four categories:
(1) Eyes are not shielded, and mouths are not shielded;
(2) Eyes are shielded, and mouths are not shielded;
(3) Eyes are not shielded, and mouths are shielded;
(4) Eye shielding and mouth shielding;
and step S304, carrying out fatigue judgment and reminding according to the result of the identification method.
The method comprises the following steps: the fatigue state of the driver is judged according to the result of the identification method, the fatigue information corresponding to the fatigue state is output by a reminding module through voice, and meanwhile, light flicker stimulation is triggered to help the driver to improve the attention and relieve the fatigue condition, so that the driving safety is ensured.
Fig. 4 illustrates the identification method for class (1) recognition results of step S303 according to an embodiment of the present invention, comprising:
step S401, obtaining key points of the face by using a key point detection model;
the method comprises the following steps: the human face key point detection model based on deep learning extracts deep semantic information from the acquired image through a convolutional neural network to finish the detection of key points of a human face, human eyes, a nose and a mouth.
Step S402, calculating the head inclination angle from the left and right eye key points;
The method comprises the following steps: the head inclination angle is calculated as follows:
θ = arctan( (y_2 - y_1) / (x_2 - x_1) )
where θ is the head inclination angle, (x_1, y_1) are the left eye coordinates, and (x_2, y_2) are the right eye coordinates.
Step S403, correcting the face according to the inclination angle;
The method comprises the following steps: an affine transformation matrix is computed from the obtained head inclination angle, with the nose key point as the rotation center; the face is corrected with this matrix so that it lies in the horizontal position, yielding corrected key-point coordinates for the left eye center (x_le, y_le), the right eye center (x_re, y_re), the nose, the left mouth corner (x_lm, y_lm) and the right mouth corner (x_rm, y_rm).
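Steps S402 and S403 can be sketched with OpenCV as follows; the head inclination is the angle of the line through the eye centers, and the face is rotated about the nose key point until the eyes are horizontal. Treating key points as (x, y) tuples is an assumption about the detector's output format:
```python
import math
import cv2
import numpy as np

def correct_face(image, left_eye, right_eye, nose):
    (x1, y1), (x2, y2) = left_eye, right_eye
    theta = math.degrees(math.atan2(y2 - y1, x2 - x1))  # head inclination angle
    rot = cv2.getRotationMatrix2D(nose, theta, 1.0)     # affine matrix, nose-centered
    h, w = image.shape[:2]
    corrected = cv2.warpAffine(image, rot, (w, h))      # face in horizontal position
    pts = np.array([left_eye, right_eye, nose], dtype=np.float32)
    pts = cv2.transform(pts.reshape(1, -1, 2), rot).reshape(-1, 2)  # corrected key points
    return corrected, pts
```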
Step S404, extracting the eye and mouth region images according to the standard face model;
The method comprises the following steps: the standard face model divides the face into 5 equal parts horizontally and 3 equal parts vertically, with the center line of the eyes dividing the face into 2 equal parts. The eye and mouth regions are extracted as follows:
(1) Left eye region:
x_l1 = x_le - a·face_w,  y_l1 = y_le - b·face_h
x_l2 = x_le + a·face_w,  y_l2 = y_le + b·face_h
(2) Right eye region:
x_r1 = x_re - a·face_w,  y_r1 = y_re - b·face_h
x_r2 = x_re + a·face_w,  y_r2 = y_re + b·face_h
(3) Mouth region:
x_m1 = (x_lm + x_rm)/2 - c·face_w,  y_m1 = (y_lm + y_rm)/2 - d·face_h
x_m2 = (x_lm + x_rm)/2 + c·face_w,  y_m2 = (y_lm + y_rm)/2 + d·face_h
where face_w is the width of the detected face, face_h is the height of the detected face, (x_l1, y_l1) and (x_l2, y_l2) are the upper-left and lower-right corner coordinates of the left eye region, (x_r1, y_r1) and (x_r2, y_r2) those of the right eye region, and (x_m1, y_m1) and (x_m2, y_m2) those of the mouth region; (x_le, y_le) are the left eye center coordinates, (x_re, y_re) the right eye center coordinates, (x_lm, y_lm) the left mouth corner coordinates and (x_rm, y_rm) the right mouth corner coordinates; a, b, c and d are scaling factors that size the eye and mouth regions for open/closed-eye and open/closed-mouth recognition. In an embodiment of the invention, a is set to 0.15, b to 0.25, c to 0.15 and d to 0.35.
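The corner formulas above are reconstructed from the variable definitions (the original publication renders them only as images), so the following sketch treats each region as a symmetric box around its key-point center; that box form is an assumption, while a, b, c, d follow the embodiment:
```python
A, B, C, D = 0.15, 0.25, 0.15, 0.35

def box(center, half_w, half_h):
    cx, cy = center
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)  # (x1, y1, x2, y2)

def extract_regions(left_eye, right_eye, mouth_left, mouth_right, face_w, face_h):
    mouth_center = ((mouth_left[0] + mouth_right[0]) / 2,
                    (mouth_left[1] + mouth_right[1]) / 2)   # midpoint of mouth corners
    return {
        "left_eye": box(left_eye, A * face_w, B * face_h),
        "right_eye": box(right_eye, A * face_w, B * face_h),
        "mouth": box(mouth_center, C * face_w, D * face_h),
    }
```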
Step S405, classifying the eye and mouth regions;
The method comprises the following steps: a deep-learning classification model performs open/closed-eye and open/closed-mouth classification on the eye and mouth regions; in the embodiment of the invention the classification model is improved from a MobileNetV3 base model, with the cross-entropy loss selected as the loss function L:
L = -(1/N) Σ_{i=1}^{N} [ y_i · log(ŷ_i) + (1 - y_i) · log(1 - ŷ_i) ]
where N is the total number of samples, y_i is the ground-truth value of the i-th sample, and ŷ_i is the value the model outputs for the i-th sample.
In step S406, fatigue determination is performed based on the classification result.
The method comprises the following steps: a fatigue consecutive-frame threshold Tk and a head inclination threshold Ts are set, and the number k of consecutive frames exhibiting eye-closing or mouth-opening behavior is recorded; in the embodiment of the invention Tk is set to 10 and Ts is set to 45. The fatigue judgment condition is:
k > Tk  or  s > Ts
where s is the head inclination angle calculated in step S402; if the condition is satisfied, the driver is considered to be in a fatigue state.
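A sketch of this judgment with the embodiment's thresholds (Tk = 10 consecutive fatigue frames, Ts = 45 degrees); reading the condition, which is garbled in the original, as "k > Tk or s > Ts" is an assumption:
```python
TK, TS = 10, 45.0

class FatigueJudge:
    def __init__(self):
        self.k = 0  # consecutive frames showing closed eyes or an open mouth

    def update(self, eyes_closed, mouth_open, head_angle_deg):
        self.k = self.k + 1 if (eyes_closed or mouth_open) else 0
        return self.k > TK or abs(head_angle_deg) > TS   # fatigue condition
```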
Further, fig. 5 shows the identification method for class (2) recognition results of step S303 according to an embodiment of the present invention, comprising:
step S501, obtaining key points of the face by using a key point detection model;
the method comprises the following steps: extracting deep semantic information from the acquired image through a convolutional neural network by a deep learning-based face key point detection model to complete the detection of the face and the mouth key points; only the left mouth corner and right mouth corner key points need to be based on in one embodiment of the invention.
Step S502, extracting and classifying the mouth region image;
The method comprises the following steps: the mouth region is extracted according to the mouth-region extraction rule of step S404, and the classification model of step S405 classifies the mouth into the open or closed state.
Step S503, recognizing the voice information with the speech recognition model;
The method comprises the following steps: when the classification model judges that the current frame is in the mouth-open state, it triggers the speech recognition model to recognize the audio captured by the voice collector.
In step S504, fatigue determination is performed.
The method comprises the following steps: if the duration of the mouth-open state is greater than a preset duration threshold Tc, a fatigue state is judged; if the duration of the mouth-open state is less than a preset duration threshold Tb, a non-fatigue state is judged; if the duration lies between Tb and Tc, speech recognition determines whether a similar frequency lasting for Td exists within that interval; if so, a fatigue state is judged, otherwise a non-fatigue state. In one embodiment of the present invention, Tc is set to 10 frames, Tb to 2 frames, and Td to 4 frames.
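A sketch of this combined judgment with the embodiment's thresholds (Tc = 10, Tb = 2, Td = 4 frames); has_similar_audio is a hypothetical stand-in for the speech-recognition check:
```python
TC, TB, TD_SPEECH = 10, 2, 4

def yawn_fatigue(open_mouth_frames, has_similar_audio):
    if open_mouth_frames > TC:
        return True       # long mouth-open duration: fatigue state
    if open_mouth_frames < TB:
        return False      # brief mouth opening: non-fatigue state
    # intermediate duration: defer to the speech recognition result
    return has_similar_audio(min_duration_frames=TD_SPEECH)
```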
Fig. 6 illustrates the identification method for class (3) recognition results of step S303 according to an embodiment of the present invention, comprising:
step S601, obtaining key points of the face by using a key point detection model;
the method comprises the following steps: the human face key point detection model based on deep learning extracts deep semantic information from the acquired image through a convolutional neural network to complete the detection of the human face and the eye key points.
Step S602, extracting and classifying the eye region images;
The method comprises the following steps: the eye regions are extracted according to the eye-region extraction rule of step S404, and the classification model of step S405 classifies the eyes into the open or closed state.
In step S603, fatigue determination is performed.
The method comprises the following steps: if both eyes are continuously detected in the closed state and the continuous closure time exceeds a preset both-eyes closure duration threshold Tec, a fatigue state is judged; if the continuously detected closure time of a single eye exceeds a preset single-eye closure duration threshold Ted, a fatigue state is judged; if the detected eye-closure duration exceeds a preset eye-closure duration threshold Tee, a fatigue state is judged. The decision priority is Tec > Ted > Tee. In one embodiment of the present invention, Tec is set to 6 frames, Ted to 8 frames, and Tee to 10 frames.
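A sketch of this eye-closure judgment with the embodiment's thresholds, checked in the stated decision priority Tec > Ted > Tee:
```python
TEC, TED, TEE = 6, 8, 10

def eye_fatigue(both_closed_frames, one_closed_frames, closed_frames):
    if both_closed_frames > TEC:    # both eyes continuously closed
        return True
    if one_closed_frames > TED:     # a single eye continuously closed
        return True
    return closed_frames > TEE      # detected eye-closure duration
```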
Fig. 7 illustrates the identification method for class (4) recognition results of step S303 according to an embodiment of the present invention, comprising:
step S701, recognizing the driver behavior by using a driver behavior recognition model;
the method comprises the following steps: 10 frames of images are input into a driver behavior recognition model based on deep learning at one time, behavior recognition including the hands and the head of a driver is carried out on a video time sequence frame, and whether behavior actions of raising the head, covering the mouth and stretching the waist of the driver occur or not is detected. The driver behavior recognition model is composed of a 3D convolution kernel.
In step S702, fatigue determination is performed.
The method comprises the following steps: the driver behavior recognition model judges the driver's behavior in the current time sequence, and if the action duration exceeds a preset action duration threshold Tac, the driver is judged to be in a fatigue state. In one embodiment of the invention, Tac is set to 10 frames.
Fig. 8 shows a driver fatigue detecting apparatus according to an embodiment of the present invention, including:
the sensing module 801 comprises an external camera, an internal camera and a voice collector, wherein the external camera is used for acquiring road condition images and vehicle body surrounding environment images, and the internal camera is used for acquiring facial images and posture images of a target driver; the voice collector is used for obtaining voice information of a target driver;
a processing module 802 comprising a memory and a processor, wherein the memory is for storing a computer program executable in the processor; the processor is in communication connection with the memory, is equipped with a calculation acceleration card and is used for executing a computer program for realizing the driver fatigue early warning method and the driver fatigue identification method;
the reminding module 803 issues reminders according to the results of the driver fatigue early warning method and the driver fatigue identification method; the reminding modes include voice and light reminders, with results of the driver fatigue early warning method announced by voice broadcast and results of the driver fatigue identification method signaled by both light and voice.
It can be understood that a driver fatigue detection apparatus in practical application further includes other types of necessary components, and the connection manner among the components is not limited; all driver fatigue detection apparatuses capable of implementing the embodiments of the present invention fall within the protection scope of the present invention. In addition, the technical solutions within the protection scope of the present invention are not limited to the specific embodiments given in this specification; all technical solutions that do not contradict the scheme of the present invention are included in the protection scope.

Claims (7)

1. A driver fatigue detection method for special vehicles is characterized by comprising the following steps:
A) carrying out fatigue early warning on the driver;
B) carrying out fatigue identification on the driver,
wherein:
the step A comprises the following steps:
a1 Acquiring a front road image and a vehicle body surrounding image, wherein the front road image is acquired by using a camera arranged at the middle position of the upper side of a cab windshield, and the surrounding image is acquired by using cameras arranged at the left side, the right side and the tail part of a vehicle body;
a2 Detection of an abnormal event is performed according to an abnormal event detection model trained in advance based on deep learning, and the method comprises the following steps:
constructing an abnormal event detection model to complete the detection of abnormal events such as obstacles, lane deviation and the like; recording the working time of the abnormal event detection model;
a3 Loop recording exception event handling response time period, comprising: recording an abnormal event processing response time period T from the detection of the abnormal event to the end of the abnormal event AEC (ii) a The cycle record abnormal event processing response time period meets the requirement that the number of the abnormal event processing response time periods reaches a preset value N AEC Or the abnormal event detection model continuously works to reach the threshold value T AEW 。;
A4 Generating a driver response periodic table, constructing a behavior response time prediction model and obtaining a fatigue early warning threshold value, wherein the method comprises the following steps:
generating a driver response periodic table S AE
According to the driver response periodic table, a behavior response time prediction model is constructed by using a BP back propagation neural network, and the behavior response time prediction model is expressed as follows:
Figure FDA0003965022160000011
wherein, the BP neural network adopts a three-layer model, x i For network input, O k For the network output, g (x) is the excitation function, w ij For input of layer-to-hidden layer weights, w jk For hidden layer to output layer weights, a j For input layer to hidden layer biasing, b k For hidden layer to output layer biasing, l is the number of hidden layer nodes, n is the number of input layer nodes, i is the input layer node label, j is the hidden layer node label;
Clustering the table by using a density clustering algorithm DBSCAN, and taking the maximum boundary of the most clusters as a fatigue early warning threshold Th:
a5 Predicting the response cycle of the next abnormal event and carrying out fatigue early warning, comprising the following steps:
estimating the response period T of the next abnormal event according to the behavior response time prediction model AEP
And comparing the fatigue early warning threshold value Th to perform fatigue early warning, wherein the fatigue early warning is represented as:
|T AEP -Th|>ξ
xi is a critical fluctuation value, and when the value is exceeded, the driver is considered to be fatigued soon, and voice reminding is carried out;
a6 Update behavior response time prediction model, comprising:
detecting to obtain the next abnormal event processing response time period T AER
Judging and predicting abnormal event response period T AEP Is related to a preset difference threshold Td, which is expressed as:
|T AER -T AEP |<Td
wherein if T AER And T AEP If the difference is greater than Td, the behavior response time prediction model is considered to be wrong, and T is used AER Added to the driver response cycle table S AE And the behavior response time prediction model is updated,
the step B comprises the following steps:
b1 Obtaining a face image, a posture image, and voice information of the driver, including:
a camera arranged right in front of a cab is used for collecting facial images of a driver in real time;
acquiring a time sequence attitude image of a driver by using a side camera arranged in a cab;
the voice information of the driver is acquired by a voice collector arranged right above the driver,
b2 The method) carries out the recognition of the face attribute according to a pre-trained face attribute recognition model based on deep learning, and comprises the following steps:
a face attribute recognition model is constructed based on deep learning, and evaluation indexes are expressed as:
Figure FDA0003965022160000021
wherein L is the number of attributes, N is the number of samples, i is the attribute label, TP i And TN i Number of positive and negative samples, P, respectively, correctly classified i And N i The total number of positive and negative samples respectively;
the attribute recognition of whether the driver obstructs the mouth and eyes is completed,
b3 According to the face attribute recognition result, executing a corresponding recognition method, comprising:
b31 Under the condition that the eyes of the driver are not shielded and the mouth of the driver is not shielded, detecting key points of the eyes and the mouth of the driver by using a human face key point detection model, identifying the eyes and the mouth of the driver in a key point region by using a classification model, and judging whether the driver is tired or not by counting the times of eye closing and the times of mouth opening in a preset time period;
b32 Under the condition that eyes of a driver are shielded and the mouth of the driver is not shielded, detecting key points of the mouth of the driver by using a human face key point detection model, identifying the mouth to be opened or closed by using a classification model, and comprehensively judging whether the driver is tired or not by counting the mouth opening duration and a voice identification model;
b33 Under the condition that the eyes of the driver are not shielded and the mouth of the driver is shielded, detecting eye key points by using a human face key point detection model, performing open-close eye identification by using a classification model, and judging whether the driver is tired or not by counting the duration of closed eyes;
b34 B) under the condition that the eyes of the driver are shielded and the mouth of the driver is shielded, utilizing a driver behavior recognition model to recognize the behavior of the driver, judging whether the driver is tired or not through head and hand actions, and B4) carrying out fatigue judgment and reminding according to the result of the recognition method.
2. The driver fatigue detection method according to claim 1, characterized in that step B31 includes:
acquiring face key points with a key point detection model;
calculating the head inclination angle from the left and right eye key points, expressed as:
θ = arctan( (y_2 - y_1) / (x_2 - x_1) )
wherein θ is the head inclination angle, (x_1, y_1) are the left eye coordinates, and (x_2, y_2) are the right eye coordinates;
correcting the face according to the inclination angle;
extracting the eye and mouth region images according to the standard face model, the extraction of the eye and mouth regions being expressed as:
left eye region:
x_l1 = x_le - a·face_w,  y_l1 = y_le - b·face_h
x_l2 = x_le + a·face_w,  y_l2 = y_le + b·face_h
right eye region:
x_r1 = x_re - a·face_w,  y_r1 = y_re - b·face_h
x_r2 = x_re + a·face_w,  y_r2 = y_re + b·face_h
mouth region:
x_m1 = (x_lm + x_rm)/2 - c·face_w,  y_m1 = (y_lm + y_rm)/2 - d·face_h
x_m2 = (x_lm + x_rm)/2 + c·face_w,  y_m2 = (y_lm + y_rm)/2 + d·face_h
wherein face_w is the width of the detected face, face_h is the height of the detected face, (x_l1, y_l1) and (x_l2, y_l2) are the upper-left and lower-right corner coordinates of the left eye region, (x_r1, y_r1) and (x_r2, y_r2) those of the right eye region, and (x_m1, y_m1) and (x_m2, y_m2) those of the mouth region; (x_le, y_le) are the left eye center coordinates, (x_re, y_re) the right eye center coordinates, (x_lm, y_lm) the left mouth corner coordinates and (x_rm, y_rm) the right mouth corner coordinates; and a, b, c, d are scaling factors that size the eye and mouth regions for open/closed-eye and open/closed-mouth recognition;
classifying the eye region and the mouth region, wherein the cross-entropy loss is selected as the loss function L, expressed as:
L = -(1/N) Σ_{i=1}^{N} [ y_i · log(ŷ_i) + (1 - y_i) · log(1 - ŷ_i) ]
wherein N is the total number of samples, y_i is the true value of the i-th sample, and ŷ_i is the value output by the model for the i-th sample;
and performing fatigue judgment according to the classification result, the judgment condition being:
k > Tk  or  s > Ts
wherein s is the head inclination angle, k is the number of consecutive fatigue frames, and Tk and Ts are the preset fatigue consecutive-frame threshold and head inclination threshold, respectively.
3. The driver fatigue detection method according to claim 2, characterized in that step B32 includes:
acquiring key points of the face by using a key point detection model;
extracting and classifying mouth region images;
recognizing voice information according to the voice recognition model;
and performing fatigue judgment.
4. The driver fatigue detection method according to claim 2, characterized in that step B33 includes:
acquiring key points of the face by using a key point detection model;
extracting and classifying eye region images;
and performing fatigue judgment.
5. The driver fatigue detection method according to claim 2, characterized in that step B34 includes:
identifying the driver behavior by using the driver behavior identification model;
and performing fatigue judgment.
6. The driver fatigue detection method according to claim 2, wherein step B4 includes:
carrying out fatigue judgment according to the identification result;
outputting fatigue information corresponding to the fatigue state by a reminding module in a voice mode;
the trigger light flickers to improve the attention of the driver.
7. A computer-readable storage medium storing a computer program which enables a processor to perform the method according to any one of claims 1 to 6.
CN202211494466.2A 2022-11-25 2022-11-25 Special vehicle-oriented driver fatigue detection method Pending CN115937830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211494466.2A CN115937830A (en) 2022-11-25 2022-11-25 Special vehicle-oriented driver fatigue detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211494466.2A CN115937830A (en) 2022-11-25 2022-11-25 Special vehicle-oriented driver fatigue detection method

Publications (1)

Publication Number Publication Date
CN115937830A true CN115937830A (en) 2023-04-07

Family

ID=86655262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211494466.2A Pending CN115937830A (en) 2022-11-25 2022-11-25 Special vehicle-oriented driver fatigue detection method

Country Status (1)

Country Link
CN (1) CN115937830A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116570835A (en) * 2023-07-12 2023-08-11 杭州般意科技有限公司 Method for determining intervention stimulation mode based on scene and user state
CN116570835B (en) * 2023-07-12 2023-10-10 杭州般意科技有限公司 Method for determining intervention stimulation mode based on scene and user state
CN117115894A (en) * 2023-10-24 2023-11-24 吉林省田车科技有限公司 Non-contact driver fatigue state analysis method, device and equipment

Similar Documents

Publication Publication Date Title
US11783601B2 (en) Driver fatigue detection method and system based on combining a pseudo-3D convolutional neural network and an attention mechanism
CN109902562B (en) Driver abnormal posture monitoring method based on reinforcement learning
CN115937830A (en) Special vehicle-oriented driver fatigue detection method
CN110119676A (en) A kind of Driver Fatigue Detection neural network based
CN108791299A (en) A kind of driving fatigue detection of view-based access control model and early warning system and method
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN111434553B (en) Brake system, method and device, and fatigue driving model training method and device
CN104637246A (en) Driver multi-behavior early warning system and danger evaluation method
CN113033503A (en) Multi-feature fusion dangerous driving behavior detection method and system
CN110103816B (en) Driving state detection method
CN113642522B (en) Audio and video based fatigue state detection method and device
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
Pech et al. Head tracking based glance area estimation for driver behaviour modelling during lane change execution
CN112949345A (en) Fatigue monitoring method and system, automobile data recorder and intelligent cabin
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection
CN112052829B (en) Pilot behavior monitoring method based on deep learning
CN109308467A (en) Traffic accident prior-warning device and method for early warning based on machine learning
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
CN112926364A (en) Head posture recognition method and system, automobile data recorder and intelligent cabin
CN115861982A (en) Real-time driving fatigue detection method and system based on monitoring camera
CN113361452B (en) Driver fatigue driving real-time detection method and system based on deep learning
CN112329566A (en) Visual perception system for accurately perceiving head movements of motor vehicle driver
WO2021024905A1 (en) Image processing device, monitoring device, control system, image processing method, computer program, and recording medium
CN114792437A (en) Method and system for analyzing safe driving behavior based on facial features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination