CN111882827A - Fatigue driving monitoring method, system and device and readable storage medium

Info

Publication number
CN111882827A
Authority
CN
China
Prior art keywords
fatigue
driver
data
obtaining
video frames
Prior art date
Legal status
Pending
Application number
CN202010732492.9A
Other languages
Chinese (zh)
Inventor
杨超
何彬宇
韩定定
Current Assignee
Fudan University
Original Assignee
Fudan University
Priority date
Filing date
Publication date
Application filed by Fudan University
Priority to CN202010732492.9A
Publication of CN111882827A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

The invention belongs to the field of fatigue driving monitoring and traffic safety, and particularly relates to a fatigue driving monitoring method, a system, a device and a readable storage medium. The method comprises the following steps: obtaining a plurality of video frames describing a driver according to video data acquired in real time; respectively obtaining a plurality of face data of the driver from the plurality of video frames; obtaining a first judgment result on the basis of a first fatigue judgment algorithm according to the plurality of face data; respectively obtaining a plurality of posture data of the driver from the plurality of video frames; obtaining a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of posture data; and generating a fatigue warning when either of the first judgment result and the second judgment result indicates that the driver is driving while fatigued. The fatigue driving monitoring method can reduce the monitoring cost of fatigue driving and improve the accuracy and timeliness of the monitoring result.

Description

Fatigue driving monitoring method, system and device and readable storage medium
Technical Field
The invention belongs to the field of fatigue driving monitoring and traffic safety, and particularly relates to a fatigue driving monitoring method, a system, a device and a readable storage medium.
Background
Fatigue driving impairs the driver's attention, perception, judgment and motor response to the outside world, and can very easily cause traffic accidents. According to statistics, fatigue driving accounts for about 20-30% of road traffic accidents. Because fatigue driving has many causes, in particular physiological and psychological ones, it is difficult to detect promptly and accurately and is not easy to control. Therefore, the driver's driving state needs to be monitored in real time so that the driver can be warned in time upon entering a fatigue state.
Regarding monitoring of the driver's current driving state, the method provided by the related art monitors vehicle behavior through various sensors to obtain information such as the driving route, the vehicle speed and the steering wheel rotation amplitude during driving, and thereby monitors the driver's current driving state indirectly.
Because the method provided by the related art requires various additional sensors to be installed on the vehicle, the monitoring cost is high; moreover, the monitored subject is the vehicle rather than the driver, so the accuracy and timeliness of the monitoring result are poor.
Disclosure of Invention
Aiming at the defects in the related art, the invention provides a fatigue driving monitoring method, a system, a device and a readable storage medium, which can reduce the monitoring cost of fatigue driving and improve the accuracy and timeliness of the monitoring result.
The above object of the present invention is achieved by the following technical solutions:
In a first aspect, an embodiment of the invention provides a fatigue driving monitoring method, which comprises the following steps:
Obtaining a plurality of video frames for describing a driver according to the video data acquired in real time;
obtaining a plurality of face data of the driver from the plurality of video frames, respectively;
obtaining a first judgment result on the basis of a first fatigue judgment algorithm according to the plurality of face data;
respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
obtaining a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of posture data;
generating a fatigue warning if either one of the first determination result and the second determination result indicates fatigue driving by the driver.
Compared with indirectly monitoring the driver's current driving state by using various sensors to monitor vehicle behavior, the application directly monitors the driver's current driving state by acquiring and analyzing video data recorded while the driver is driving; because the installation and maintenance cost of various sensors is saved, the monitoring cost of fatigue driving is reduced;
meanwhile, because the method provided by the application monitors the driver's current driving state directly, the timeliness and accuracy of the monitoring result are improved;
moreover, the method comprehensively uses information about both the driver's posture and the driver's face, so the probability that the driver's posture or face information cannot be recognized is reduced, which further improves the accuracy of the monitoring result.
Optionally, the obtaining a first determination result based on a first fatigue determination algorithm according to the plurality of face data includes:
respectively identifying the plurality of face data through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one; the face recognition result comprises a fatigue state or a normal state;
storing the multiple face recognition results into a preset cache queue;
and judging whether the proportion of the fatigue state face identification result in all the face identification results in the cache queue is greater than a preset warning threshold value or not, and obtaining a first judgment result.
Optionally, before storing the multiple face recognition results in a preset buffer queue, the method further includes:
setting actual total capacity according to a capacity calculation formula, wherein the actual total capacity is used for describing the capacity of the cache queue during use;
and generating the buffer queue according to the actual total capacity.
Optionally, the capacity calculation formula is: n = M1+ M2;
wherein, N is the actual total capacity of the buffer queue, M1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, and M2 is an integer with an absolute value smaller than M1.
Optionally, the calculation formula of the warning threshold is as follows: k = 0.2N;
and K is the warning threshold value, and N is the actual total capacity of the buffer queue.
Optionally, the obtaining of a plurality of posture data of the driver from the plurality of video frames, respectively, and the obtaining of a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of posture data include:
respectively obtaining the plurality of posture data of the driver according to the plurality of video frames and a feature extraction algorithm;
and obtaining a second judgment result according to the plurality of posture data and a second fatigue judgment algorithm trained in advance.
Optionally, the training process of the second fatigue determination algorithm may be:
obtaining a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to the adaboost algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from the plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
and training a second fatigue judgment algorithm according to the plurality of fatigue posture data.
In a second aspect, a fatigue driving monitoring device, the device comprising:
the acquisition module is used for acquiring video data in real time;
the processing module is used for obtaining a plurality of video frames for describing the driver according to the video data;
the processing module is further used for respectively obtaining a plurality of face data of the driver from the plurality of video frames;
the processing module is further used for respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
the judging module is used for obtaining a first judging result on the basis of a first fatigue judging algorithm according to the plurality of face data;
the judging module is further used for obtaining a second judging result on the basis of a second fatigue judging algorithm according to the plurality of posture data;
a warning module configured to generate a fatigue warning when any one of the first determination result and the second determination result indicates fatigue driving of the driver.
In a third aspect, a fatigue driving monitoring system, the system comprising:
the acquisition device is used for acquiring video data in real time;
processing means for obtaining a plurality of video frames describing a driver from the video data;
the processing device is further used for respectively obtaining a plurality of face data of the driver from the plurality of video frames;
the processing device is further used for respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
the judging device is used for obtaining a first judging result on the basis of a first fatigue judging algorithm according to the plurality of face data;
the judging device is further used for obtaining a second judging result on the basis of a second fatigue judging algorithm according to the plurality of posture data;
warning means for generating a fatigue warning when any one of the first determination result and the second determination result indicates fatigue driving of the driver.
In a fourth aspect, a computer readable storage medium, having stored thereon a computer program comprising program instructions which, when executed by a processor, implement the fatigue driving monitoring method according to the first aspect as described above.
The application provides a fatigue driving monitoring method, a system, a device and a readable storage medium. The method comprises the following steps: obtaining a plurality of video frames for describing a driver according to the video data acquired in real time; obtaining a plurality of face data of the driver from the plurality of video frames, respectively; obtaining a first judgment result on the basis of a first fatigue judgment algorithm according to the plurality of face data; respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver; obtaining a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of posture data; and if any one of the first judgment result and the second judgment result meets a preset condition, generating a fatigue warning.
The beneficial effects of the technical solution provided by the application are as follows: by monitoring the driver's face data and posture data in real time, the monitoring cost of fatigue driving is reduced, and the accuracy and timeliness of the monitoring result are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a fatigue driving monitoring method according to an embodiment of the present application;
fig. 2 is a flowchart of a fatigue driving monitoring method according to a second embodiment of the present application;
fig. 3 is a schematic diagram for explaining a relationship between a buffer length and a delay time according to a second embodiment of the present application;
fig. 4 is a schematic structural diagram of a fatigue driving monitoring device according to a third embodiment of the present application;
fig. 5 is a schematic diagram of a fatigue driving monitoring system according to a fourth embodiment of the present application.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Embodiment one:
referring to fig. 1, a method for monitoring fatigue driving disclosed in the present application specifically includes the following steps:
101. according to the video data collected in real time, a plurality of video frames for describing the driver are obtained.
102. From the plurality of video frames, a plurality of pieces of face data of the driver are obtained, respectively.
103. A first determination result is obtained on the basis of a first fatigue determination algorithm according to the plurality of face data.
Specifically, the plurality of face data are respectively identified through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one; the face recognition result comprises a fatigue state or a normal state;
storing a plurality of face recognition results into a preset cache queue;
and judging whether the proportion of the fatigue state face identification result in all the face identification results in the cache queue is greater than a preset warning threshold value or not, and obtaining a first judgment result.
Optionally, the setting process of the buffer queue may be:
setting the actual total capacity of the buffer queue according to a capacity calculation formula, wherein the actual total capacity is used for describing the capacity of the buffer queue during use;
and generating a buffer queue according to the actual total capacity.
Optionally, the capacity calculation formula in the above step may be: n = M1+ M2;
in the above capacity calculation formula, N is the actual total capacity of the buffer queue, M1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, and M2 is an integer with the absolute value smaller than M1.
Optionally, in the foregoing step, the calculation formula of the warning threshold may be: k = 0.2N;
in the calculation formula of the warning threshold, K is the warning threshold, and N is the actual total capacity of the buffer queue.
104. A plurality of posture data of the driver are respectively obtained from the plurality of video frames obtained in step 101, and a second determination result is obtained based on a second fatigue determination algorithm according to the plurality of posture data.
Specifically, the posture data is used for describing the skeletal posture of the driver; the process described in step 104 may be:
respectively obtaining a plurality of posture data of the driver according to the plurality of video frames and a feature extraction algorithm;
and obtaining a second judgment result according to the plurality of posture data and a second fatigue judgment algorithm trained in advance.
The training process of the second fatigue determination algorithm may be:
obtaining a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to an adaptive boosting algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from a plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
a second fatigue determination algorithm is trained based on the plurality of fatigue attitude data.
It should be noted that steps 102 to 103 are processes of acquiring a first determination result, step 104 is a process of acquiring a second determination result, and the process of acquiring the first determination result and the process of acquiring the second determination result are parallel processes.
105. If either of the first determination result and the second determination result indicates fatigue driving by the driver, a fatigue warning is generated.
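As an illustration only, the control flow of embodiment one can be sketched in Python as follows. The callables extract_face_data, extract_posture_data, first_fatigue_judgment and second_fatigue_judgment are hypothetical placeholders for the algorithms described above, not names taken from the patent; the sketch merely shows steps 102 to 105 with the two judgment processes evaluated in parallel, as noted in the preceding paragraph.

    # Illustrative control-flow sketch of embodiment one (steps 101 to 105).
    # The four callables passed in are hypothetical placeholders for the
    # algorithms described in the text, not part of the patent itself.
    from concurrent.futures import ThreadPoolExecutor

    def monitor_once(video_frames, extract_face_data, extract_posture_data,
                     first_fatigue_judgment, second_fatigue_judgment):
        """Return True if a fatigue warning should be generated for this batch of frames."""
        face_data = [extract_face_data(f) for f in video_frames]        # step 102
        posture_data = [extract_posture_data(f) for f in video_frames]  # part of step 104

        # Steps 102-103 and step 104 are described as parallel processes,
        # so the two judgments are evaluated concurrently here.
        with ThreadPoolExecutor(max_workers=2) as pool:
            first = pool.submit(first_fatigue_judgment, face_data)       # step 103
            second = pool.submit(second_fatigue_judgment, posture_data)  # step 104
            first_result, second_result = first.result(), second.result()

        # Step 105: warn when either judgment indicates fatigue driving.
        return first_result == "fatigue" or second_result == "fatigue"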
Embodiment two:
the embodiment of the application provides a fatigue driving monitoring method, and as shown in fig. 2, the method includes:
201. according to the video data collected in real time, a plurality of video frames for describing the driver are obtained.
Specifically, video data of a driver in the driving process is collected in real time;
identifying the video data in time order to obtain a plurality of video frames, wherein each video frame comprises a face image and a body image describing the driver; the identification process may be:
carrying out face recognition on the video data to obtain a plurality of video frames comprising the face image of the driver; this recognition can be realized by a deep recognition algorithm oriented to face recognition, and the embodiment of the invention does not limit the specific deep recognition algorithm;
carrying out body image recognition on the plurality of video frames comprising the face image of the driver to obtain the plurality of video frames used in the subsequent steps; this recognition can be realized by a deep recognition algorithm oriented to body recognition, and the embodiment of the invention does not limit the specific deep recognition algorithm.
In practical application, the device for acquiring video data in real time may be a Kinect, a vehicle-mounted camera configured with a camera module and a data transmission module, or other devices configured with a camera module and a data transmission module.
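As a minimal sketch only, step 201 can be illustrated with OpenCV, under the assumption that any off-the-shelf face detector may stand in for the (unspecified) deep recognition algorithm mentioned above; the Haar-cascade detector and the camera index used below are placeholders chosen for the example.

    # Sketch of step 201: keep, in time order, the frames that contain the driver's face.
    # OpenCV and its Haar cascade are illustrative stand-ins only; the video source
    # may be a Kinect, an in-vehicle camera or a recorded file.
    import cv2

    def capture_driver_frames(source=0, max_frames=100):
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(source)
        frames = []
        while cap.isOpened() and len(frames) < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Keep only frames in which at least one face is detected.
            if len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0:
                frames.append(frame)
        cap.release()
        return frames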
202. From the plurality of video frames, a plurality of pieces of face data of the driver are obtained, respectively.
Specifically, the face data includes eye feature data of the driver and mouth feature data of the driver;
according to a face recognition algorithm configured by the Kinect, eye feature data and mouth feature data corresponding to each video frame are respectively obtained from a plurality of video frames; the eye feature data is used for describing eye features of the driver, and the mouth feature data is used for describing mouth features of the driver.
When a person is fatigued, the brain triggers deep-breathing activity in the form of yawning; during yawning, a person's eyes are usually closed and the mouth is open. On this basis, whether the driver is yawning can be judged from the eye features and mouth features captured while driving: if the driver is yawning, the driver is presumed to be currently in a fatigue driving state; if the driver is not yawning, the driver is presumed to be currently in a normal driving state.
203. And respectively identifying the plurality of face data through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one.
Specifically, the face recognition result includes a fatigue state or a normal state;
2031. according to the eye feature data obtained in step 202, on the basis of the first fatigue determination algorithm, an eye opening distance describing the opening and closing state of the driver's eyes is obtained, and whether the eye opening distance is smaller than an eye-closing threshold value is determined; if yes, step 2032 is executed; otherwise, go to step 2033;
the eye-closing threshold value is a critical value describing the opening and closing state of the driver's eyes: when the eye opening distance is greater than the eye-closing threshold value, the driver's eyes are judged to be open; when the eye opening distance is smaller than the eye-closing threshold value, the driver's eyes are judged to be closed; in practical application, the first fatigue determination algorithm can be a deep recognition algorithm applied to human eye positioning, and the training process of the deep recognition algorithm is not limited in the application;
2032. according to the mouth feature data obtained in step 202, on the basis of the first fatigue determination algorithm, a mouth opening distance describing the opening and closing state of the driver's mouth is obtained, and whether the mouth opening distance is greater than a mouth-closing threshold value is determined; if yes, step 2034 is executed; otherwise, go to step 2033;
the mouth-closing threshold value is a critical value describing the opening and closing state of the driver's mouth: when the mouth opening distance is greater than the mouth-closing threshold value, the driver's mouth is judged to be open; when the mouth opening distance is smaller than the mouth-closing threshold value, the driver's mouth is judged to be closed; in practical application, the first fatigue determination algorithm can be a deep recognition algorithm applied to mouth positioning, and the training process of the deep recognition algorithm is not limited in the application;
2033. judging that the face recognition result corresponding to the face data is the normal state;
2034. judging that the face recognition result corresponding to the face data is the fatigue state;
steps 2031 to 2034 are performed on each piece of face data in turn until every piece of face data has obtained its corresponding face recognition result.
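A minimal sketch of the per-frame decision in steps 2031 to 2034, assuming the eye and mouth opening distances and the two thresholds are already supplied by the first fatigue determination algorithm and its calibration; the function and argument names are illustrative, not taken from the patent.

    # Sketch of steps 2031-2034: a frame is labelled "fatigue" only when it looks
    # like a yawn, i.e. the eyes are closed and the mouth is open.
    def classify_face_frame(eye_opening, mouth_opening,
                            eye_close_threshold, mouth_close_threshold):
        eyes_closed = eye_opening < eye_close_threshold       # step 2031
        mouth_open = mouth_opening > mouth_close_threshold    # step 2032
        return "fatigue" if (eyes_closed and mouth_open) else "normal"  # steps 2033/2034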
204. And setting the actual total capacity of the buffer queue according to a capacity calculation formula, and generating the buffer queue according to the actual total capacity.
Specifically, the actual total capacity of the cache queue is obtained through a capacity calculation formula; the buffer queue is used for storing a plurality of face recognition results obtained in step 203, and the actual total capacity is the capacity of the buffer queue when in use;
and generating a buffer queue according to the actual total capacity.
Illustratively, assuming that the actual total capacity obtained by the capacity calculation formula is N, N data nodes for storing the face recognition results obtained in step 203 are generated; the clockwise side of the first data node is connected to the counterclockwise side of the second data node, the clockwise side of the second data node is connected to the counterclockwise side of the third data node, and so on, until the clockwise side of the (N-1)-th node is connected to the counterclockwise side of the N-th node and the clockwise side of the N-th node is connected back to the counterclockwise side of the first node. The N data nodes are thus connected end to end in a ring, generating a buffer queue for storing the face recognition results obtained in step 203; this buffer queue is a ring queue.
Wherein, the capacity calculation formula can be: n = M1+ M2;
in the above capacity calculation formula, N is the actual total capacity of the buffer queue, and the actual total capacity is used to describe the capacity of the buffer queue when in use; m1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, and M2 is an integer with the absolute value smaller than M1.
It should be noted that step 204 may be executed between step 203 and step 205, may also be executed synchronously with step 201, and may also be executed after step 201 or step 202, and the execution order of step 204 is not limited in the embodiment of the present application.
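As a sketch only, the ring-shaped cache queue of step 204 can be modelled with a deque whose maximum length is the actual total capacity N = M1 + M2; the values of M1 and M2 below are example defaults, not values taken from the text.

    # Sketch of step 204: a deque with maxlen behaves like the end-to-end ring of
    # data nodes described above - once full, appending a new face recognition
    # result silently discards the oldest one (first in, first out).
    from collections import deque

    def make_result_queue(m1=50, m2=0):
        # m1: preset initial capacity (positive integer);
        # m2: adjustment value, an integer with |m2| < m1 (example values).
        assert m1 > 0 and abs(m2) < m1
        n = m1 + m2              # actual total capacity N
        return deque(maxlen=n)   # ring buffer holding face recognition results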
205. And storing the plurality of face recognition results obtained in the step 203 into the buffer queue generated in the step 204, then judging whether the proportion of the fatigue state face recognition results in all the face recognition results in the buffer queue is greater than a preset warning threshold value, and obtaining a first judgment result.
In particular, the method comprises the following steps of,
2051. sequentially storing a plurality of face recognition results obtained in the step 203 into the cache queue generated in the step 204 according to a time sequence; when the cache queue is in a full queue state, discarding the face recognition result stored firstly in the cache queue according to a First-in First-out (FIFO) principle, and storing a new face recognition result in a data node where the discarded face recognition result originally exists;
for example, assume that the actual total capacity N of the above cache queue is equal to 100, and that the cache queue is formed by data node N1, data node N2, data node N3, ..., data node N99 and data node N100 connected in order; data node N1 is the data node in which a face recognition result is stored first;
the face recognition results obtained in step 203 are: face recognition result a1, face recognition result a2, face recognition result a3, ..., face recognition result a99, face recognition result a100, face recognition result a101 and face recognition result a102;
the face recognition result a1 is stored in data node N1, the face recognition result a2 is stored in data node N2, the face recognition result a3 is stored in data node N3, ..., the face recognition result a99 is stored in data node N99, and the face recognition result a100 is stored in data node N100, at which point the buffer queue is full;
when the cache queue stores the face recognition result a101, according to the FIFO principle, the cache queue discards the face recognition result a1 stored in the data node N1 first, and then stores the face recognition result a101 into the data node N1;
similarly, when the cache queue continues to store the face recognition result a102, according to the FIFO principle, the cache queue discards the face recognition result a2 stored in the data node N2 first, and then stores the face recognition result a102 into the data node N2;
if other face recognition results behind the face recognition result a102 exist, the other face recognition results are also stored in the cache queue in order according to the mode;
2052. after the buffer queue is in a full queue state, acquiring the number of face recognition results in the fatigue state in the buffer queue, and calling the number as fatigue amount;
2053. dividing the fatigue amount obtained in step 2052 by the actual total capacity of the cache queue to obtain the proportion of fatigue-state face recognition results in the cache queue, referred to as the fatigue ratio;
2054. judging whether the fatigue ratio obtained in step 2053 is greater than a preset warning threshold; if so, setting the first judgment result as fatigue driving; otherwise, setting the first judgment result as normal driving;
the calculation formula of the warning threshold value can be as follows: k = 0.2N;
in the formula for calculating the warning threshold, K is the warning threshold, and N is the actual total capacity of the buffer queue generated in step 203.
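A sketch of sub-steps 2051 to 2054, continuing the deque-based queue from the sketch after step 204. The comparison is written as "fatigue amount greater than K" with K = 0.2 N, which is equivalent to comparing the fatigue ratio against 0.2; this reading of the warning threshold is an interpretation made for the example, not a quotation of the text.

    # Sketch of step 205: store results FIFO, then judge once the queue is full.
    from collections import deque

    def first_judgment(result_queue: deque, new_results):
        for r in new_results:
            result_queue.append(r)        # 2051: oldest result discarded when full
        n = result_queue.maxlen           # actual total capacity N
        if len(result_queue) < n:         # 2052: only judge a full queue
            return "normal"
        fatigue_amount = sum(1 for r in result_queue if r == "fatigue")
        k = 0.2 * n                       # warning threshold K = 0.2N
        return "fatigue" if fatigue_amount > k else "normal"   # 2053/2054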
Optionally, in order to further reduce the delay time of the process of obtaining the first judgment result and ensure the timeliness of the first judgment result, the value of the actual total capacity N may be adjusted by setting the value of M2 in the capacity calculation formula of step 204; the setting process may be:
acquiring delay time to be set;
acquiring actual total capacity corresponding to the delay time according to an aging formula;
setting the value of M2 according to the actual total capacity corresponding to the delay time;
wherein, the aging formula can be:
t = K / (a × f)
in the above aging equation, t is a delay time, i.e., a time taken for the first fatigue determination algorithm to recognize an action of yawning by the driver and set the first determination result as fatigue driving after the action;
K is the warning threshold value of the cache queue;
N is the actual total capacity of the buffer queue;
a is the accuracy of the first fatigue judgment algorithm identification;
f is the frame rate of the video data;
since the timeliness of the first fatigue determination algorithm is closely related to the actual total capacity of the buffer queue, the actual total capacity is dynamically adjusted through the capacity calculation formula in order to adapt to different occasions, thereby improving the timeliness of the final monitoring result. To further illustrate that setting the value of M2 in this way reduces the delay time and keeps the first determination result timely, the embodiment of the present application discloses the relationship, observed in actual use, between the actual total capacity (buffer length) and the delay time, as shown in fig. 3.
As can be seen from fig. 3, when the buffer length is in the range of 20-40, the delay time does not change significantly, because within this range the delay is mainly produced by the computer's data calculation and data transmission.
When the buffer length is greater than 40, the figure shows that the delay time increases with the buffer length, and the trend is linear, which is consistent with the aging formula.
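Under the assumption that the aging formula has the linear form t = K / (a × f) with K = 0.2 N, as reconstructed from the variable definitions above, the adjustment value M2 for a target delay can be sketched as follows; treat this purely as an illustration of the tuning idea, with example numbers.

    # Sketch of choosing M2 from a target delay time, assuming t = K / (a * f)
    # and K = 0.2 * N, i.e. N = a * f * t / 0.2.
    import math

    def adjustment_for_delay(target_delay_s, accuracy, frame_rate, m1):
        n = math.ceil(accuracy * frame_rate * target_delay_s / 0.2)
        m2 = n - m1
        assert abs(m2) < m1, "M2 must stay smaller than M1 in absolute value"
        return m2

    # Example: a 2 s target delay, 90% recognition accuracy, 25 fps and M1 = 200
    # give N = 225 and M2 = 25.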
206. According to the feature extraction algorithm and the video frames obtained in step 201, a plurality of posture data describing the bone posture of the driver are obtained respectively.
Specifically, on the basis of a posture recognition algorithm, a plurality of posture data corresponding one to one to the plurality of video frames obtained in step 201 are obtained in time order;
the posture data comprise head feature data of the driver, limb feature data of the driver and torso feature data of the driver; the target detection process may be:
detecting the human head in the video frames through the posture recognition algorithm to obtain a plurality of video frames comprising the head feature data of the driver; the posture recognition algorithm can be a target detection algorithm applied to human head recognition, and the training process of the target detection algorithm is not limited in the application;
performing limb target detection on the plurality of video frames comprising the head feature data of the driver through the posture recognition algorithm to obtain a plurality of video frames comprising the limb feature data of the driver; the posture recognition algorithm can be a target detection algorithm applied to limb recognition, and the training process of the target detection algorithm is not limited in the application;
performing torso target detection on the plurality of video frames comprising the limb feature data through the posture recognition algorithm to obtain a plurality of posture data comprising the torso feature data of the driver; the posture recognition algorithm can be a target detection algorithm applied to torso recognition, and the training process of the target detection algorithm is not limited in the application.
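As an illustration only, the cascaded target detection of step 206 can be sketched as below; detect_head, detect_limbs and detect_torso are hypothetical placeholders for the posture recognition algorithm, which the text does not tie to any particular detector.

    # Sketch of step 206: head, limb and torso features are extracted frame by
    # frame, each stage conditioned on the previous one, to form the posture data.
    def extract_posture_data(video_frames, detect_head, detect_limbs, detect_torso):
        posture_data = []
        for frame in video_frames:              # frames processed in time order
            head = detect_head(frame)           # head feature data
            limbs = detect_limbs(frame, head)   # limb feature data, given the head
            torso = detect_torso(frame, limbs)  # torso feature data, given the limbs
            posture_data.append({"head": head, "limbs": limbs, "torso": torso})
        return posture_data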
207. And obtaining a second judgment result according to the plurality of posture data obtained in the step 206 and a second fatigue judgment algorithm trained in advance.
Specifically, the plurality of posture data are respectively substituted into a pre-trained second fatigue judgment algorithm, and a plurality of posture judgment results corresponding to the plurality of posture data one to one are obtained; the posture judgment result comprises fatigue driving or normal driving;
judging whether a posture judgment result of fatigue driving exists in the plurality of posture judgment results, if so, setting a second judgment result as the fatigue driving; otherwise, the second determination result is set as normal driving.
The training process of the second fatigue determination algorithm may be:
obtaining a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to an adaptive boosting algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from a plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
training the second fatigue determination algorithm according to the plurality of fatigue posture data and a gradient descent algorithm.
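As a sketch under stated assumptions, the second fatigue determination algorithm can be illustrated with scikit-learn's AdaBoost classifier over flattened skeleton features; the feature layout and the use of scikit-learn are assumptions made for the example, and in practice training would need both fatigue and normal posture samples.

    # Sketch of training and applying the second fatigue determination algorithm.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def train_second_judgment(posture_vectors, labels):
        # posture_vectors: (n_samples, n_features) array built from posture data;
        # labels: 1 = fatigue driving, 0 = normal driving.
        clf = AdaBoostClassifier(n_estimators=100)
        clf.fit(np.asarray(posture_vectors), np.asarray(labels))
        return clf

    def second_judgment(clf, posture_vectors):
        # Step 207: fatigue if any frame's posture is classified as fatigue driving.
        preds = clf.predict(np.asarray(posture_vectors))
        return "fatigue" if np.any(preds == 1) else "normal"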
208. If either of the first determination result and the second determination result indicates fatigue driving by the driver, a fatigue warning is generated.
Specifically, if the first determination result is fatigue driving or the second determination result is fatigue driving, warning information is generated;
according to the warning information, a fatigue warning is sent to the driver in a voice broadcast mode.
Compared with indirectly monitoring the driver's current driving state by using various sensors to monitor vehicle behavior, this embodiment directly monitors the driver's current driving state by acquiring and analyzing video data recorded while the driver is driving; because the installation and maintenance cost of various sensors is saved, the monitoring cost of fatigue driving is reduced;
in addition, the method comprehensively uses information about both the driver's posture and the driver's face, so the probability that the driver's posture or face information cannot be recognized, or is recognized incorrectly, is reduced, which improves the accuracy of the monitoring result.
Embodiment three:
The embodiment of the present application provides a fatigue driving monitoring device 300, and as shown in fig. 4, the device 300 includes:
the acquisition module 301 is used for acquiring video data in real time;
a processing module 302, configured to obtain a plurality of video frames describing a driver according to the video data;
the processing module 302 is further configured to obtain a plurality of face data of the driver from the plurality of video frames, respectively;
the processing module 302 is further configured to obtain a plurality of posture data of the driver from the plurality of video frames, respectively, where the posture data is used for describing a bone posture of the driver;
a determining module 303, configured to obtain a first determination result based on a first fatigue determination algorithm according to the plurality of face data;
the determination module 303 is further configured to obtain a second determination result based on a second fatigue determination algorithm according to the plurality of posture data;
a warning module 304 for generating a fatigue warning when any one of the first determination result and the second determination result indicates fatigue driving of the driver.
Optionally, the apparatus 300 further comprises:
a front-end module 305, configured to set an actual total capacity of the buffer queue according to a capacity calculation formula, where the actual total capacity is used to describe a capacity of the buffer queue when the buffer queue is used; and generating a buffer queue according to the actual total capacity.
Wherein, the capacity calculation formula can be: n = M1+ M2;
n is the actual total capacity of the buffer queue, M1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, and M2 is an integer with the absolute value smaller than M1.
Optionally, the determining module 303 is specifically configured to:
respectively identifying the plurality of face data through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one; the face recognition result comprises a fatigue state or a normal state;
storing a plurality of face recognition results into a preset cache queue;
and judging whether the proportion of the fatigue state face identification results in all the face identification results in the cache queue is greater than a preset warning threshold value or not, and obtaining a first judgment result.
The calculation formula of the warning threshold value can be as follows: k = 0.2N;
in the calculation formula of the warning threshold, K is the warning threshold, and N is the actual total capacity of the buffer queue.
Optionally, the processing module 302 is specifically configured to: respectively obtaining a plurality of posture data of the driver according to a plurality of video frames and a feature extraction algorithm;
optionally, the determining module 303 is specifically configured to: and obtaining a second judgment result according to the plurality of posture data and a second fatigue judgment algorithm trained in advance.
Optionally, the apparatus 300 further comprises:
a training module 306, configured to obtain a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to the adaboost algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from a plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
a second fatigue determination algorithm is trained based on the plurality of fatigue posture data.
Embodiment four:
The embodiment of the present application provides a fatigue driving monitoring system, referring to fig. 5, the system includes:
the acquisition device 401 is used for acquiring video data in real time;
processing means 402 for obtaining a plurality of video frames describing a driver from the video data;
the processing device 402 is further configured to obtain a plurality of face data of the driver from the plurality of video frames, respectively;
the processing device 402 is further configured to obtain, from the plurality of video frames, a plurality of posture data of the driver, respectively, the posture data being used for describing a bone posture of the driver;
a decision device 403 for obtaining a first decision result based on a first fatigue decision algorithm based on the plurality of face data;
the determining device 403 is further configured to obtain a second determination result based on a second fatigue determination algorithm according to the plurality of posture data;
warning means 404 for generating a fatigue warning when either one of the first determination result and the second determination result indicates fatigue driving of the driver.
Optionally, the determining device 403 is specifically configured to:
respectively identifying the plurality of face data through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one; the face recognition result comprises a fatigue state or a normal state;
storing a plurality of face recognition results into a preset cache queue;
and judging whether the proportion of the fatigue state face identification results in all the face identification results in the cache queue is greater than a preset warning threshold value or not, and obtaining a first judgment result.
Optionally, the system further comprises:
a front-end device 405, configured to set an actual total capacity according to a capacity calculation formula, where the actual total capacity is used to describe a capacity of the cache queue when the cache queue is used; and generating a buffer queue according to the actual total capacity.
Wherein, the capacity calculation formula can be: n = M1+ M2;
n is the actual total capacity of the buffer queue, M1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, M2 is an integer with the absolute value less than M1;
the alert threshold may be calculated as: k = 0.2N;
k is an alert threshold value, and N is the actual total capacity of the buffer queue.
Optionally, the processing device 402 is specifically configured to: respectively obtaining a plurality of posture data of the driver according to a plurality of video frames and a feature extraction algorithm;
optionally, the determining device 403 is specifically configured to: and obtaining a second judgment result according to the plurality of posture data and a second fatigue judgment algorithm trained in advance.
Optionally, the system further comprises:
a training device 406, configured to obtain a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to the adaboost algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from a plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
a second fatigue determination algorithm is trained based on the plurality of fatigue posture data.
Embodiment five:
the embodiment of the application provides a computer-readable storage medium, in which one or more preset programs are stored, and when the preset programs are executed by a processor, the steps of the fatigue driving monitoring method in the first embodiment or the second embodiment are implemented.
The embodiment of the application provides a fatigue driving monitoring method, a system, a device and a readable storage medium, wherein a plurality of video frames for describing a driver are obtained according to video data acquired in real time; obtaining a plurality of face data of the driver from the plurality of video frames, respectively; obtaining a first judgment result on the basis of a first fatigue judgment algorithm according to the plurality of face data; respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver; obtaining a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of attitude data; and if any one of the first judgment result and the second judgment result meets a preset condition, generating a fatigue warning.
Compared with a mode of monitoring the vehicle behavior by utilizing various sensors to indirectly monitor the current driving state of the driver, the mode of directly monitoring the current driving state of the driver by acquiring and analyzing the video data in the driving process of the driver saves various sensor preparation and maintenance expenses, so that the monitoring cost of fatigue driving is reduced;
meanwhile, the method provided by the application directly monitors the current driving state of the driver, so that the timeliness and the accuracy of the monitoring result are improved;
and the method comprehensively utilizes the information of the posture and the face of the driver, so that the situation that the posture or the face information of the driver cannot be identified can be avoided, and the accuracy of the monitoring result can be further improved.
It should be noted that: when the fatigue driving monitoring device and system provided by the above embodiments execute the fatigue driving monitoring method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be distributed to different functional modules as needed, that is, the internal structures of the device and the system may be divided into different functional modules to complete all or part of the functions described above. In addition, the fatigue driving monitoring method, device and system provided by the above embodiments belong to the same concept; the specific implementation process is described in detail in the method embodiments and is not repeated here.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of monitoring fatigue driving, the method comprising:
obtaining a plurality of video frames for describing a driver according to the video data acquired in real time;
obtaining a plurality of face data of the driver from the plurality of video frames, respectively;
obtaining a first judgment result on the basis of a first fatigue judgment algorithm according to the plurality of face data;
respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
obtaining a second judgment result on the basis of a second fatigue judgment algorithm according to the plurality of posture data;
generating a fatigue warning if either one of the first determination result and the second determination result indicates fatigue driving by the driver.
2. The method of claim 1, wherein obtaining a first determination based on a first fatigue determination algorithm based on the plurality of facial data comprises:
respectively identifying the plurality of face data through a first fatigue judgment algorithm to obtain a plurality of face identification results corresponding to the plurality of face data one by one; the face recognition result comprises a fatigue state or a normal state;
storing the multiple face recognition results into a preset cache queue;
and judging whether the proportion of the fatigue state face identification result in all the face identification results in the cache queue is greater than a preset warning threshold value or not, and obtaining a first judgment result.
3. The method of claim 2, wherein before storing the face recognition results in a predetermined buffer queue, the method further comprises:
setting actual total capacity according to a capacity calculation formula, wherein the actual total capacity is used for describing the capacity of the cache queue during use;
and generating the buffer queue according to the actual total capacity.
4. The method of claim 3, wherein the capacity calculation formula is: n = M1+ M2;
wherein, N is the actual total capacity of the buffer queue, M1 is the preset initial capacity of the buffer queue, and M1 is a positive integer; m2 is the adjustment value of the buffer queue capacity, and M2 is an integer with an absolute value smaller than M1.
5. The method according to claim 4, wherein the alert threshold is calculated by the formula: k = 0.2N;
and K is the warning threshold value, and N is the actual total capacity of the buffer queue.
6. The method of claim 1, wherein the obtaining, from the plurality of video frames, a plurality of posture data of the driver, respectively, and the obtaining a second determination result based on a second fatigue determination algorithm according to the plurality of posture data include:
respectively obtaining the plurality of posture data of the driver according to the plurality of video frames and the feature extraction algorithm;
and obtaining a second judgment result according to the plurality of posture data and a second fatigue judgment algorithm trained in advance.
7. The method of claim 6, wherein the training process of the second fatigue decision algorithm comprises:
obtaining a plurality of fatigue video frames from a training video; the fatigue video frame is used for describing that the driver is in fatigue driving;
constructing a second fatigue judgment algorithm according to the adaboost algorithm;
respectively obtaining a plurality of fatigue posture data of the driver from the plurality of fatigue video frames; the fatigue posture data is used for describing the skeleton posture of the driver during fatigue driving;
and training a second fatigue judgment algorithm according to the plurality of fatigue posture data.
8. A fatigue driving monitoring device, the device comprising:
the acquisition module is used for acquiring video data in real time;
the processing module is used for obtaining a plurality of video frames for describing the driver according to the video data;
the processing module is further used for respectively obtaining a plurality of face data of the driver from the plurality of video frames;
the processing module is further used for respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
the judging module is used for obtaining a first judging result on the basis of a first fatigue judging algorithm according to the plurality of face data;
the judging module is further used for obtaining a second judging result on the basis of a second fatigue judging algorithm according to the plurality of posture data;
a warning module configured to generate a fatigue warning when any one of the first determination result and the second determination result indicates fatigue driving of the driver.
9. A fatigue driving monitoring system, the system comprising:
the acquisition device is used for acquiring video data in real time;
processing means for obtaining a plurality of video frames describing a driver from the video data;
the processing device is further used for respectively obtaining a plurality of face data of the driver from the plurality of video frames;
the processing device is further used for respectively obtaining a plurality of posture data of the driver from the plurality of video frames, wherein the posture data is used for describing the bone posture of the driver;
the judging device is used for obtaining a first judging result on the basis of a first fatigue judging algorithm according to the plurality of face data;
the judging device is further used for obtaining a second judging result on the basis of a second fatigue judging algorithm according to the plurality of posture data;
warning means for generating a fatigue warning when any one of the first determination result and the second determination result indicates fatigue driving of the driver.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202010732492.9A 2020-07-27 2020-07-27 Fatigue driving monitoring method, system and device and readable storage medium Pending CN111882827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010732492.9A CN111882827A (en) 2020-07-27 2020-07-27 Fatigue driving monitoring method, system and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010732492.9A CN111882827A (en) 2020-07-27 2020-07-27 Fatigue driving monitoring method, system and device and readable storage medium

Publications (1)

Publication Number Publication Date
CN111882827A true CN111882827A (en) 2020-11-03

Family

ID=73201680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010732492.9A Pending CN111882827A (en) 2020-07-27 2020-07-27 Fatigue driving monitoring method, system and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111882827A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104123545A (en) * 2014-07-24 2014-10-29 江苏大学 Real-time expression feature extraction and identification method
CN105551182A (en) * 2015-11-26 2016-05-04 吉林大学 Driving state monitoring system based on Kinect human body posture recognition
CN106850714A (en) * 2015-12-04 2017-06-13 中国电信股份有限公司 Caching sharing method and device
CN106377228A (en) * 2016-09-21 2017-02-08 中国人民解放军国防科学技术大学 Monitoring and hierarchical-control method for state of unmanned aerial vehicle operator based on Kinect
CN108038453A (en) * 2017-12-15 2018-05-15 罗派智能控制技术(上海)有限公司 A kind of driver's state-detection and identifying system based on RGBD
CN110298213A (en) * 2018-03-22 2019-10-01 北京深鉴智能科技有限公司 Video analytic system and method
CN108720851A (en) * 2018-05-23 2018-11-02 释码融和(上海)信息科技有限公司 A kind of driving condition detection method, mobile terminal and storage medium
CN109953763A (en) * 2019-02-28 2019-07-02 扬州大学 A kind of vehicle carried driving behavioral value early warning system and method based on deep learning
CN110096957A (en) * 2019-03-27 2019-08-06 苏州清研微视电子科技有限公司 The fatigue driving monitoring method and system merged based on face recognition and Activity recognition
CN110891023A (en) * 2019-10-31 2020-03-17 上海赫千电子科技有限公司 Signal routing conversion method and device based on priority strategy
CN111414813A (en) * 2020-03-03 2020-07-14 南京领行科技股份有限公司 Dangerous driving behavior identification method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
程雷 (Cheng Lei): "Research on Kinect-based Safe Driving State Monitoring Methods and Technologies", China Master's Theses Full-text Database, Engineering Science and Technology II *
银声音像出版社 (Yinsheng Audio-Visual Press): "Prediction and Control Technical Means for Major Gas and Coal Dust Explosion Accidents in Coal Mines (Volume 1)", 31 March 2004


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201103)