CN111645695B - Fatigue driving detection method and device, computer equipment and storage medium - Google Patents

Fatigue driving detection method and device, computer equipment and storage medium

Info

Publication number
CN111645695B
CN111645695B (application CN202010601635.2A)
Authority
CN
China
Prior art keywords
organ
face
image
classification result
fatigue
Prior art date
Legal status
Active
Application number
CN202010601635.2A
Other languages
Chinese (zh)
Other versions
CN111645695A (en)
Inventor
王珂尧
冯浩城
岳海潇
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010601635.2A
Publication of CN111645695A
Application granted
Publication of CN111645695B
Legal status: Active

Classifications

    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08 · related to drivers or passengers
    • B60W2040/0818 · · Inactivity or incapacity of driver
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 · Recognising the driver's state or behaviour, e.g. attention or drowsiness

Abstract

The application discloses a fatigue driving detection method and apparatus, a computer device and a storage medium, relating to the fields of image processing and deep learning. The method comprises the following steps: acquiring, in real time, a face image frame matched with a vehicle driver, and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame; acquiring state classification results respectively corresponding to the organ images of the face image frame; calculating long-term classification results respectively corresponding to the fatigue detection organs according to the state classification results of the organ images in a plurality of continuously acquired face image frames; and performing fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs. With this technical scheme, whether the driver is in a fatigue driving state can be detected quickly and accurately.

Description

Fatigue driving detection method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing and the field of deep learning, and in particular, to a method and an apparatus for detecting fatigue driving, a computer device, and a storage medium.
Background
With the popularization of all kinds of vehicles, driving safety has become an important topic. Effectively detecting driver fatigue and prompting the driver in time is an important means of improving driving safety and reducing traffic accidents.
In the prior art, driver fatigue is predicted by acquiring an image of the driver in the cab and extracting features from that image. Such methods suffer from low detection accuracy and slow detection speed, and cannot detect a driver's fatigue driving state in a timely, accurate and rapid manner.
Disclosure of Invention
The application provides a fatigue driving detection method and apparatus, a computer device and a storage medium, so as to detect quickly and accurately whether a driver is in a fatigue driving state.
In a first aspect, an embodiment of the application discloses a method for detecting fatigue driving, which includes:
acquiring a face image frame matched with a vehicle driver in real time, and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame;
acquiring state classification results respectively corresponding to the organ images of the face image frame;
calculating long-term classification results respectively corresponding to the fatigue detection organs according to state classification results respectively corresponding to the organ images of the plurality of continuously acquired face image frames;
and carrying out fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs.
In a second aspect, an embodiment of the present application discloses a detection apparatus for fatigue driving, including:
the organ image acquisition module is used for acquiring a face image frame matched with a vehicle driver in real time and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame;
a state classification result acquisition module for acquiring state classification results corresponding to the organ images of the face image frame respectively;
the long-term classification result acquisition module is used for calculating long-term classification results respectively corresponding to the fatigue detection organs according to state classification results respectively corresponding to the organ images of a plurality of continuously acquired face image frames;
and the fatigue detection module is used for carrying out fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs.
In a third aspect, an embodiment of the present application discloses an electronic device, which includes at least one processor and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the fatigue driving detection method according to any one of the embodiments of the present application.
In a fourth aspect, embodiments of the present application disclose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for detecting fatigue driving described in any of the embodiments of the present application.
According to the technical scheme of the embodiment of the application, whether the driver is in a fatigue driving state or not is quickly and accurately detected.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart of a method for detecting fatigue driving in an embodiment of the present application;
FIG. 2a is a flow chart of a method for detecting fatigue driving in an embodiment of the present application;
FIG. 2b is a schematic diagram of an eye state classification model suitable for use in the embodiments of the present application;
FIG. 2c is a schematic structural diagram of a mouth state classification model suitable for use in embodiments of the present application;
FIG. 2d is a flowchart of a fatigue driving detection method in a specific application scenario of the present application;
fig. 3 is a schematic structural diagram of a fatigue driving detection device in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
Fig. 1 is a flowchart of a fatigue driving detection method according to an embodiment of the present application. The technical solution of this embodiment is applicable to determining whether a driver is driving while fatigued. The method can be executed by a fatigue driving detection apparatus, which can be implemented in software and/or hardware and is generally integrated in an electronic device used together with a camera.
As shown in fig. 1, the technical solution of the embodiment of the present application specifically includes the following steps:
s110, acquiring a human face image frame matched with a vehicle driver in real time, and acquiring an organ image of at least one fatigue detection organ corresponding to the human face image frame.
A face image frame is an image, acquired in real time, that contains the vehicle driver; it can be captured by a camera mounted on the vehicle, and the vehicle may be any vehicle that needs to be driven or assisted by a driver, such as an automobile, a train or an airplane. A fatigue detection organ is an organ used to judge whether the driver is in a fatigue driving state, and its organ image, obtained from the face image frame, contains the whole region of that organ. The purpose of acquiring organ images is to perform state classification on the organ images alone, thereby improving the accuracy of fatigue driving detection.
In an alternative embodiment of the present application, the organ image of the fatigue detection organ may include: a left eye image corresponding to the left eye, a right eye image corresponding to the right eye, and a mouth image corresponding to the mouth;
in the embodiment of the application, the left eye, the right eye and the mouth are selected as fatigue detection organs, so that whether the driver is in a fatigue driving state can be judged more accurately, behaviors such as dozing off, yawning and the like are usually shown when the driver is in fatigue driving, and when the left eye or the right eye is in a closed state for a long time or the mouth is in an open state, the fatigue driving behaviors of the driver can be shown.
In an alternative embodiment of the present application, acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame may include: in the human face image frame, identifying a human face area; acquiring a plurality of face key points in the face area; and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame according to the plurality of face key points.
The face region is the region, recognized within the face image frame, that contains only the driver's face. Recognizing the face region first means that the fatigue driving state can be judged on the face region alone, which improves the accuracy of the judgment. The face region may be identified by a preset image recognition algorithm, or by inputting the face image frame into a face frame detection model; this embodiment does not limit the specific manner of identifying the face region in the face image frame.
The face key points are points identified in the face region and related to face features, for example, the face key points may be eye corner points, upper eyelid points, lower eyelid points, nose tip points, eyebrow tail points, upper lip points, lower lip points, and the like.
In the embodiment of the application, a face region is determined in a face image frame, face key points in the face region are obtained, and organ images corresponding to fatigue detection organs are obtained according to the face key points corresponding to the fatigue detection organs.
And S120, acquiring state classification results respectively corresponding to the organ images of the human face image frame.
And the state classification result is used for representing the state of the fatigue detection organ corresponding to the organ image in the human face image frame.
In an optional embodiment of the present application, the state classification result may include: open or closed.
The state classification result of the organ image can be obtained through a pre-trained state classification result detection model, and can also be obtained through judging the position relation of the feature points in the organ image.
And S130, calculating long-term classification results respectively corresponding to the fatigue detection organs according to the state classification results respectively corresponding to the organ images of the plurality of continuously acquired human face image frames.
The long-term classification result is a comprehensive state classification result obtained from a plurality of human face image frames which are acquired within a period of time and respectively correspond to each fatigue detection organ.
In this embodiment, for a plurality of continuously acquired face image frames, after the state classification result of each organ image in each frame is obtained, the long-term classification result of each fatigue detection organ corresponding to those frames is obtained. The advantage of this arrangement is that the state of each fatigue detection organ over a period of time can be considered comprehensively when judging whether the driver is in a fatigue driving state, which improves the accuracy of fatigue driving detection.
And S140, carrying out fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs.
In the embodiment of the application, whether the driver is in a fatigue driving state or not is judged according to the long-term classification result of each fatigue detection organ, so that the accuracy of fatigue driving detection can be improved. Meanwhile, the fatigue detection of the driver can be realized in real time by acquiring the face image frames in real time and calculating the long-term classification result of each fatigue detection organ according to the plurality of face image frames, and the fatigue driving state of the driver can be detected in time, so that the driving safety is improved.
According to the technical scheme of this embodiment, a plurality of consecutive face image frames are acquired, the state classification results of the organ images corresponding to each fatigue detection organ in each frame are obtained, the long-term classification results of the fatigue detection organs are calculated, and fatigue detection is performed on the driver according to those long-term classification results. This solves the prior-art problems of low accuracy, slow detection speed, and the inability to detect a driver's fatigue driving state in a timely, accurate and rapid manner, and achieves quick and accurate detection of whether the driver is in a fatigue driving state.
Fig. 2a is a flowchart of a method for detecting fatigue driving in an embodiment of the present application, and the present embodiment further embodies a process of acquiring an organ image, a process of acquiring a state classification result of each organ image, and a process of calculating a long-term classification result of each fatigue detection organ on the basis of the above embodiments.
Correspondingly, as shown in fig. 2a, the technical solution of the embodiment of the present application specifically includes the following steps:
and S210, acquiring a human face image frame matched with a vehicle driver in real time.
And S220, identifying a face area in the face image frame.
In an alternative embodiment of the present application, identifying a face region in the face image frame may include: inputting the human face image frame into a human face frame detection model, and acquiring a plurality of human face frame coordinates output by the human face frame detection model; and determining a face area in the face image frame according to the coordinates of each face frame.
In the embodiment of the application, the face region identification is carried out on the face image frame through the face frame detection model, the face frame detection model outputs the coordinates of a plurality of face frames corresponding to the face image frame, and the range of the face region can be defined according to the coordinates of the face frames.
Optionally, the face frame detection model may be a convolutional neural network trained according to a deep learning method, and the output result is 4 coordinates corresponding to the face frame, but the embodiment does not limit the specific form, training process, and output result of the face frame detection model.
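As a minimal illustration of this interface, the sketch below uses a hypothetical `FaceBoxModel` wrapper whose only contract, taken from the text, is an image in and four box coordinates out; the dummy prediction is a placeholder, not the patent's trained network.

```python
# Minimal sketch of the face-box detection step. FaceBoxModel is a
# hypothetical stand-in for the trained face frame detection model; only
# the input/output contract (image in, four box coordinates out) comes
# from the text above.
import numpy as np

class FaceBoxModel:
    """Placeholder for a trained face frame detection CNN."""
    def predict(self, frame: np.ndarray) -> np.ndarray:
        # A real model would regress (x1, y1, x2, y2) per detected face;
        # here a dummy box covering the whole frame is returned.
        h, w = frame.shape[:2]
        return np.array([[0, 0, w, h]], dtype=np.float32)

def face_region(frame: np.ndarray, model: FaceBoxModel) -> np.ndarray:
    """Crop the face region defined by the first detected face box."""
    x1, y1, x2, y2 = model.predict(frame)[0].astype(int)
    return frame[y1:y2, x1:x2]
```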
And S230, acquiring a plurality of face key points in the face area.
In an optional embodiment of the present application, acquiring a plurality of face key points in a face region may include: and inputting the face image frame marked with the face area into a face key point detection model, and acquiring a plurality of face key points output by the face key point detection model.
The face key point detection model is used for identifying key points related to face features in a face area of an input face image frame.
In the embodiment of the application, the face key points are obtained by inputting the face image frames marked with the face regions into the face key point detection model. The output of the face keypoint detection model is the coordinates of a plurality of face keypoints corresponding to the face region in the face image frame.
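A matching sketch of the key point step is shown below; the 72-point layout and the `LandmarkModel` name are illustrative assumptions, since the patent only specifies that the model outputs the coordinates of a plurality of face key points for the face region.

```python
# Sketch of the face key point detection interface.
import numpy as np

class LandmarkModel:
    """Placeholder for a trained face key point detection model."""
    def predict(self, face: np.ndarray) -> np.ndarray:
        # A real model would regress one (x, y) pair per key point;
        # a dummy (72, 2) array of coordinates is returned here.
        return np.zeros((72, 2), dtype=np.float32)
```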
S240, calculating the eye corner distance of the left eye according to the coordinates of the two eye corners corresponding to the left eye among the face key points; calculating a left-eye affine transformation matrix according to the eye corner distance of the left eye and the coordinates of the left-eye center point among the face key points, and determining the left-eye image in the face image frame according to the left-eye affine transformation matrix.
In this embodiment, the left-eye image, right-eye image and mouth image are obtained by performing affine transformation on the face image frame according to the affine transformation matrices corresponding to the left eye, the right eye and the mouth, respectively. Each affine transformation matrix represents the transformation relationship between the face image frame and the corresponding left-eye, right-eye or mouth image.
Specifically, the two eye corner coordinates of the left eye are selected from the face key points, and the eye corner distance and center point coordinates of the left eye are calculated from them. A left-eye affine transformation matrix is obtained from this distance and center point, and affine transformation of the face image frame by this matrix yields the left-eye image.
S250, calculating the eye corner distance of the right eye according to the coordinates of the two eye corners corresponding to the right eye among the face key points; calculating a right-eye affine transformation matrix according to the eye corner distance of the right eye and the coordinates of the right-eye center point among the face key points, and determining the right-eye image in the face image frame according to the right-eye affine transformation matrix.
The right-eye image is obtained analogously: the two eye corner coordinates of the right eye are selected from the face key points, the eye corner distance and center point coordinates of the right eye are calculated, a right-eye affine transformation matrix is obtained from them, and affine transformation of the face image frame by this matrix yields the right-eye image.
S260, calculating the mouth corner distance of the mouth according to the two mouth corner coordinates corresponding to the mouth among the face key points; calculating a mouth affine transformation matrix according to the mouth corner distance of the mouth and the coordinates of the mouth center point among the face key points, and determining the mouth image in the face image frame according to the mouth affine transformation matrix.
Likewise, the two mouth corner coordinates are selected from the face key points, the mouth corner distance and the mouth center point coordinates are calculated, a mouth affine transformation matrix is obtained from them, and affine transformation of the face image frame by this matrix yields the mouth image.
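The cropping pattern shared by S240–S260 can be sketched as follows: derive a scale from the corner distance, center the crop on the organ center point, and warp the face image frame with a 2×3 affine matrix. The 64×64 output size and the 1.5 margin factor are illustrative assumptions not fixed by the patent.

```python
# Minimal sketch of the organ-cropping step shared by S240-S260.
import cv2
import numpy as np

def crop_organ(frame: np.ndarray, corner_a, corner_b, center,
               out_size: int = 64, margin: float = 1.5) -> np.ndarray:
    corner_a, corner_b, center = (np.asarray(p, dtype=np.float32)
                                  for p in (corner_a, corner_b, center))
    dist = float(np.linalg.norm(corner_a - corner_b))  # corner distance
    scale = out_size / (margin * dist)                 # output pixels per input pixel
    # Translation terms that map `center` onto the crop centre after scaling.
    tx = out_size / 2.0 - scale * center[0]
    ty = out_size / 2.0 - scale * center[1]
    M = np.array([[scale, 0.0, tx],
                  [0.0, scale, ty]], dtype=np.float32)  # affine transformation matrix
    return cv2.warpAffine(frame, M, (out_size, out_size))
```

The same helper serves the left eye, the right eye and the mouth; only the key points passed in differ.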
And S270, respectively inputting the organ images of the human face image frame into a state classification model matched with the organ images, and acquiring a first classification result corresponding to each organ image.
Wherein the state classification model is used to detect the state of the fatigue detection organ corresponding to each organ image, and the first classification result may be open or closed.
For example, fig. 2b provides a schematic structural diagram of an eye state classification model. As shown in fig. 2b, the eye state classification model is a convolutional neural network comprising 5 convolutional layers and 3 pooling layers, and the numbers in fig. 2b are the sizes of the left-eye or right-eye image as it passes through the network.
Fig. 2c provides a schematic structural diagram of a mouth state classification model. As shown in fig. 2c, the mouth state classification model is likewise a convolutional neural network comprising 5 convolutional layers and 3 pooling layers, and the numbers in fig. 2c are the sizes of the mouth image as it passes through the network.
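A PyTorch sketch of a network matching that description is given below: five convolutional layers and three pooling layers feeding a two-way open/closed output. The channel widths and the 64×64 input resolution are assumptions; the concrete sizes appear only in Figs. 2b/2c, which are not reproduced here.

```python
import torch
import torch.nn as nn

class OrganStateNet(nn.Module):
    """Five conv layers + three pooling layers, two-way open/closed head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),    # conv 1
            nn.MaxPool2d(2),                              # pool 1: 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),   # conv 2
            nn.MaxPool2d(2),                              # pool 2: 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),   # conv 3
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),   # conv 4
            nn.MaxPool2d(2),                              # pool 3: 16 -> 8
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),  # conv 5
        )
        self.head = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))
```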
S280, acquiring a second classification result corresponding to each organ image according to the relative position relation among a plurality of organ key points corresponding to each organ image.
Wherein the second classification result can be open or closed.
In the embodiment of the present application, the second classification result of each organ image is determined by the position relationship between the key points in each organ image. Illustratively, the eyelid distance corresponding to the left eye is calculated from the upper eyelid point and the lower eyelid point in the left eye image, and whether the left eye is open or closed is determined according to the eyelid distance.
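That eyelid-distance example can be sketched as a simple ratio test; the 0.2 threshold is an illustrative assumption, since the patent does not state a value.

```python
# Sketch of the geometric (second) classification for one eye: the
# upper/lower eyelid distance is compared with the eye corner distance
# and the ratio is thresholded.
import numpy as np

def eye_state_by_keypoints(upper_lid, lower_lid, corner_a, corner_b,
                           ratio_thresh: float = 0.2) -> str:
    lid_gap = np.linalg.norm(np.subtract(upper_lid, lower_lid))
    eye_width = np.linalg.norm(np.subtract(corner_a, corner_b))
    return "open" if lid_gap / eye_width > ratio_thresh else "closed"
```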
And S290, determining a state classification result corresponding to each organ image according to the first classification result and the second classification result.
In this embodiment, for each organ image, the state classification result is judged comprehensively from the output of the state classification model and the positional relationship of the key points. The advantage of this arrangement is that it improves the accuracy of judging the state of each fatigue detection organ, and thereby the accuracy of fatigue driving detection.
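The patent does not spell out the fusion rule; one plausible reading, shown purely as an assumption, encodes the two labels numerically and averages them, which also yields the numeric per-frame state value that the weighted accumulation below can operate on.

```python
# Hypothetical fusion of the first (model) and second (key point)
# classification results. The open=1.0 / closed=0.0 encoding and the
# averaging are assumptions, not the patent's stated rule.
def fuse_state_results(first: str, second: str) -> float:
    encode = {"open": 1.0, "closed": 0.0}
    return (encode[first] + encode[second]) / 2.0
```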
S2100, respectively obtaining the state classification results of the organ images from the current face image frame and the set number of historical face image frames.
In this embodiment, the state accumulation result of the current face image frame is determined from the state classification results of the current frame and the historical face image frames. This improves the accuracy of judging the state of each organ image in the current face image frame.
S2110, determining the detection weights respectively corresponding to the current face image frame and each historical face image frame.
The detection weight is inversely related to the acquisition time difference of the face image frame, and the acquisition time difference is the time difference between the acquisition time of the face image frame and the current system time.
For example, when a face image frame is obtained every 1s, and the state accumulation result of each organ image of the current face image frame is calculated according to the current face image frame and 4 historical face image frames, the weights of the current face image frame and the 4 historical face image frames can be set as follows: 0.8, 0.08, 0.06, 0.04, and 0.02, but the present embodiment does not limit the specific values of the detection weights of the current face image frame and each historical face image frame.
S2120, calculating a state accumulation result of each organ image according to state classification results of each organ image in the current face image frame and each historical face image frame and detection weights respectively corresponding to the current face image frame and each historical face image frame.
In this embodiment, for each organ image, the state accumulation result is the weighted sum of the state classification results: the result of the current face image frame and the results of the historical face image frames are each multiplied by their detection weights and added together.
S2130, judging whether all face image frames have been processed; if so, executing S2140, otherwise returning to S2100.
S2140, calculating long-term classification results of each fatigue detection organ according to the state accumulation result and preset long-term classification weights.
In the embodiment of the application, after the state accumulation result of each organ image in each face image frame is obtained, the final long-term classification result of each fatigue detection organ corresponding to each organ image is obtained according to the sum of the products of the state accumulation result of each face image frame and the long-term classification weight.
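A sketch of S2100–S2140 for a single organ follows: numeric per-frame state values (newest first) are weighted by the detection weights from the example above, and the per-frame accumulation results are then weighted by long-term classification weights. The uniform long-term weights and the 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np

DETECTION_WEIGHTS = np.array([0.8, 0.08, 0.06, 0.04, 0.02])  # current frame first

def state_accumulation(states: np.ndarray) -> float:
    """states: numeric state results of the current + 4 history frames."""
    return float(np.dot(DETECTION_WEIGHTS, states))

def long_term_classification(accumulations: np.ndarray) -> str:
    # Assumed long-term classification weights: uniform over all frames.
    long_term_weights = np.full(len(accumulations), 1.0 / len(accumulations))
    score = float(np.dot(long_term_weights, accumulations))
    return "open" if score > 0.5 else "closed"
```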
And S2150, performing fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs.
In the embodiment of the application, the long-term classification result can accurately represent the comprehensive state of the fatigue detection organs within a period of time, so that whether the driver is in the fatigue driving state or not can be accurately judged by combining the long-term classification results of all the fatigue detection organs.
In an optional embodiment of the present application, the fatigue detection for the driver according to the long-term classification result respectively corresponding to each fatigue detection organ may include at least one of the following: if the long-term classification result corresponding to the left eye is closed, determining that the driver is in a fatigue driving state; if the long-term classification result corresponding to the right eye is closed, determining that the driver is in a fatigue driving state; if the long-term classification result corresponding to the mouth is open, determining that the driver is in a fatigue driving state; and if the long-term classification results respectively corresponding to the left eye and the right eye are closed, determining that the driver is in a fatigue driving state.
In this embodiment, if the long-term classification result of the left eye, of the right eye, or of both eyes is closed, the driver may have been dozing for a period of time, and fatigue driving is determined. If the long-term classification result of the mouth is open, the driver may have been yawning for a period of time, and fatigue driving is likewise determined.
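The decision rules above reduce to a simple disjunction, sketched below; note that the fourth rule (both eyes closed) is already covered by the single-eye rules.

```python
# Sketch of the fatigue decision over the long-term classification
# results ("open"/"closed") of the three fatigue detection organs.
def is_fatigue_driving(left_eye: str, right_eye: str, mouth: str) -> bool:
    return left_eye == "closed" or right_eye == "closed" or mouth == "open"
```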
According to the technical scheme of this embodiment, a plurality of consecutive face image frames are acquired; the state classification result of the organ image corresponding to each fatigue detection organ in each frame is combined according to the detection weight of each frame; the long-term classification result of each fatigue detection organ is calculated from the state accumulation results and the long-term classification weights; and fatigue detection is performed on the driver according to those long-term classification results. This solves the prior-art problems of low accuracy, slow detection speed and the inability to detect a driver's fatigue driving state in a timely, accurate and rapid manner, and achieves quick and accurate detection of whether the driver is in a fatigue driving state.
Specific application scenarios
Fig. 2d is a flowchart of a fatigue driving detection method provided in a specific application scenario of the present application, and as shown in fig. 2d, the method includes the steps of:
and S1, carrying out face region detection on the face image frame through the face frame detection model.
The face frame detection model detects face boxes in the acquired image using a deep learning method: basic face features are extracted through six convolutional layers, each of which down-samples the image once; face-box regression is then performed on a fixed number of candidate face boxes of different sizes, finally yielding the coordinates of a plurality of face boxes, from which the face region is determined.
And S2, inputting the face image frame marked with the face area into the face key point detection model to obtain a plurality of face key point coordinates.
S3, according to the face key point coordinates, the two eye corner coordinates of the left eye and of the right eye are obtained respectively, and the eye corner distance and center point coordinates of each eye are calculated; an affine transformation matrix corresponding to each eye is calculated from its eye corner distance and center point coordinates, and the left-eye image and right-eye image are obtained from the face image frame and the affine transformation matrix of each eye.
S4, the two mouth corner coordinates corresponding to the mouth are obtained according to the face key point coordinates, and the mouth corner distance is calculated; a mouth affine transformation matrix is calculated from the mouth corner distance and the mouth center point coordinates among the face key points, and the mouth image is determined in the face image frame according to the mouth affine transformation matrix.
And S5, performing normalization processing on the left-eye image, the right-eye image and the mouth image.
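A sketch of the normalization in S5 is given below, assuming the common convention of scaling pixel values to [0, 1] and standardizing with a fixed mean and standard deviation; the patent does not specify the scheme.

```python
import numpy as np

def normalize(img: np.ndarray, mean: float = 0.5, std: float = 0.5) -> np.ndarray:
    # Assumed scheme: scale uint8 pixels to [0, 1], then mean/std standardise.
    return (img.astype(np.float32) / 255.0 - mean) / std
```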
S6, the left-eye image and the right-eye image are input to the eye state classification model, and state classification results corresponding to the left eye and the right eye are obtained.
Wherein the state classification result comprises opening or closing.
And S7, inputting the mouth image into the mouth state classification model, and acquiring a state classification result corresponding to the mouth.
And S8, detecting fatigue driving of the driver according to the state classification results corresponding to the left eye and the right eye respectively and the state classification results corresponding to the mouth.
If the state classification result corresponding to the mouth is open, it is determined that the driver is fatigue driving.
If the number of consecutive face image frames in which the state classification result of the left eye or the right eye is closed exceeds a preset value, it is determined that the driver is fatigue driving.
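The consecutive-frame rule can be sketched as a simple run counter; the threshold of 10 frames is an illustrative assumption, as the patent leaves the preset value open.

```python
# Sketch of the consecutive-closed-frame rule in S8.
class ClosedEyeCounter:
    def __init__(self, threshold: int = 10):
        self.threshold = threshold
        self.run = 0  # length of the current run of closed-eye frames

    def update(self, left_eye: str, right_eye: str) -> bool:
        """Feed one frame's eye results; return True once fatigue is flagged."""
        self.run = self.run + 1 if "closed" in (left_eye, right_eye) else 0
        return self.run >= self.threshold
```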
According to the technical scheme of this application scenario, a plurality of consecutive face image frames are acquired, the state classification result of the organ image corresponding to each fatigue detection organ in each frame is calculated, and fatigue detection is performed on the driver according to those state classification results. This solves the prior-art problems of low accuracy, slow detection speed and the inability to detect a driver's fatigue driving state in a timely, accurate and rapid manner, and achieves quick and accurate detection of whether the driver is in a fatigue driving state.
Fig. 3 is a schematic structural diagram of a fatigue driving detection apparatus provided in an embodiment of the present application, which may be implemented by software and/or hardware, and is generally integrated in an electronic device and used in cooperation with a camera. The device includes: an organ image acquisition module 310, a state classification result acquisition module 320, a long-term classification result acquisition module 330, and a fatigue detection module 340. Wherein:
an organ image acquisition module 310, configured to acquire, in real time, a face image frame matched with a vehicle driver, and acquire an organ image of at least one fatigue detection organ corresponding to the face image frame;
a state classification result obtaining module 320, configured to obtain state classification results corresponding to the organ images of the face image frame, respectively;
a long-term classification result obtaining module 330, configured to calculate long-term classification results corresponding to the fatigue detection organs, according to state classification results of organ images corresponding to multiple continuously obtained face image frames, respectively;
and the fatigue detection module 340 is configured to perform fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs.
According to the technical scheme of this embodiment, the apparatus acquires a plurality of consecutive face image frames, obtains the state classification results of the organ images corresponding to each fatigue detection organ in each frame, calculates the long-term classification results of the fatigue detection organs, and performs fatigue detection on the driver according to those long-term classification results. This solves the prior-art problems of low accuracy, slow detection speed and the inability to detect a driver's fatigue driving state in a timely, accurate and rapid manner, and achieves quick and accurate detection of whether the driver is in a fatigue driving state.
On the basis of the foregoing embodiment, the long-term classification result obtaining module 330 includes:
the state classification result acquisition unit is used for respectively acquiring the state classification results of the organ images in the current face image frame and the set number of historical face image frames;
a detection weight determining unit for determining detection weights corresponding to the current face image frame and each of the historical face image frames;
the detection weight is negatively correlated with the acquisition time difference of the face image frame, and the acquisition time difference is the time difference between the acquisition time of the face image frame and the current system time;
a state accumulation result determining unit, configured to calculate a state accumulation result of each organ image according to a state classification result of each organ image in the current face image frame and each historical face image frame, and detection weights respectively corresponding to the current face image frame and each historical face image frame;
and the long-term classification result calculating unit is used for calculating the long-term classification result of each fatigue detection organ according to the state accumulation result and the preset long-term classification weight.
On the basis of the above embodiment, the state classification result obtaining module 320 includes:
a first classification result obtaining unit, configured to input each organ image of the face image frame into a state classification model matched with the organ image, and obtain a first classification result corresponding to each organ image;
a second classification result acquisition unit configured to acquire a second classification result corresponding to each of the organ images based on a relative positional relationship between a plurality of organ key points corresponding to each of the organ images;
a state classification result obtaining unit configured to determine a state classification result corresponding to each of the organ images according to the first classification result and the second classification result.
On the basis of the above embodiment, the organ image obtaining module 310 includes:
the face region identification unit is used for identifying a face region in the face image frame;
a face key point acquisition unit, configured to acquire a plurality of face key points in the face region;
and the organ image acquisition unit is used for acquiring at least one organ image of the fatigue detection organ corresponding to the face image frame according to the plurality of face key points.
On the basis of the above embodiment, the organ image of the fatigue detection organ includes: a left eye image corresponding to the left eye, a right eye image corresponding to the right eye, and a mouth image corresponding to the mouth;
the state classification result comprises: open or closed.
On the basis of the above embodiment, the organ image acquisition unit includes:
the left-eye image acquisition subunit is used for calculating the eye corner distance of the left eye according to the two eye corner coordinates corresponding to the left eye among the face key points; calculating a left-eye affine transformation matrix according to the eye corner distance of the left eye and the coordinates of the left-eye center point among the face key points, and determining the left-eye image in the face image frame according to the left-eye affine transformation matrix;
the right-eye image acquisition subunit is used for calculating the eye corner distance of the right eye according to the two eye corner coordinates corresponding to the right eye among the face key points; calculating a right-eye affine transformation matrix according to the eye corner distance of the right eye and the coordinates of the right-eye center point among the face key points, and determining the right-eye image in the face image frame according to the right-eye affine transformation matrix; and
the mouth image acquisition subunit is used for calculating the mouth corner distance of the mouth according to the two mouth corner coordinates corresponding to the mouth among the face key points; calculating a mouth affine transformation matrix according to the mouth corner distance of the mouth and the coordinates of the mouth center point among the face key points, and determining the mouth image in the face image frame according to the mouth affine transformation matrix.
On the basis of the above embodiment, the fatigue detection module 340 includes:
a first fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the left eye is closed;
a second fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the right eye is closed;
a third fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the mouth is open; and
and the fourth fatigue driving state determination unit is used for determining that the driver is in the fatigue driving state if the long-term classification results respectively corresponding to the left eye and the right eye are closed.
The detection device for fatigue driving provided by the embodiment of the application can execute the detection method for fatigue driving provided by any embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 4, is a block diagram of an electronic device of a method of detecting fatigue driving according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 4, the electronic apparatus includes: one or more processors 401, a memory 402, and interfaces for connecting the various components, including high-speed and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 4, one processor 401 is taken as an example.
Memory 402 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to execute the fatigue driving detection method provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the method of detecting fatigue driving provided by the present application.
The memory 402, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as the program instructions/modules corresponding to the fatigue driving detection method in the embodiments of the present application (for example, the organ image acquisition module 310, the state classification result acquisition module 320, the long-term classification result acquisition module 330, and the fatigue detection module 340 shown in fig. 3). The processor 401 executes the various functional applications of the server and performs data processing by running the non-transitory software programs, instructions and modules stored in the memory 402, that is, implements the fatigue driving detection method in the above method embodiments.
The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the electronic device for detection of fatigue driving, and the like. Further, the memory 402 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 402 optionally includes memory located remotely from processor 401, which may be connected to fatigue driving detection electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the detection method of fatigue driving may further include: an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means, and fig. 4 illustrates an example of a connection by a bus.
The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the fatigue driving detecting electronic apparatus, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 404 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system, thereby overcoming the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS services.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added or deleted. For example, the steps described in the present application may be executed in parallel, sequentially or in different orders; the present application is not limited in this respect, as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method of detecting fatigue driving, comprising:
acquiring a face image frame matched with a vehicle driver in real time, and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame;
acquiring state classification results respectively corresponding to the organ images of the face image frame;
calculating long-term classification results respectively corresponding to the fatigue detection organs according to state classification results respectively corresponding to the organ images of the plurality of continuously acquired face image frames;
performing fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs; the long-term classification result is a comprehensive state classification result obtained from a plurality of human face image frames which are respectively corresponding to each fatigue detection organ and are obtained within a set time;
wherein the obtaining of the state classification result corresponding to each organ image of the face image frame comprises:
respectively inputting each organ image of the face image frame into a state classification model matched with the organ image, and acquiring a first classification result corresponding to each organ image;
obtaining a second classification result corresponding to each organ image according to the relative position relation among a plurality of organ key points corresponding to each organ image;
determining a state classification result corresponding to each organ image according to the first classification result and the second classification result;
wherein, the calculating the long-term classification result respectively corresponding to each fatigue detection organ according to the state classification result respectively corresponding to each organ image of the plurality of continuously acquired human face image frames comprises:
respectively acquiring state classification results of the organ images from the current face image frame and a set number of historical face image frames;
determining detection weights respectively corresponding to the current face image frame and each historical face image frame;
wherein the detection weight is negatively correlated with the acquisition time difference of the face image frame, the acquisition time difference being the difference between the acquisition time of the face image frame and the current system time;
calculating the state accumulation result of each organ image according to the state classification result of each organ image in the current face image frame and each historical face image frame and the detection weight respectively corresponding to the current face image frame and each historical face image frame;
calculating a long-term classification result of each fatigue detection organ according to the state accumulation result and a preset long-term classification weight;
wherein the acquiring of the organ image of the at least one fatigue detection organ corresponding to the face image frame comprises: identifying a face region in a face image frame, specifically:
inputting the face image frame into a face frame detection model, and acquiring a plurality of face frame coordinates output by the face frame detection model;
and determining a face region in the face image frame according to the coordinates of each face frame.
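To make the two computational steps at the heart of claim 1 concrete — fusing the model-based first classification result with the key-point-based second result, and accumulating per-frame results into a long-term classification whose detection weights shrink as a frame's acquisition time recedes from the current system time — here is a minimal Python sketch. Every concrete choice (the weighted-average fusion, the exponential decay, the 0.5 thresholds, and names such as fuse_results and LongTermAccumulator) is an illustrative assumption, not something fixed by the claims.

```python
import math
import time
from collections import deque

def fuse_results(first_score, second_score, alpha=0.5):
    """Determine the state classification result from the first (model-based)
    and second (key-point-based) results; a weighted average is one assumed
    fusion strategy."""
    return alpha * first_score + (1 - alpha) * second_score

class LongTermAccumulator:
    """Weighted accumulation of per-frame state results for one fatigue
    detection organ. States are floats in [0, 1], e.g. 1.0 = open, 0.0 = closed."""

    def __init__(self, window=30, decay=0.5, long_term_weight=0.5):
        self.history = deque(maxlen=window)        # (timestamp, state) pairs
        self.decay = decay                         # assumed decay rate per second
        self.long_term_weight = long_term_weight   # assumed classification threshold

    def add_frame(self, state, timestamp=None):
        self.history.append((time.time() if timestamp is None else timestamp, state))

    def long_term_result(self):
        now = time.time()
        # The detection weight is negatively correlated with the acquisition
        # time difference; exponential decay is one way to realize that.
        pairs = [(math.exp(-self.decay * (now - t)), s) for t, s in self.history]
        total = sum(w for w, _ in pairs)
        if total == 0:
            return None
        accumulated = sum(w * s for w, s in pairs) / total   # state accumulation result
        return "open" if accumulated >= self.long_term_weight else "closed"
```

In use, one accumulator would be kept per fatigue detection organ (left eye, right eye, mouth) and fed once per face image frame with the fused state score; face detection and organ cropping are sketched separately after claim 4.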
2. The method of claim 1, wherein obtaining an organ image of at least one fatigue-detecting organ corresponding to the face image frame further comprises:
acquiring a plurality of face key points in the face region;
and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame according to the plurality of face key points.
3. The method of any one of claims 1-2, wherein the organ image of the fatigue detection organ comprises: a left eye image corresponding to the left eye, a right eye image corresponding to the right eye, and a mouth image corresponding to the mouth;
the state classification result comprises: open or closed.
4. The method of claim 2, wherein said obtaining an organ image of at least one fatigue-detected organ corresponding to said face image frame from said plurality of face key points comprises at least one of:
calculating the eye corner distance of the left eye according to the two eye corner coordinates corresponding to the left eye among the face key points; calculating a left-eye affine transformation matrix corresponding to the left eye according to the eye corner distance of the left eye and the coordinates of the center point of the left eye among the face key points, and determining a left-eye image in the portrait acquisition image according to the left-eye affine transformation matrix;
calculating the eye corner distance of the right eye according to the two eye corner coordinates corresponding to the right eye among the face key points; calculating a right-eye affine transformation matrix corresponding to the right eye according to the eye corner distance of the right eye and the coordinates of the center point of the right eye among the face key points, and determining a right-eye image in the portrait acquisition image according to the right-eye affine transformation matrix; and
calculating the mouth corner distance of the mouth according to the two mouth corner coordinates corresponding to the mouth among the face key points; and calculating a mouth affine transformation matrix corresponding to the mouth according to the mouth corner distance of the mouth and the coordinates of the center point of the mouth among the face key points, and determining a mouth image in the portrait acquisition image according to the mouth affine transformation matrix.
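As a concrete illustration of the affine-transform cropping above, the sketch below extracts a fixed-size patch for one organ from its two corner key points and its center point using OpenCV. The output size, the 2x margin around the corner distance, and the rotate-to-horizontal step are assumptions rather than claimed parameters.

```python
import cv2
import numpy as np

def crop_organ(image, corner_a, corner_b, center, out_size=64, margin=2.0):
    """Crop an organ patch (eye or mouth) via an affine transform computed
    from the corner distance and the organ center point."""
    corner_a = np.asarray(corner_a, dtype=np.float32)
    corner_b = np.asarray(corner_b, dtype=np.float32)
    dist = float(np.linalg.norm(corner_b - corner_a))   # corner distance
    scale = out_size / (margin * dist)                  # output pixels per input pixel
    dx, dy = corner_b - corner_a
    angle = float(np.degrees(np.arctan2(dy, dx)))       # align the corner axis horizontally
    # Rotate and scale about the organ center, then shift the center to the
    # middle of the output patch.
    M = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), angle, scale)
    M[0, 2] += out_size / 2.0 - center[0]
    M[1, 2] += out_size / 2.0 - center[1]
    return cv2.warpAffine(image, M, (out_size, out_size))
```

With 68-point landmarks, for example, one might call crop_organ(frame, landmarks[36], landmarks[39], left_eye_center) for the left eye; the landmark indices are hypothetical and depend on the key-point model used.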
5. The method according to claim 3, wherein the fatigue detection of the driver based on the long-term classification results corresponding to the fatigue detection organs respectively comprises at least one of the following:
if the long-term classification result corresponding to the left eye is closed, determining that the driver is in a fatigue driving state;
if the long-term classification result corresponding to the right eye is closed, determining that the driver is in a fatigue driving state;
if the long-term classification result corresponding to the mouth is open, determining that the driver is in a fatigue driving state; and
if the long-term classification results respectively corresponding to the left eye and the right eye are both closed, determining that the driver is in a fatigue driving state.
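A direct transcription of these decision rules into Python might look as follows; the string encodings and the function name are illustrative. Note that the combined both-eyes rule is listed separately in the claim even though, when all rules are enabled as here, it is subsumed by the single-eye rules.

```python
def is_fatigue_driving(left_eye, right_eye, mouth):
    """Apply the claim-5 decision rules to the long-term classification
    results; each argument is the string "open" or "closed"."""
    return (
        left_eye == "closed"                                  # prolonged left-eye closure
        or right_eye == "closed"                              # prolonged right-eye closure
        or mouth == "open"                                    # prolonged yawning
        or (left_eye == "closed" and right_eye == "closed")   # both eyes closed
    )
```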
6. A detection device for fatigue driving, comprising:
the organ image acquisition module is used for acquiring a face image frame matched with a vehicle driver in real time and acquiring an organ image of at least one fatigue detection organ corresponding to the face image frame;
a state classification result acquisition module for acquiring state classification results corresponding to the organ images of the face image frame respectively;
the long-term classification result acquisition module is used for calculating long-term classification results respectively corresponding to the fatigue detection organs according to the state classification results respectively corresponding to the organ images of the plurality of continuously acquired face image frames;
the fatigue detection module is used for performing fatigue detection on the driver according to the long-term classification results respectively corresponding to the fatigue detection organs; wherein the long-term classification result is a comprehensive state classification result for each fatigue detection organ, obtained from a plurality of face image frames acquired within a set time period;
wherein, the state classification result obtaining module includes:
a first classification result obtaining unit, configured to input each organ image of the face image frame into a state classification model matched with the organ image, respectively, and obtain a first classification result corresponding to each organ image;
a second classification result acquisition unit configured to acquire a second classification result corresponding to each of the organ images based on a relative positional relationship between a plurality of organ key points corresponding to each of the organ images;
a state classification result acquisition unit configured to determine a state classification result corresponding to each of the organ images according to the first classification result and the second classification result;
wherein, the long-term classification result obtaining module includes:
the state classification result acquisition unit is used for respectively acquiring the state classification results of the organ images in the current face image frame and a set number of historical face image frames;
a detection weight determining unit for determining detection weights corresponding to the current face image frame and each of the historical face image frames;
wherein the detection weight is negatively correlated with the acquisition time difference of the face image frame, the acquisition time difference being the difference between the acquisition time of the face image frame and the current system time;
a state accumulation result determining unit, configured to calculate a state accumulation result of each organ image according to a state classification result of each organ image in the current face image frame and each historical face image frame, and detection weights respectively corresponding to the current face image frame and each historical face image frame;
the long-term classification result calculating unit is used for calculating the long-term classification result of each fatigue detection organ according to the state accumulation result and the preset long-term classification weight;
wherein the organ image acquisition module comprises a face region recognition unit for:
inputting the face image frame into a face frame detection model, and acquiring a plurality of face frame coordinates output by the face frame detection model; and determining a face region in the face image frame according to the coordinates of each face frame.
7. The apparatus of claim 6, wherein the organ image acquisition module further comprises:
a face key point acquisition unit, configured to acquire a plurality of face key points in the face region;
and the organ image acquisition unit is used for acquiring at least one organ image of the fatigue detection organ corresponding to the face image frame according to the plurality of face key points.
8. The apparatus of any one of claims 6-7, wherein the organ image of the fatigue detection organ comprises: a left eye image corresponding to the left eye, a right eye image corresponding to the right eye, and a mouth image corresponding to the mouth;
the state classification result comprises: open or closed.
9. The apparatus of claim 7, wherein the organ image acquisition unit comprises:
the left-eye image acquisition subunit is used for calculating the eye corner distance of the left eye according to the two eye corner coordinates corresponding to the left eye among the face key points; calculating a left-eye affine transformation matrix corresponding to the left eye according to the eye corner distance of the left eye and the coordinates of the center point of the left eye among the face key points, and determining a left-eye image in the portrait acquisition image according to the left-eye affine transformation matrix;
the right-eye image acquisition subunit is used for calculating the eye corner distance of the right eye according to the two eye corner coordinates corresponding to the right eye among the face key points; calculating a right-eye affine transformation matrix corresponding to the right eye according to the eye corner distance of the right eye and the coordinates of the center point of the right eye among the face key points, and determining a right-eye image in the portrait acquisition image according to the right-eye affine transformation matrix; and
the mouth image acquisition subunit is used for calculating the mouth corner distance of the mouth according to the two mouth corner coordinates corresponding to the mouth among the face key points; and calculating a mouth affine transformation matrix corresponding to the mouth according to the mouth corner distance of the mouth and the coordinates of the center point of the mouth among the face key points, and determining a mouth image in the portrait acquisition image according to the mouth affine transformation matrix.
10. The apparatus of claim 8, wherein the fatigue detection module comprises:
a first fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the left eye is closed;
a second fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the right eye is closed;
a third fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification result corresponding to the mouth is open; and
a fourth fatigue driving state determination unit configured to determine that the driver is in a fatigue driving state if the long-term classification results respectively corresponding to the left eye and the right eye are both closed.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of detecting fatigue driving of any one of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for detecting fatigue driving according to any one of claims 1 to 5.
CN202010601635.2A 2020-06-28 2020-06-28 Fatigue driving detection method and device, computer equipment and storage medium Active CN111645695B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010601635.2A CN111645695B (en) 2020-06-28 2020-06-28 Fatigue driving detection method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010601635.2A CN111645695B (en) 2020-06-28 2020-06-28 Fatigue driving detection method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111645695A CN111645695A (en) 2020-09-11
CN111645695B true CN111645695B (en) 2022-08-09

Family

ID=72352100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010601635.2A Active CN111645695B (en) 2020-06-28 2020-06-28 Fatigue driving detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111645695B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232175B (en) * 2020-10-13 2022-06-07 南京领行科技股份有限公司 Method and device for identifying state of operation object
CN112528792A (en) * 2020-12-03 2021-03-19 深圳地平线机器人科技有限公司 Fatigue state detection method, fatigue state detection device, fatigue state detection medium, and electronic device
CN114081496A (en) * 2021-11-09 2022-02-25 中国第一汽车股份有限公司 Test system, method, equipment and medium for driver state monitoring device
CN114663863A (en) * 2022-02-24 2022-06-24 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and computer storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327801A (en) * 2015-07-07 2017-01-11 北京易车互联信息技术有限公司 Method and device for detecting fatigue driving
CN107194346A (en) * 2017-05-19 2017-09-22 福建师范大学 Vehicle fatigue driving prediction method
CN107704805A (en) * 2017-09-01 2018-02-16 深圳市爱培科技术股份有限公司 Fatigue driving detection method, drive recorder and storage device
CN108309311A (en) * 2018-03-27 2018-07-24 北京华纵科技有限公司 Real-time drowsiness detection device and detection algorithm for train drivers
CN109840565A (en) * 2019-01-31 2019-06-04 成都大学 Blink detection method based on the aspect ratio of eye contour feature points
CN109919049A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 Fatigue detection method based on deep-learning face modeling
CN110532887A (en) * 2019-07-31 2019-12-03 郑州大学 Fatigue driving detection method and system based on facial feature fusion
CN110674701A (en) * 2019-09-02 2020-01-10 东南大学 Rapid driver fatigue state detection method based on deep learning
CN110826521A (en) * 2019-11-15 2020-02-21 爱驰汽车有限公司 Driver fatigue state recognition method, system, electronic device, and storage medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9198575B1 (en) * 2011-02-15 2015-12-01 Guardvant, Inc. System and method for determining a level of operator fatigue
CN106203293A (en) * 2016-06-29 2016-12-07 广州鹰瞰信息科技有限公司 Method and apparatus for detecting fatigue driving
CN107977607A (en) * 2017-11-20 2018-05-01 安徽大学 Fatigue driving monitoring method based on machine vision
CN109905590B (en) * 2017-12-08 2021-04-27 腾讯科技(深圳)有限公司 Video image processing method and device
CN110688874B (en) * 2018-07-04 2022-09-30 杭州海康威视数字技术股份有限公司 Facial expression recognition method and device, readable storage medium and electronic equipment
CN109117810A (en) * 2018-08-24 2019-01-01 深圳市国脉畅行科技股份有限公司 Fatigue driving behavior detection method, apparatus, computer equipment and storage medium
CN109190600A (en) * 2018-10-18 2019-01-11 知行汽车科技(苏州)有限公司 Driver monitoring system based on a vision sensor
CN111160071B (en) * 2018-11-08 2023-04-07 杭州海康威视数字技术股份有限公司 Fatigue driving detection method and device
CN109815937A (en) * 2019-02-25 2019-05-28 湖北亿咖通科技有限公司 Intelligent fatigue state identification method, device and electronic equipment
CN109886241A (en) * 2019-03-05 2019-06-14 天津工业大学 Driver fatigue detection based on long short-term memory networks
CN110147742B (en) * 2019-05-08 2024-04-16 腾讯科技(深圳)有限公司 Key point positioning method, device and terminal
CN110399837B (en) * 2019-07-25 2024-01-05 深圳智慧林网络科技有限公司 User emotion recognition method, device and computer readable storage medium
CN110532976B (en) * 2019-09-03 2021-12-31 湘潭大学 Fatigue driving detection method and system based on machine learning and multi-feature fusion
CN111161395B (en) * 2019-11-19 2023-12-08 深圳市三维人工智能科技有限公司 Facial expression tracking method and device and electronic equipment
CN111191573A (en) * 2019-12-27 2020-05-22 中国电子科技集团公司第十五研究所 Driver fatigue detection method based on blink rule recognition
CN111178272B (en) * 2019-12-30 2023-04-18 东软集团(北京)有限公司 Method, device and equipment for identifying driver behavior
CN111209818A (en) * 2019-12-30 2020-05-29 新大陆数字技术股份有限公司 Video individual identification method, system, equipment and readable storage medium
CN111178341B (en) * 2020-04-10 2021-01-26 支付宝(杭州)信息技术有限公司 Living body detection method, device and equipment

Also Published As

Publication number Publication date
CN111645695A (en) 2020-09-11

Similar Documents

Publication Publication Date Title
CN111645695B (en) Fatigue driving detection method and device, computer equipment and storage medium
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
CN109145680B (en) Method, device and equipment for acquiring obstacle information and computer storage medium
CN111566612A (en) Visual data acquisition system based on posture and sight line
CN111797657A (en) Vehicle peripheral obstacle detection method, device, storage medium, and electronic apparatus
CN106257489A (en) Expression recognition method and system
CN106845416B (en) Obstacle identification method and device, computer equipment and readable medium
CN110659600B (en) Object detection method, device and equipment
CN112287795B (en) Abnormal driving gesture detection method, device, equipment, vehicle and medium
JPWO2015025704A1 (en) Video processing apparatus, video processing method, and video processing program
CN110765807A (en) Driving behavior analysis method, driving behavior processing method, driving behavior analysis device, driving behavior processing device and storage medium
CN111723768A (en) Method, device, equipment and storage medium for vehicle weight recognition
US20210110168A1 (en) Object tracking method and apparatus
CN108846336B (en) Target detection method, device and computer readable storage medium
US11087224B2 (en) Out-of-vehicle communication device, out-of-vehicle communication method, information processing device, and computer readable medium
US11758096B2 (en) Facial recognition for drivers
WO2015057263A1 (en) Dynamic hand gesture recognition with selective enabling based on detected hand velocity
CN113591573A (en) Training and target detection method and device for multi-task learning deep network model
CN111950348A (en) Method and device for identifying wearing state of safety belt, electronic equipment and storage medium
CN110717933A (en) Post-processing method, device, equipment and medium for moving object missed detection
CN105159452A (en) Control method and system based on estimation of human face posture
CN110595490A (en) Preprocessing method, device, equipment and medium for lane line perception data
CN111652153A (en) Scene automatic identification method and device, unmanned vehicle and storage medium
CN111523515A (en) Method and device for evaluating environment cognitive ability of automatic driving vehicle and storage medium
CN111767831A (en) Method, apparatus, device and storage medium for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant