CN105769120B - Method for detecting fatigue driving and device - Google Patents

Method for detecting fatigue driving and device

Info

Publication number
CN105769120B
Authority
CN
China
Prior art keywords
driver
image
human
fatigue driving
human eye
Prior art date
Legal status
Active
Application number
CN201610056984.4A
Other languages
Chinese (zh)
Other versions
CN105769120A (en)
Inventor
杨铭
白涛
都大龙
黄畅
Current Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd filed Critical Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201610056984.4A
Publication of CN105769120A
Application granted
Publication of CN105769120B
Current legal status: Active

Links

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/68 - Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801 - Arrangements of detecting, measuring or recording means specially adapted to be attached to or worn on the body surface
    • A61B 5/6813 - Specially adapted to be attached to a specific body part
    • A61B 5/6825 - Hand
    • A61B 5/6826 - Finger
    • A61B 2503/00 - Evaluating a particular growth phase or type of persons or animals
    • A61B 2503/20 - Workers
    • A61B 2503/22 - Motor vehicles operators, e.g. drivers, pilots, captains

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Traffic Control Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a driver fatigue driving detection method and device. The method comprises: receiving a captured frontal image of the driver; performing face detection in the captured image; and further locating the eyes and/or mouth within the detected face. The method further comprises: locating the detected eyes and/or mouth based on a deep neural network model and identifying the state of the eyes; and tracking changes in the eye state across multiple frames to judge whether the driver is fatigued. With the above detection method and device, fatigue driving can be detected in real time, robustly, and accurately.

Description

Fatigue driving detection method and device
Technical Field
The disclosure relates generally to the technical field of safe driving of automobiles, and in particular relates to a fatigue driving detection method and device based on a deep neural network.
Background
With social and economic development, the number of motor vehicles has increased dramatically, and traffic accidents caused by fatigue driving are on the rise. Various fatigue driving detection techniques have been developed in response, including techniques based on the driver's physiological characteristics. A tired driver exhibits physiological signs such as head nodding and an increased eye-closure frequency. By monitoring these physiological characteristics, a detection device can judge whether the driver is fatigued. Detection based on the driver's physiological characteristics is non-contact, low cost, and highly accurate, and is therefore widely adopted in current fatigue driving detection devices.
Most current detection devices of this kind locate the face with image processing techniques, analyze the eye state within the face region, and judge whether fatigue occurs. Two approaches are common. The first uses a visible-light camera to obtain eye opening and closing information for detection and recognition, relying on the appearance of the eyes, mouth, nose, and other facial organs and on the geometric relationships between them. The second is based on a deep learning model: a face pattern space is constructed from a large number of face image samples, and the presence of a face is judged by similarity. Although the latter approach improves accuracy, its heavy computation prevents real-time operation on embedded devices and limits its practicality.
Disclosure of Invention
The following presents a simplified summary of the invention in order to provide a basic understanding of some of its aspects. This summary is not an exhaustive overview of the invention; it is not intended to identify key or critical elements of the invention, nor to delimit its scope. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.
The invention provides a method for detecting fatigue driving accurately in real time and with high robustness.
In a first aspect of the present invention, the present invention provides a fatigue driving detection method, including: receiving a collected front image of a driver; carrying out face detection in the acquired image; further locating the eyes and/or mouth of the person in the detected face; wherein the method further comprises:
based on the deep neural network model, positioning the detected human eyes and/or mouth and identifying the state of the human eyes;
and tracking the state change of human eyes in the multi-frame images and judging whether the driver is tired.
Preferably, in the above fatigue driving detection method, locating the detected eyes and/or mouth based on the deep neural network model comprises: inputting the face image into a facial feature point regression convolutional neural network model, which regresses the boundary positions of the eyes and/or mouth in a down-sampled low-resolution image.
Preferably, the above method further comprises cropping a binocular region image from the down-sampled low-resolution image, inputting it into an eye-region segmentation deep neural network model, predicting a probability map of the eye region on another down-sampled low-resolution image, segmenting the image pixels belonging to the eye region, and determining the open or closed state of the eyes from those pixels.
Preferably, in the above method, the change in the eye state includes a change in the degree of eye opening. Preferably, in the above method, the change in the eye state includes a change in the eye opening and closing frequency.
Preferably, the fatigue driving detection method further comprises further locating a nose in the detected face, and locating the detected nose based on a deep neural network model.
Preferably, the fatigue driving detection method further comprises image acquisition by using a monocular infrared camera.
Preferably, the fatigue driving detection method further comprises locating a mouth region in the down-sampled low-resolution image, cropping out a mouth region image, inputting it into a mouth-region segmentation deep neural network model, and classifying the mouth as open or closed.
Preferably, the fatigue driving detection method further comprises issuing warning prompts of different levels according to the detected fatigue condition.
In a second aspect of the present invention, the present invention also provides a driver fatigue driving detecting device, comprising face detecting means for detecting a face of a driver in received moving images of the driver, eye positioning means for positioning eyes of the driver in the face to determine a state of the eyes of the driver, and fatigue judging means for judging whether the driver is fatigued based on the state of the eyes of the driver; wherein,
the human eye positioning device positions the detected human eyes and/or mouths and identifies the states of the human eyes based on the deep neural network model;
the fatigue judging device tracks the change of the state of the human eyes in the multi-frame images and judges whether the driver is tired.
Preferably, the eye positioning device comprises a coarse eye positioning device for inputting the face image into a facial feature point regression convolutional neural network model and regressing the boundary positions of the eyes and/or mouth in a down-sampled low-resolution image, so as to identify the eye state.
Preferably, the eye positioning device further comprises a precise eye positioning device configured to crop a binocular region image from the down-sampled low-resolution image, input it into an eye-region segmentation deep neural network model, predict a probability map of the eye region on another down-sampled low-resolution image, segment the image pixels belonging to the eye region, and determine the open or closed state of the eyes from those pixels.
Preferably, the driver fatigue detection device further comprises a precise mouth positioning device for locating a mouth region in the down-sampled low-resolution image, cropping out a mouth region image, inputting it into a mouth-region segmentation deep neural network model, and classifying the mouth as open or closed.
Preferably, the driver fatigue driving detection device further comprises a fatigue driving alarm device, and the fatigue driving alarm device is used for sending alarm information to the driver according to the fatigue driving degree judged by the driver fatigue driving detection device.
Preferably, the driver fatigue driving detection device further comprises an image acquisition device, and the image acquisition device is used for acquiring a frontal image of the driver.
With the above detection method and device, an image-based driver fatigue detection device that is fast, accurate, and robust can be realized at low cost, so that an embedded device can determine in real time, without any human-computer interaction, whether the driver is fatigued, and can provide a safety prompt or early warning to the driver.
Drawings
The above and other objects, features and advantages of the present invention will be more readily understood by reference to the following description of the embodiments of the present invention taken in conjunction with the accompanying drawings. The components in the figures are meant to illustrate the principles of the present invention. In the drawings, the same or similar technical features or components will be denoted by the same or similar reference numerals.
FIG. 1 is a flow chart of a driver fatigue driving detection method according to the present invention;
FIG. 2 is a schematic diagram of the components of a driver fatigue driving detection apparatus according to the present invention;
fig. 3 and 4 show schematic views of positions where the driver fatigue driving detecting device according to the present invention is installed.
Detailed Description
Embodiments of the present invention are described below with reference to the drawings. Elements and features depicted in one drawing or one embodiment of the invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that the figures and description omit representation and description of components and processes that are not relevant to the present invention and that are known to those of ordinary skill in the art for the sake of clarity.
Fig. 1 is a flowchart of a fatigue driving detection method according to an embodiment of the present invention. The method comprises the following steps:
First, in step S101, frontal moving images of the driver captured by a camera are received, for example at a resolution of 640 × 480, 1280 × 720, or 1920 × 1080 pixels.
Here, the camera may be an ordinary visible-light camera or a monocular infrared camera; an infrared camera improves image clarity under low-light or nighttime driving conditions.
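As a purely illustrative sketch (not part of the patent), frames at one of these resolutions could be grabbed with OpenCV; the camera index and the 1280 × 720 setting are arbitrary example choices:

```python
# Illustrative only: grab one frontal frame of the driver with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                   # monocular camera (IR or visible light)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)     # one of the resolutions named in the text
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()                      # one frame facing the driver
if ok:
    print(frame.shape)                      # e.g. (720, 1280, 3)
cap.release()
```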
Then, in step S102, face detection is performed on the captured driver moving image.
Face detection identifies the driver's face within the whole image frame and is the basis for the further localization of the eyes and mouth. Face detection may use various existing detection and recognition techniques, such as skin color segmentation or shape detection. Preferably, a multi-stage cascade classification algorithm is adopted.
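For illustration only, a cascade-classifier face detector of the kind mentioned above could look like the following sketch; the patent does not prescribe this particular library, model file, or parameter values:

```python
# Illustrative sketch: multi-stage cascade face detection with OpenCV's
# bundled frontal-face Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    """Return the largest detected face box (x, y, w, h) or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])   # the largest box is taken as the driver
```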
Next, in step S103, the eyes and mouth are further located in the detected face image. According to the method of the invention, this further localization step may use a deep-neural-network-based method so that the eye state can be recognized from the located eyes.
Finally, in step S104, the change of the eye state in the face images of multiple frames is continuously tracked, and whether the driver is tired and the degree of fatigue are determined.
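The four steps can be read as a per-frame loop. The sketch below is only an illustration of that flow; detect_face, locate_eyes, eye_state, and update_fatigue are hypothetical placeholders for the detectors and the fatigue judgment described in the following paragraphs:

```python
# Illustrative per-frame loop tying steps S101-S104 together.
def fatigue_monitor(frames, detect_face, locate_eyes, eye_state, update_fatigue):
    """The four callables are placeholders for the face detector, eye locator,
    eye-state classifier and fatigue judge; they are not defined by the patent."""
    history = []                          # per-frame eye states (tracked in S104)
    for frame in frames:                  # S101: received frontal images
        face = detect_face(frame)         # S102: face detection
        if face is None:
            continue
        eyes = locate_eyes(frame, face)   # S103: DNN-based eye (and mouth) localization
        history.append(eye_state(eyes))   # open/closed state of the eyes
        level = update_fatigue(history)   # S104: judge fatigue from the multi-frame history
        if level is not None:
            yield level                   # e.g. "light" / "moderate" / "heavy"
```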
Preferably, locating the detected eyes and/or mouth and identifying the eye state comprises coarse localization and fine localization of the eyes. Coarse localization comprises: inputting the face image into a facial feature point regression convolutional neural network model and regressing the boundary positions of the eyes and/or mouth in a down-sampled low-resolution image, so as to identify the eye state. For example, in the down-sampled low-resolution image, the eye and/or mouth may correspond to an image block with a resolution of 72 × 72. In one embodiment, the face image is input to a convolutional neural network, which outputs two pairs of two-dimensional coordinates characterizing the positions of the two eyes. In another embodiment, the face image is input to a convolutional neural network, which outputs three pairs of two-dimensional coordinates characterizing the two eyes and the mouth. The eye boundary positions may also be located with existing image analysis methods, such as pupil segmentation or shape detection, to determine the eye state. The eye state determined here includes the pupil area, the degree of eye opening, and changes in the degree of eye opening, for example the normalized eye area or the relative motion of the pupil region.
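As a rough illustration of the coarse-localization idea (not the patent's actual network), a small convolutional network could regress three (x, y) pairs for the two eyes and the mouth from a down-sampled face crop; the 72 × 72 input follows the example above, while the layer sizes are assumptions:

```python
# Illustrative PyTorch sketch of facial-feature-point regression.
import torch
import torch.nn as nn

class LandmarkRegressor(nn.Module):
    def __init__(self, n_points=3):                  # two eyes + mouth
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 72 -> 36
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 36 -> 18
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 18 -> 9
        )
        self.head = nn.Linear(64 * 9 * 9, n_points * 2)   # (x, y) per landmark

    def forward(self, face_72x72):                    # shape (B, 1, 72, 72)
        x = self.features(face_72x72).flatten(1)
        return self.head(x).view(-1, 3, 2)            # normalized landmark coordinates

coords = LandmarkRegressor()(torch.rand(1, 1, 72, 72))   # -> tensor of shape (1, 3, 2)
```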
Preferably, the eyes can be further precisely located on the basis of the coarse localization. Precise localization of the detected eyes may further comprise: based on the coarse localization result, cropping a binocular region image from the down-sampled low-resolution image, inputting it into an eye-region segmentation deep neural network model, predicting a probability map of the eye region on another down-sampled low-resolution image, segmenting the image pixels belonging to the eye region, and judging the open or closed state of the eyes from those pixels. For example, the images of the driver's two eyes are cropped onto another down-sampled image with a low resolution of 64 × 64 so as to judge changes in the eye state more accurately.
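A minimal sketch of this fine-localization step, under assumed layer sizes: a tiny encoder-decoder predicts a per-pixel eye-region probability map on a 64 × 64 eye crop, and the open or closed state is read off from how many pixels exceed a threshold (the 0.5 and 80-pixel values are arbitrary illustrations):

```python
# Illustrative eye-region segmentation sketch in PyTorch.
import torch
import torch.nn as nn

class EyeSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),  # 32 -> 64
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, eye_crop):                       # (B, 1, 64, 64)
        return torch.sigmoid(self.decode(self.encode(eye_crop)))  # eye-region probability map

def eyes_open(prob_map, thresh=0.5, min_open_pixels=80):
    """Call the eye 'open' if enough pixels are confidently inside the eye region."""
    return int((prob_map > thresh).sum()) >= min_open_pixels
```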
Preferably, during precise eye localization, a mouth region image is also cropped according to the mouth localization result obtained during coarse localization and input into a mouth-region segmentation deep neural network model to classify the mouth as open or closed. Analyzing the mouth open/closed state together with the eye open/closed state yields a more accurate fatigue detection result.
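Similarly, the mouth open/closed classification could be sketched as a small CNN over the cropped mouth region; the 64 × 64 input and layer sizes are assumptions, not the patent's network:

```python
# Illustrative mouth open/closed classifier sketch in PyTorch.
import torch
import torch.nn as nn

class MouthClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),               # logits: [closed, open]
        )

    def forward(self, mouth_crop):                    # (B, 1, 64, 64)
        return self.net(mouth_crop)

is_open = MouthClassifier()(torch.rand(1, 1, 64, 64)).argmax(1).item() == 1
```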
Preferably, the driver fatigue detection method further includes tracking changes of the eye state across multiple frames, so as to analyze changes in the eye opening and closing frequency and assist in detecting changes in the driver's degree of fatigue.
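One way to track the eye state over a sliding window of frames is sketched below; the closure ratio it returns is a PERCLOS-style measure, and the frame rate and window length are illustrative assumptions:

```python
# Illustrative multi-frame eye-state tracker.
from collections import deque

class EyeStateTracker:
    def __init__(self, fps=25, window_s=60):
        self.states = deque(maxlen=fps * window_s)     # True = closed, one entry per frame

    def update(self, eye_closed: bool) -> float:
        self.states.append(eye_closed)
        return sum(self.states) / len(self.states)     # fraction of the window with eyes closed

    def blink_count(self) -> int:
        """Count closed-to-open transitions within the window."""
        s = list(self.states)
        return sum(1 for a, b in zip(s, s[1:]) if a and not b)
```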
Optionally, the above detection also locates additional facial feature points in the detected face, such as the nose and mouth, which improves the accuracy of eye localization with the deep neural network model and helps speed up eye localization. In this case, the face image is input to the deep neural network model, which outputs four pairs of two-dimensional coordinates representing the positions of the two eyes, the mouth, and the nose.
The above method may further determine whether the driver is fatigued in combination with an analysis of the mouth state. For example, by comparing consecutive frames, changes in the eye state are analyzed together with changes in the degree of mouth opening or the frequency of yawning to assist in judging whether the driver is tired.
The driver's degree of fatigue is judged from the degree and frequency of eye and/or mouth opening and closing; the fatigue parameters used may be extracted according to existing standards.
When judging the eye state, the open/closed state of the eyes may be judged on the basis of the coarse eye localization alone, so that a driver fatigue judgment can be obtained quickly.
In addition, the face or eye detection results of several preceding and following frames can be correlated, which reduces false detections and improves detection accuracy.
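A simple illustration of such frame-to-frame correlation, under an assumed 3-frame persistence rule: a detection is accepted only once it has been seen in several consecutive frames, which suppresses single-frame false positives:

```python
# Illustrative temporal filter over neighbouring frames.
from collections import deque

class TemporalFilter:
    def __init__(self, min_consecutive=3):
        self.recent = deque(maxlen=min_consecutive)

    def confirm(self, detected: bool) -> bool:
        self.recent.append(detected)
        # only report a detection once it has persisted for every recent frame
        return len(self.recent) == self.recent.maxlen and all(self.recent)
```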
The fatigue driving detection method also includes setting the detection sensitivity and issuing warning prompts of different levels according to the detected fatigue condition of the driver. For example, if the eye area becomes too small, i.e. the eyes stay closed for 10 seconds, or the pupils show no relative movement for 10 seconds, the state is set to "light fatigue" and a primary voice-prompt alarm is started; if the eyes stay closed for 20 seconds, the state is set to "moderate fatigue" and an intermediate alarm is started; if the eyes stay closed for 30 seconds, the state is set to "heavy fatigue" and a sustained alarm at maximum volume is started, and so on.
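The graded warning logic described above could be sketched as follows, using the 10 s / 20 s / 30 s thresholds from the example; the alarm actions are placeholders:

```python
# Illustrative mapping from eye-closure duration to warning level.
def warning_level(closed_duration_s: float) -> str:
    if closed_duration_s >= 30:
        return "heavy fatigue: sustained alarm at maximum volume"
    if closed_duration_s >= 20:
        return "moderate fatigue: intermediate alarm"
    if closed_duration_s >= 10:
        return "light fatigue: primary voice-prompt alarm"
    return "normal"
```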
In the above method, when the convolutional neural network model is built, multi-dimensional array data such as RGB multi-channel image data is processed by a multi-layer non-linear network, for example convolutional layers, pooling layers, and fully connected layers, to obtain semantic features of the image at different stages, which are used for image detection, classification, and recognition. For example, in the training stage, a large amount of face data is collected, the eye and mouth regions are labelled, and the model parameters are optimized with supervised learning and the back-propagation algorithm to obtain a robust and accurate neural network model.
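A minimal supervised-training sketch for the landmark regressor shown earlier, assuming a DataLoader that yields labelled face crops and eye/mouth coordinates; the optimizer, loss, learning rate, and epoch count are illustrative choices, not the patent's training recipe:

```python
# Illustrative supervised training loop with back-propagation.
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                       # regression loss on (x, y) landmark labels
    for _ in range(epochs):
        for faces, landmarks in loader:          # (B,1,72,72) crops, (B,3,2) labelled coordinates
            opt.zero_grad()
            loss = loss_fn(model(faces), landmarks)
            loss.backward()                      # back-propagation of the error
            opt.step()                           # gradient update of the model parameters
    return model
```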
With the above driver fatigue detection method, a low-cost, fast, accurate, and robust image-based driver fatigue detection device can be realized, so that an embedded device can determine in real time, without any human-computer interaction, whether the driver is fatigued, and can provide a safety prompt or early warning to the driver.
The present invention also provides a driver fatigue detection device 1, as shown in fig. 2, including a face detection device 12, an eye positioning device 13, and a fatigue determination device 14. An image acquisition device 11 for capturing moving images of the driver may be provided outside the driver fatigue detection device 1. The face detection device 12 detects the driver's face in the received driver images, the eye positioning device 13 locates the driver's eyes in the face to determine their state, and the fatigue determination device 14 judges whether the driver is fatigued based on the eye state. The eye positioning device 13 locates the detected eyes and/or mouth based on the deep neural network model and identifies the eye state; the fatigue determination device 14 tracks changes of the eye state across multiple frames and judges whether the driver is fatigued.
Preferably, in the above driver fatigue detection device, the eye positioning device comprises a coarse eye positioning device. The coarse eye positioning device inputs the face image into a facial feature point regression convolutional neural network model and regresses the boundary positions of the eyes and/or mouth in a down-sampled low-resolution image, so as to identify the eye state.
Preferably, in the above device, the eye positioning device further comprises a precise eye positioning device. The precise eye positioning device crops a binocular region image from the down-sampled low-resolution image, inputs it into an eye-region segmentation deep neural network model, predicts a probability map of the eye region on another down-sampled low-resolution image, segments the image pixels belonging to the eye region, and judges the open or closed state of the eyes from those pixels.
Preferably, the driver fatigue detection device further comprises a precise mouth positioning device, which locates the mouth region in the down-sampled low-resolution image, crops out a mouth region image, inputs it into a mouth-region segmentation deep neural network model, and classifies the mouth as open or closed.
Preferably, the driver fatigue detection device further comprises a fatigue driving alarm device for issuing alarm information to the driver according to the degree of fatigue determined by the detection device. The alarm information includes an acoustic and/or optical alarm. For example, the alarm device may include a horn and a warning lamp; once fatigue driving is determined, the horn sounds and/or the warning lamp flashes.
Alternatively, the image capturing device may be integrated into the driver fatigue driving detection device.
The image acquisition device, face detection device, eye positioning device, and fatigue determination device can be implemented as electronic hardware circuits; the face detection device, eye positioning device, and fatigue determination device can also be implemented as software running on an embedded hardware platform. For example, they may each be implemented by one or more ASICs and/or FPGAs, or a combination thereof, or as software function modules running on an ARM or x86 platform. The functional modules may also be recombined or integrated as convenient for the hardware or software partitioning.
The driver fatigue detection device according to the invention may adopt a split structure, in which the image acquisition device is preferably mounted on the windshield or on the vehicle roof at the upper left or upper right front of the driver's position. An integrated structure may also be embedded at different positions in the vehicle, such as the rear-view mirror or the center of the dashboard. Figs. 3 and 4 show schematic views of installation positions of the device. In fig. 3 the device is installed at a suitable position at the center of the instrument panel, capturing the driver's front through the steering wheel; in fig. 4 it is installed on the left side of the rear-view mirror, facing the driver. The driver can adjust the warning sensitivity and alarm volume according to the actual application scenario.
The driver fatigue detection method and device can judge the driver's fatigue state quickly and in real time, and the embedded software and hardware can be integrated into various driver-monitoring equipment, for example in cars, buses, high-speed trains, and airplanes.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described may still be modified, or some of their technical features equivalently replaced, without departing from the spirit and scope of the corresponding technical solutions of the embodiments of the invention.

Claims (11)

1. A method of detecting fatigue driving, comprising:
receiving a collected front image of a driver;
carrying out face detection in the acquired image;
further positioning human eyes in the detected human face;
characterized in that the method further comprises:
based on the deep neural network model, positioning the detected human eyes and identifying the states of the human eyes; and
tracking the state change of human eyes in the multi-frame images and judging whether a driver is tired or not;
the locating the detected human eye based on the deep neural network model comprises:
inputting the face image into a face characteristic point regression convolution neural network model, and performing regression prediction on the boundary position of the human eyes in a down-sampled low-resolution image; and
cutting out a double-eye area image as another down-sampled low-resolution image from the down-sampled low-resolution image, inputting an eye area segmentation depth neural network model, predicting a probability map belonging to a human eye area on the other down-sampled low-resolution image, segmenting image pixels contained in the human eye area, and judging the opening and closing state of human eyes according to the image pixels.
2. The fatigue driving detection method of claim 1, wherein the change in the state of the human eye comprises a change in the size of the human eye opening.
3. The fatigue driving detection method according to claim 1, wherein the change in the state of the human eye includes a change in a human eye opening and closing frequency.
4. The fatigue driving detection method of claim 1, further comprising further locating a nose in the detected face, and locating the detected nose based on a deep neural network model.
5. The fatigue driving detection method according to claim 1, further comprising image acquisition using a monocular infrared camera.
6. The fatigue driving detection method according to claim 1, further comprising locating a mouth region from the down-sampled low resolution image, cropping out a mouth region image, inputting a mouth region segmentation depth neural network model, and classifying to determine the mouth opening/closing state.
7. The fatigue driving detection method according to claim 1, further comprising making different levels of warning prompts according to different conditions of detected fatigue.
8. A driver fatigue driving detecting device comprising face detecting means for detecting a face of a driver in received driver moving images, eye positioning means for positioning eyes of the driver in the face to determine a state of the eyes of the driver, and fatigue judging means for judging whether the driver is fatigued based on the state of the eyes of the driver; characterized in that
the human eye positioning device positions the detected human eyes based on the deep neural network model and identifies the states of the human eyes;
the fatigue judging device tracks the change of the state of human eyes in the multi-frame images and judges whether the driver is tired or not;
the human eye positioning device comprises a human eye rough positioning device, wherein the human eye rough positioning device is used for inputting a human face image into a human face characteristic point regression convolution neural network model, and performing regression prediction on the boundary position of human eyes in a down-sampled low-resolution image so as to identify the state of the human eyes;
the human eye positioning device further comprises a human eye accurate positioning device, wherein the human eye accurate positioning device is used for cutting out a double-eye area image serving as another downsampled low-resolution image from the downsampled low-resolution image, inputting an eye area segmentation depth neural network model, predicting a probability map belonging to a human eye area on the other downsampled low-resolution image, segmenting image pixels contained in the human eye area, and judging the opening and closing state of human eyes according to the image pixels.
9. The driver fatigue driving detection apparatus according to claim 8, further comprising a mouth region pinpointing device for locating a mouth region from the down-sampled low resolution image, cropping out a mouth region image, inputting a mouth region segmentation depth neural network model, and classifying and judging the mouth opening/closing state.
10. The driver fatigue driving detection apparatus according to claim 8, further comprising a fatigue driving warning device for giving warning information to the driver according to the degree of fatigue driving determined by the driver fatigue driving detection apparatus.
11. The driver fatigue driving detecting device according to claim 8, further comprising an image capturing device for capturing a frontal image of the driver.
CN201610056984.4A 2016-01-27 2016-01-27 Method for detecting fatigue driving and device Active CN105769120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610056984.4A CN105769120B (en) 2016-01-27 2016-01-27 Method for detecting fatigue driving and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610056984.4A CN105769120B (en) 2016-01-27 2016-01-27 Method for detecting fatigue driving and device

Publications (2)

Publication Number Publication Date
CN105769120A CN105769120A (en) 2016-07-20
CN105769120B true CN105769120B (en) 2019-01-22

Family

ID=56402500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610056984.4A Active CN105769120B (en) 2016-01-27 2016-01-27 Method for detecting fatigue driving and device

Country Status (1)

Country Link
CN (1) CN105769120B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446811A (en) * 2016-09-12 2017-02-22 北京智芯原动科技有限公司 Deep-learning-based driver's fatigue detection method and apparatus
CN106599821A (en) * 2016-12-07 2017-04-26 中国民用航空总局第二研究所 Controller fatigue detection method and system based on BP neural network
CN107038422B (en) * 2017-04-20 2020-06-23 杭州电子科技大学 Fatigue state identification method based on space geometric constraint deep learning
CN108932461A (en) * 2017-05-27 2018-12-04 杭州海康威视数字技术股份有限公司 A kind of fatigue detection method and device
CN109803583A (en) * 2017-08-10 2019-05-24 北京市商汤科技开发有限公司 Driver monitoring method, apparatus and electronic equipment
CN107657236A (en) * 2017-09-29 2018-02-02 厦门知晓物联技术服务有限公司 Vehicle security drive method for early warning and vehicle-mounted early warning system
CN107977605B (en) * 2017-11-08 2020-04-24 清华大学 Eye region boundary feature extraction method and device based on deep learning
CN107832721B (en) * 2017-11-16 2021-12-07 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN108162893A (en) * 2017-12-25 2018-06-15 芜湖皖江知识产权运营中心有限公司 A kind of running control system applied in intelligent vehicle
CN108128161A (en) * 2017-12-25 2018-06-08 芜湖皖江知识产权运营中心有限公司 A kind of fatigue driving identification control method applied in intelligent vehicle
CN107958573A (en) * 2017-12-25 2018-04-24 芜湖皖江知识产权运营中心有限公司 A kind of traffic control method being applied in intelligent vehicle
CN108491858A (en) * 2018-02-11 2018-09-04 南京邮电大学 Method for detecting fatigue driving based on convolutional neural networks and system
CN108657185A (en) * 2018-03-28 2018-10-16 贵州大学 A kind of fatigue detecting and control method of vehicle ACC system active safety
CN108888294B (en) * 2018-03-30 2021-02-23 杭州依图医疗技术有限公司 Method and device for detecting width of neck transparent belt
CN109308445B (en) * 2018-07-25 2019-06-25 南京莱斯电子设备有限公司 A kind of fixation post personnel fatigue detection method based on information fusion
CN110855934A (en) * 2018-08-21 2020-02-28 北京嘀嘀无限科技发展有限公司 Fatigue driving identification method, device and system, vehicle-mounted terminal and server
CN109460704B (en) * 2018-09-18 2020-09-15 厦门瑞为信息技术有限公司 Fatigue detection method and system based on deep learning and computer equipment
CN110956061B (en) * 2018-09-27 2024-04-16 北京市商汤科技开发有限公司 Action recognition method and device, and driver state analysis method and device
CN109376649A (en) * 2018-10-20 2019-02-22 张彦龙 A method of likelihood figure, which is reduced, from eye gray level image calculates the upper lower eyelid of identification
CN109614892A (en) * 2018-11-26 2019-04-12 青岛小鸟看看科技有限公司 A kind of method for detecting fatigue driving, device and electronic equipment
CN109598237A (en) * 2018-12-04 2019-04-09 青岛小鸟看看科技有限公司 A kind of fatigue state detection method and device
CN109711309B (en) * 2018-12-20 2020-11-27 北京邮电大学 Method for automatically identifying whether portrait picture is eye-closed
US10713948B1 (en) * 2019-01-31 2020-07-14 StradVision, Inc. Method and device for alerting abnormal driver situation detected by using humans' status recognition via V2V connection
CN110263641A (en) * 2019-05-17 2019-09-20 成都旷视金智科技有限公司 Fatigue detection method, device and readable storage medium storing program for executing
CN110674701A (en) * 2019-09-02 2020-01-10 东南大学 Driver fatigue state rapid detection method based on deep learning
CN110654314A (en) * 2019-09-30 2020-01-07 浙江鸿泉车联网有限公司 Deep learning-based automatic rearview mirror adjusting method and device
CN110826521A (en) * 2019-11-15 2020-02-21 爱驰汽车有限公司 Driver fatigue state recognition method, system, electronic device, and storage medium
CN111428680B (en) * 2020-04-07 2023-10-20 深圳华付技术股份有限公司 Pupil positioning method based on deep learning
CN111521270A (en) * 2020-04-23 2020-08-11 烟台艾睿光电科技有限公司 Body temperature screening alarm system and working method thereof
CN111626221A (en) * 2020-05-28 2020-09-04 四川大学 Driver gazing area estimation method based on human eye information enhancement
CN111724408B (en) * 2020-06-05 2021-09-03 广东海洋大学 Verification experiment method of abnormal driving behavior algorithm model based on 5G communication
CN112036352B (en) * 2020-09-08 2021-09-14 北京嘀嘀无限科技发展有限公司 Training method of fatigue detection model, and fatigue driving detection method and device
CN112528792B (en) * 2020-12-03 2024-05-31 深圳地平线机器人科技有限公司 Fatigue state detection method, device, medium and electronic equipment
CN112966664A (en) * 2021-04-01 2021-06-15 科世达(上海)机电有限公司 Fatigue driving detection method, system and device and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254151A (en) * 2011-06-16 2011-11-23 清华大学 Driver fatigue detection method based on face video analysis
CN103400471A (en) * 2013-08-12 2013-11-20 电子科技大学 Detecting system and detecting method for fatigue driving of driver
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN103839379A (en) * 2014-02-27 2014-06-04 长城汽车股份有限公司 Automobile and driver fatigue early warning detecting method and system for automobile
CN104688251A (en) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigue driving and driving in abnormal posture under multiple postures
CN105096528A (en) * 2015-08-05 2015-11-25 广州云从信息科技有限公司 Fatigue driving detection method and system
CN105139070A (en) * 2015-08-27 2015-12-09 南京信息工程大学 Fatigue driving evaluation method based on artificial nerve network and evidence theory

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129505B2 (en) * 1995-06-07 2015-09-08 American Vehicular Sciences Llc Driver fatigue monitoring system and method
JP2007122362A (en) * 2005-10-27 2007-05-17 Toyota Motor Corp State estimation method using neural network and state estimation apparatus using neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254151A (en) * 2011-06-16 2011-11-23 清华大学 Driver fatigue detection method based on face video analysis
CN103400471A (en) * 2013-08-12 2013-11-20 电子科技大学 Detecting system and detecting method for fatigue driving of driver
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN103839379A (en) * 2014-02-27 2014-06-04 长城汽车股份有限公司 Automobile and driver fatigue early warning detecting method and system for automobile
CN104688251A (en) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigue driving and driving in abnormal posture under multiple postures
CN105096528A (en) * 2015-08-05 2015-11-25 广州云从信息科技有限公司 Fatigue driving detection method and system
CN105139070A (en) * 2015-08-27 2015-12-09 南京信息工程大学 Fatigue driving evaluation method based on artificial nerve network and evidence theory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Extensive Facial Landmark Localization with Coarse-to-fine Convolutional Network Cascade; Erjin Zhou et al.; 2013 IEEE International Conference on Computer Vision Workshops; 2013-12-08; main text, page 387, right column, line 8 to page 389, right column, line 14, and fig. 2

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109770925A (en) * 2019-02-03 2019-05-21 闽江学院 A kind of fatigue detection method based on depth time-space network

Also Published As

Publication number Publication date
CN105769120A (en) 2016-07-20

Similar Documents

Publication Publication Date Title
CN105769120B (en) Method for detecting fatigue driving and device
EP1961622B1 (en) Safety-travel assistance device
CN110765807B (en) Driving behavior analysis and processing method, device, equipment and storage medium
EP1553516B1 (en) Pedestrian extracting apparatus
WO2020029444A1 (en) Method and system for detecting attention of driver while driving
US9662977B2 (en) Driver state monitoring system
Doshi et al. A comparative exploration of eye gaze and head motion cues for lane change intent prediction
CN103714659B (en) Fatigue driving identification system based on double-spectrum fusion
CN105654753A (en) Intelligent vehicle-mounted safe driving assistance method and system
CN104013414A (en) Driver fatigue detecting system based on smart mobile phone
KR20190019840A (en) Driver assistance system and method for object detection and notification
JP7290930B2 (en) Occupant modeling device, occupant modeling method and occupant modeling program
CN107924466A (en) Vision system and method for motor vehicles
CN109664889B (en) Vehicle control method, device and system and storage medium
JP2010191793A (en) Alarm display and alarm display method
CN108482367A (en) A kind of method, apparatus and system driven based on intelligent back vision mirror auxiliary
JP2012164026A (en) Image recognition device and display device for vehicle
JP2014146267A (en) Pedestrian detection device and driving support device
US20120189161A1 (en) Visual attention apparatus and control method based on mind awareness and display apparatus using the visual attention apparatus
Xiao et al. Detection of drivers visual attention using smartphone
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
Riera et al. Detecting and tracking unsafe lane departure events for predicting driver safety in challenging naturalistic driving data
EP3480726B1 (en) A vision system and method for autonomous driving and/or driver assistance in a motor vehicle
JP2006010652A (en) Object-detecting device
CN112258813A (en) Vehicle active safety control method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant