CN110837815A - Driver state monitoring method based on convolutional neural network - Google Patents
- Publication number
- CN110837815A (application number CN201911119321.2A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- face
- driver
- detection
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a driver state monitoring method based on a convolutional neural network, comprising the following steps: (1) collect video frames of the driver in real time and input each captured image into a face detection network to obtain a face region image; (2) taking the face region image as the center, enlarge its bounding box to obtain a dangerous-action detection region image, input that image into a dangerous-action detection neural network model, and detect whether dangerous actions are present; (3) input the face region image into a face key-point detection neural network model, obtain the face key-point coordinates, and judge from them whether the driver is dozing or yawning; (4) input the face region image into a head pose detection neural network model, obtain head pose information, and judge from it whether the driver is distracted. The method extracts the driver's visual features quickly and accurately, with high detection precision and good accuracy.
Description
Technical Field
The invention relates to a driver state monitoring method, in particular to a driver state monitoring method based on a convolutional neural network.
Background
With the growing number of vehicles in recent years, traffic-related problems have become increasingly prominent, and bad driving behavior is a leading cause of traffic accidents. Such behavior includes using a mobile phone, smoking, and fatigued driving. Effective real-time monitoring and early warning of the driver can fundamentally prevent traffic accidents caused by unsafe driving behavior, support more effective driver management, help change driving habits, and reduce the losses caused by accidents. With the development of Advanced Driver Assistance Systems (ADAS), vehicles can monitor driver performance, alertness and driving intent by means of so-called Driver State Monitoring (DSM) systems to improve traffic safety. However, existing driver state monitoring based on bioelectric signals (ECG, EEG, etc.) is not robust enough and is cumbersome to implement, while monitoring based on traditional image algorithms is slow and limited in function. An effective and simple driver state monitoring method is therefore urgently needed.
Disclosure of Invention
In view of the problems and deficiencies of the prior art, the invention aims to provide a driver state monitoring method based on a convolutional neural network.
To achieve this aim, the invention adopts the following technical scheme:
a driver state monitoring method based on a convolutional neural network comprises the following steps:
(1) collecting a video frame of a driver picture in real time, inputting a collected video frame image into a face detection network for face positioning, and obtaining a face area image;
(2) taking the face region image as the center, expand the bounding box of the face region image to obtain a dangerous-action detection region image, input it into the dangerous-action detection neural network model, detect whether a cigarette or a mobile phone is present in that region, and judge from the detection result whether the driver is smoking or making a call;
(3) inputting the face region image into a face key point detection neural network model, acquiring face key point coordinates of a driver, and judging whether the driver has doze and yawning behaviors or not according to the face key point coordinates;
(4) and inputting the face region image into a head pose detection neural network model, acquiring head pose information of the driver, and judging whether the driver has distraction behavior or not according to the head pose information.
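The four steps can be sketched as a single per-frame processing loop. This is an illustrative outline only, not the patented implementation: the model callables, the threshold constants and the `expand_box` helper are hypothetical stand-ins for the trained networks described below.

```python
EAR_THRESHOLD = 0.2   # assumed value; the patent leaves the threshold unspecified
MAR_THRESHOLD = 0.6   # assumed value

def expand_box(box, factor=1.8):
    """Widen a centre-format (x, y, w, h) box about its centre, as in step (2)."""
    x, y, w, h = box
    return (x, y, w * factor, h * factor)

def monitor_frame(frame, face_net, action_net, landmark_net, pose_net, crop):
    """One iteration of the per-frame driver-state monitoring loop (sketch)."""
    alerts = []
    face_box = face_net(frame)                      # step (1): locate the face
    if face_box is None:
        return alerts
    face_img = crop(frame, face_box)
    action_img = crop(frame, expand_box(face_box))  # step (2): widened region
    detections = action_net(action_img)
    if "cigarette" in detections:
        alerts.append("smoking")
    if "phone" in detections:
        alerts.append("calling")
    ear, mar = landmark_net(face_img)               # step (3): EAR / MAR from key points
    if ear < EAR_THRESHOLD:
        alerts.append("dozing")
    if mar > MAR_THRESHOLD:
        alerts.append("yawning")
    yaw, pitch, roll = pose_net(face_img)           # step (4): head pose in degrees
    if abs(yaw) > 30 or abs(pitch) > 25 or abs(roll) > 25:
        alerts.append("distracted")
    return alerts
```

In practice the sustained-threshold checks of steps (3) and (4) would be accumulated over consecutive frames rather than decided per frame.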
According to the above driver state monitoring method based on the convolutional neural network, preferably, the specific structure of the dangerous motion detection neural network model in the step (2) is a combined model of SSD-MobileNetV2 and a feature pyramid network; the input of the dangerous action detection neural network model is a human face area image, and the output is the bounding box coordinates of cigarettes and mobile phones in the human face area image.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the training data set of the dangerous-action detection neural network model is a custom data set comprising 1451 driver images, of which 775 show making a call and 676 show smoking.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the training method and process of the dangerous-action detection neural network model are as follows: the images in the training data set are resized to 300 × 300 resolution as training input; COCO pre-trained weights are used as network initialization parameters; the RMSProp optimizer is selected with an initial learning rate of 0.04; 24 images are input per training step; and the network completes training after 150,000 iterations.
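As a back-of-envelope check (not a figure stated in the patent), the schedule above implies roughly 2,500 full passes over the 1,451-image data set:

```python
dataset_size = 1451      # custom data set: 775 phone-call + 676 smoking images
batch_size = 24          # images per training step
iterations = 150_000     # total training steps

images_seen = batch_size * iterations
epochs = images_seen / dataset_size   # full passes over the data set
print(round(epochs))                  # prints 2481
```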
According to the above driver state monitoring method based on the convolutional neural network, preferably, the face key-point detection neural network model in step (3) comprises a 1 × 1 convolutional layer, a 3 × 3 depthwise convolutional layer, 13 inverted residual modules and a fully connected layer; its input is the face region image and its output is the coordinates of the face key points in that image. The input face region image is feature-encoded in sequence by the 1 × 1 convolutional layer and the 3 × 3 depthwise convolutional layer, then by the 13 inverted residual modules; the encoded feature map is fed into the fully connected layer to obtain the face key-point coordinates.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the inverted residual module comprises, in sequence, a 1 × 1 convolutional layer, a batch normalization layer, an activation layer, a 3 × 3 depthwise convolutional layer, a batch normalization layer, an activation layer, a 1 × 1 convolutional layer and a batch normalization layer.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the training method and process of the face key-point detection neural network model are as follows: the training data set is 300-W; 16 face key points in each picture are used as ground-truth labels; all face images are resized to 112 × 112 resolution as network input; the learning rate is 0.0001; the training optimizer is Adam; 256 pictures are input per training step; and training completes after 64,000 iterations.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the face key points in step (3) include eye key points and mouth key points; the eye aspect ratio is calculated from the eye key-point coordinates and the mouth aspect ratio from the mouth key-point coordinates; dozing is judged from the eye aspect ratio and yawning from the mouth aspect ratio.
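The "continuously less/greater than a threshold" tests used for the dozing and yawning judgments can be realized with a consecutive-frame counter; a sketch in Python, where the frame count (15, roughly half a second at 30 fps) is an assumed value the patent does not specify:

```python
class SustainedCondition:
    """Fires only after a condition has held for `min_frames` consecutive frames."""

    def __init__(self, min_frames=15):
        self.min_frames = min_frames
        self.count = 0

    def update(self, condition_met):
        """Call once per frame; returns True while the condition is sustained."""
        self.count = self.count + 1 if condition_met else 0
        return self.count >= self.min_frames

# per frame: doze.update(ear < ear_threshold), yawn.update(mar > mar_threshold)
```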
According to the above driver state monitoring method based on the convolutional neural network, preferably, there are 12 eye key points, 6 per eye. The 6 right-eye key points are the right eye's right corner P1 and left corner P4, the upper-eyelid points P2 and P3, the lower-eyelid point P6 on the same vertical line as P2, and the lower-eyelid point P5 on the same vertical line as P3. The 6 left-eye key points are the left eye's right corner P7 and left corner P10, the upper-eyelid points P8 and P9, the lower-eyelid point P12 on the same vertical line as P8, and the lower-eyelid point P11 on the same vertical line as P9. The right eye aspect ratio EAR_right is calculated from the right-eye key points and the left eye aspect ratio EAR_left from the left-eye key points; when EAR_left and EAR_right remain below a certain threshold continuously, the driver is judged to be dozing. EAR_right and EAR_left are calculated as follows:

EAR_right = (‖P2 − P6‖₂ + ‖P3 − P5‖₂) / (2‖P1 − P4‖₂)
EAR_left = (‖P8 − P12‖₂ + ‖P9 − P11‖₂) / (2‖P7 − P10‖₂)

where ‖·‖₂ denotes the Euclidean norm.
According to the above driver state monitoring method based on the convolutional neural network, preferably, there are 4 mouth key points: the left mouth corner P13, the upper-lip center point P14, the right mouth corner P15 and the lower-lip center point P16. The mouth aspect ratio MAR is calculated from the mouth key points, and when MAR remains above a certain threshold continuously, the driver is judged to be yawning. MAR is calculated as follows:

MAR = ‖P14 − P16‖₂ / ‖P13 − P15‖₂

where ‖·‖₂ denotes the Euclidean norm.
According to the above driver state monitoring method based on the convolutional neural network, preferably, the head pose information in step (4) is represented by Euler angles (yaw, pitch and roll); the head pose detection neural network model is a convolutional neural network comprising convolutional layers, batch normalization layers, activation layers and fully connected layers; its input is the face region image and its output is the normalized yaw, pitch and roll values. The input face region image is feature-encoded in sequence by a 3 × 3 convolutional layer, an activation layer, a batch normalization layer, a 3 × 3 convolutional layer, an activation layer, a batch normalization layer, a pooling layer, a 3 × 3 convolutional layer, an activation layer, a batch normalization layer and a 3 × 3 convolutional layer; the resulting feature map is fed into three branches, each comprising a 1 × 1 convolutional layer, an activation layer and a batch normalization layer, which output the three normalized angle values (yaw, pitch and roll). Reference code for the head pose detection neural network model is available at:
https://github.com/opencv/open_model_zoo/tree/master/models/intel/head-pose-estimation-adas-0001。
According to the above driver state monitoring method based on the convolutional neural network, preferably, the face detection network in step (1) is an SSD-MobileNetV2 neural network model; its input is the driver video frame acquired in real time and its output is the bounding box coordinates (x, y, w, h) of the face region image, where x and y are the abscissa and ordinate of the bounding box center, and w and h are the width and height of the bounding box.
According to the above driver state monitoring method based on the convolutional neural network, preferably, based on the real range of head movement, the yaw angle in step (4) ranges from −90 to 90 degrees and the pitch and roll angles range from −70 to 70 degrees. Distraction is judged when the yaw angle exceeds 30 degrees or falls below −30 degrees, or when the roll or pitch angle exceeds 25 degrees or falls below −25 degrees.
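Combining the stated ranges and thresholds, the distraction decision can be sketched as follows. The network outputs are described as normalized; the convention assumed here (values in [−1, 1] scaled by the range limits) is an assumption, not stated in the patent:

```python
# Degree ranges from the stated head-movement limits.
YAW_RANGE, PITCH_RANGE, ROLL_RANGE = 90.0, 70.0, 70.0

def is_distracted(yaw_n, pitch_n, roll_n):
    """Flag distraction from normalized head-pose outputs (assumed in [-1, 1])."""
    yaw = yaw_n * YAW_RANGE       # denormalize back to degrees
    pitch = pitch_n * PITCH_RANGE
    roll = roll_n * ROLL_RANGE
    # Thresholds from the patent: |yaw| > 30 deg, |pitch| or |roll| > 25 deg.
    return abs(yaw) > 30 or abs(pitch) > 25 or abs(roll) > 25
```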
Compared with the prior art, the invention has the following positive beneficial effects:
(1) The method applies deep learning, extracting driver features with lightweight convolutional neural network models for face detection, face key-point detection, head pose detection and dangerous-behavior detection. Compared with traditional vision methods or methods based on bioelectric signals, it extracts visual features more quickly and accurately, with high detection precision and good accuracy; it requires only real-time video frames as input, runs in real time, and imposes no additional cumbersome constraints.
(2) Experiments show that the detection rate of the driver state monitoring method reaches over 80% for smoking, 99% for phone calls, over 80% for dozing, over 90% for yawning and over 90% for distraction; the neural-network-based driver state detection method therefore offers high detection precision and good accuracy.
(3) The driver state detection method is deployed end to end; it is efficient, real-time, clear and precise, and the whole detection process is simple and easy to operate, suitable for large-scale popularization, with good social benefit and promotion value.
Drawings
FIG. 1 is a flow chart of a driver state monitoring method based on a convolutional neural network according to the present invention;
FIG. 2 is a schematic structural diagram of a neural network model for dangerous motion detection according to the present invention;
FIG. 3 is a schematic structural diagram of a face key point detection neural network model according to the present invention;
FIG. 4 is a schematic diagram of key points of a face according to the present invention;
FIG. 5 is a schematic view of the yaw, pitch and roll angles.
Detailed Description
The present invention will be described in further detail with reference to the following examples, which are not intended to limit the scope of the present invention.
Example 1:
a method for monitoring a driver state based on a convolutional neural network, as shown in fig. 1, includes the following steps:
(1) Video frames of the driver are collected in real time by a camera; each frame is resized to 672 × 384 resolution and input into the face detection network for face localization, yielding the face region image.
The face detection network is an SSD-MobileNetV2 neural network model; its input is the driver video frame acquired in real time and its output is the bounding box coordinates (x, y, w, h) of the face region image, where x and y are the abscissa and ordinate of the bounding box center, and w and h are the width and height of the bounding box.
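Before cropping, the centre-format (x, y, w, h) box must be converted to corner coordinates and clamped to the frame. A minimal sketch, assuming the 672 × 384 frame of this embodiment (the helper and its defaults are illustrative, not from the patent):

```python
def box_to_corners(box, frame_w=672, frame_h=384):
    """Convert a centre-format (x, y, w, h) box to clamped (left, top, right, bottom)."""
    x, y, w, h = box
    left = max(0, int(x - w / 2))
    top = max(0, int(y - h / 2))
    right = min(frame_w, int(x + w / 2))
    bottom = min(frame_h, int(y + h / 2))
    return left, top, right, bottom

# face_img = frame[top:bottom, left:right] in NumPy/OpenCV row-major indexing
```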
(2) Taking the face region image as the center, the bounding box of the face region image is enlarged by a factor of 1.8 to obtain the dangerous-action detection region image, which is input into the dangerous-action detection neural network model to detect whether a cigarette or a mobile phone is present; whether the driver is smoking or making a call is judged from the detection result.
The dangerous-action detection neural network model is a combination of SSD-MobileNetV2 and a feature pyramid network (see FIG. 2); its input is the face region image and its output is the bounding box coordinates of cigarettes and mobile phones in that image. If the model outputs a cigarette bounding box, the driver is smoking; if it outputs a mobile phone bounding box, the driver is making a call.
The training process of the dangerous-action detection neural network model is as follows: the training data set is a custom data set comprising 1451 driver images, of which 775 show making a call and 676 show smoking; all images are resized to 300 × 300 resolution as training input; COCO pre-trained weights are used as network initialization parameters; the RMSProp optimizer is selected; 24 images are input per training step; and the network completes training after 150,000 iterations.
(3) The face region image is input into the face key-point detection neural network model to obtain the driver's face key-point coordinates. The face key points include eye key points and mouth key points; the eye aspect ratio is calculated from the eye key-point coordinates and the mouth aspect ratio from the mouth key-point coordinates; dozing is judged from the eye aspect ratio and yawning from the mouth aspect ratio.
The face key-point detection neural network model comprises a 1 × 1 convolutional layer, a 3 × 3 depthwise convolutional layer, 13 inverted residual modules and a fully connected layer (see FIG. 3); its input is the face region image and its output is the face key-point coordinates in that image. The input face region image is feature-encoded in sequence by the 1 × 1 convolutional layer and the 3 × 3 depthwise convolutional layer, then by the 13 inverted residual modules; the encoded feature map is fed into the fully connected layer to obtain the face key-point coordinates. Each inverted residual module comprises, in sequence, a 1 × 1 convolutional layer, a batch normalization layer, an activation layer, a 3 × 3 depthwise convolutional layer, a batch normalization layer, an activation layer, a 1 × 1 convolutional layer and a batch normalization layer.
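The layer ordering described above (1 × 1 expand, 3 × 3 depthwise, 1 × 1 project, each followed by batch normalization) can be illustrated by tracing feature-map shapes through one module. The expansion factor of 6 follows the original MobileNetV2 design and is an assumption here; the residual skip connection used when input and output shapes match is omitted for brevity:

```python
def inverted_residual_shapes(h, w, c_in, c_out, stride=1, expansion=6):
    """Trace (height, width, channels) through one inverted residual module."""
    shapes = [(h, w, c_in)]
    c_mid = c_in * expansion
    shapes.append((h, w, c_mid))        # 1x1 conv + BN + activation: expand channels
    h2, w2 = h // stride, w // stride
    shapes.append((h2, w2, c_mid))      # 3x3 depthwise conv + BN + activation
    shapes.append((h2, w2, c_out))      # 1x1 conv + BN: project back down (no activation)
    return shapes
```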
The training process of the face key-point detection neural network model is as follows: the training data set is 300-W; 16 face key points in each picture are used as ground-truth labels; all face images are resized to 112 × 112 resolution as network input; the learning rate is 0.0001; the training optimizer is Adam; 256 pictures are input per training step; and training completes after 64,000 iterations.
The eye aspect ratio is calculated as follows. There are 12 eye key points, 6 per eye (as shown in FIG. 4). The 6 right-eye key points are the right eye's right corner P1 and left corner P4, the upper-eyelid points P2 and P3, the lower-eyelid point P6 on the same vertical line as P2, and the lower-eyelid point P5 on the same vertical line as P3. The 6 left-eye key points are the left eye's right corner P7 and left corner P10, the upper-eyelid points P8 and P9, the lower-eyelid point P12 on the same vertical line as P8, and the lower-eyelid point P11 on the same vertical line as P9. The right eye aspect ratio EAR_right is calculated from the right-eye key points and the left eye aspect ratio EAR_left from the left-eye key points; when EAR_left and EAR_right remain below a certain threshold continuously, the driver is judged to be dozing. EAR_right and EAR_left are calculated as:

EAR_right = (‖P2 − P6‖₂ + ‖P3 − P5‖₂) / (2‖P1 − P4‖₂)
EAR_left = (‖P8 − P12‖₂ + ‖P9 − P11‖₂) / (2‖P7 − P10‖₂)

where ‖·‖₂ denotes the Euclidean norm.
The mouth aspect ratio is calculated as follows. There are 4 mouth key points: the left mouth corner P13, the upper-lip center point P14, the right mouth corner P15 and the lower-lip center point P16 (as shown in FIG. 4). The mouth aspect ratio MAR is calculated from the mouth key points, and when MAR remains above a certain threshold continuously, the driver is judged to be yawning. MAR is calculated as:

MAR = ‖P14 − P16‖₂ / ‖P13 − P15‖₂

where ‖·‖₂ denotes the Euclidean norm.
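The EAR and MAR formulas can be written directly in code. A minimal Python sketch in which key points are (x, y) tuples ordered as in FIG. 4; the function names and the use of `math.hypot` are illustrative choices, not from the patent:

```python
import math

def _dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|P2 - P6| + |P3 - P5|) / (2 |P1 - P4|), corners in the denominator."""
    return (_dist(p2, p6) + _dist(p3, p5)) / (2 * _dist(p1, p4))

def mouth_aspect_ratio(p13, p14, p15, p16):
    """MAR = |P14 - P16| / |P13 - P15| (vertical mouth opening over mouth width)."""
    return _dist(p14, p16) / _dist(p13, p15)
```

The same two functions serve both eyes: pass (P7…P12) in place of (P1…P6) to obtain EAR_left.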
(4) The face region image is input into the head pose detection neural network model to obtain the driver's head pose information, represented by Euler angles (yaw, pitch and roll; as shown in FIG. 5), and distraction is judged from the head pose information. The head pose detection neural network model is a convolutional neural network comprising, in sequence, convolutional layers, batch normalization layers, activation layers and fully connected layers; its input is the face region image and its output is the normalized yaw, pitch and roll values. Distraction is judged when the yaw angle exceeds 30 degrees or falls below −30 degrees, or when the roll or pitch angle exceeds 25 degrees or falls below −25 degrees.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and scope of the present invention shall fall within its protection scope.
Claims (7)
1. A driver state monitoring method based on a convolutional neural network is characterized by comprising the following steps:
(1) collecting a video frame of a driver picture in real time, inputting a collected video frame image into a face detection network for face positioning, and obtaining a face area image;
(2) taking the face region image as the center, expand the bounding box of the face region image to obtain a dangerous-action detection region image, input it into the dangerous-action detection neural network model, detect whether a cigarette or a mobile phone is present in that region, and judge from the detection result whether the driver is smoking or making a call;
(3) inputting the face region image into a face key point detection neural network model, acquiring face key point coordinates of a driver, and judging whether the driver has doze and yawning behaviors or not according to the face key point coordinates;
(4) and inputting the face region image into a head pose detection neural network model, acquiring head pose information of the driver, and judging whether the driver has distraction behavior or not according to the head pose information.
2. The convolutional neural network-based driver status monitoring method as claimed in claim 1, wherein the dangerous motion detection neural network model in step (2) is a combination of SSD-MobileNetV2 and a feature pyramid network; the input of the dangerous action detection neural network model is a human face area image, and the output is the bounding box coordinates of cigarettes and mobile phones in the human face area image.
3. The convolutional neural network-based driver state monitoring method as claimed in claim 1, wherein the face key-point detection neural network model in step (3) comprises a 1 × 1 convolutional layer, a 3 × 3 depthwise convolutional layer, thirteen inverted residual modules and a fully connected layer; the input of the model is the face region image, and the output is the coordinates of the face key points in the face region image; the input face region image is feature-encoded in sequence by the 1 × 1 convolutional layer and the 3 × 3 depthwise convolutional layer, then by the thirteen inverted residual modules, and the encoded feature map is input into the fully connected layer to obtain the face key-point coordinates.
4. The convolutional neural network-based driver state monitoring method as claimed in claim 3, wherein the inverted residual module comprises, in sequence, a 1 × 1 convolutional layer, a batch normalization layer, an activation layer, a 3 × 3 depthwise convolutional layer, a batch normalization layer, an activation layer, a 1 × 1 convolutional layer and a batch normalization layer.
5. The convolutional neural network-based driver status monitoring method as claimed in claim 3, wherein the face key points in step (3) include eye key points and mouth key points, the eye aspect ratio is calculated through the eye key point coordinate information, the mouth aspect ratio is calculated through the mouth key point information, whether the driver has a dozing behavior or not is judged according to the eye aspect ratio, and whether the driver has a yawning behavior or not is judged according to the mouth aspect ratio.
6. The convolutional neural network-based driver state monitoring method as claimed in any one of claims 1 to 5, wherein the head pose information in step (4) is represented by a yaw angle, a pitch angle and a roll angle; the head pose detection neural network model is a convolutional neural network comprising a convolutional layer, a batch normalization layer, an activation layer and a pooling layer; the input of the head pose detection neural network model is a human face area image, and the output is normalized yaw angle, pitch angle and roll angle values.
7. The convolutional neural network-based driver state monitoring method as claimed in claim 1, wherein the face detection network in step (1) is an SSD-MobileNetV2 network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911119321.2A CN110837815A (en) | 2019-11-15 | 2019-11-15 | Driver state monitoring method based on convolutional neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911119321.2A CN110837815A (en) | 2019-11-15 | 2019-11-15 | Driver state monitoring method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110837815A true CN110837815A (en) | 2020-02-25 |
Family
ID=69576484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911119321.2A Pending CN110837815A (en) | 2019-11-15 | 2019-11-15 | Driver state monitoring method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110837815A (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011125620A (en) * | 2009-12-21 | 2011-06-30 | Toyota Motor Corp | Biological state detector |
US20190065873A1 (en) * | 2017-08-10 | 2019-02-28 | Beijing Sensetime Technology Development Co., Ltd. | Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles |
CN109937152A (en) * | 2017-08-10 | 2019-06-25 | 北京市商汤科技开发有限公司 | Driving state monitoring method and apparatus, driver monitoring system, and vehicle |
CN108960065A (en) * | 2018-06-01 | 2018-12-07 | 浙江零跑科技有限公司 | Vision-based driving behavior detection method |
CN109875568A (en) * | 2019-03-08 | 2019-06-14 | 北京联合大学 | Head pose detection method for fatigue driving detection |
CN110046560A (en) * | 2019-03-28 | 2019-07-23 | 青岛小鸟看看科技有限公司 | Dangerous driving behavior detection method and camera |
CN110309760A (en) * | 2019-06-26 | 2019-10-08 | 深圳市微纳集成电路与系统应用研究院 | Method for detecting a driver's driving behavior |
Non-Patent Citations (2)
Title |
---|
XINXING TANG et al.: "Real-time image-based driver fatigue detection and monitoring system for monitoring driver vigilance", 2016 35th Chinese Control Conference (CCC) * |
ZHANG Zhou: "Research on real-time vehicle and traffic sign detection on the Android platform based on TensorFlow", China Masters' Theses Full-text Database, Information Science and Technology * |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274930A (en) * | 2020-04-02 | 2020-06-12 | 成都鼎安华智慧物联网股份有限公司 | Helmet wearing and smoking behavior identification method based on deep learning |
CN111563435A (en) * | 2020-04-28 | 2020-08-21 | 深圳市优必选科技股份有限公司 | Sleep state detection method and device for user |
CN111563468B (en) * | 2020-05-13 | 2023-04-07 | 电子科技大学 | Driver abnormal behavior detection method based on attention of neural network |
CN111563468A (en) * | 2020-05-13 | 2020-08-21 | 电子科技大学 | Driver abnormal behavior detection method based on attention of neural network |
CN111723695A (en) * | 2020-06-05 | 2020-09-29 | 广东海洋大学 | Improved YOLOv3-based driver key sub-area identification and positioning method |
CN111814568A (en) * | 2020-06-11 | 2020-10-23 | 开易(北京)科技有限公司 | Target detection method and device for monitoring state of driver |
CN111814568B (en) * | 2020-06-11 | 2022-08-02 | 开易(北京)科技有限公司 | Target detection method and device for monitoring state of driver |
CN111814637A (en) * | 2020-06-29 | 2020-10-23 | 北京百度网讯科技有限公司 | Dangerous driving behavior recognition method and device, electronic equipment and storage medium |
WO2022001091A1 (en) * | 2020-06-29 | 2022-01-06 | 北京百度网讯科技有限公司 | Dangerous driving behavior recognition method and apparatus, and electronic device and storage medium |
JP2022544635A (en) * | 2020-06-29 | 2022-10-20 | Beijing Baidu Netcom Science Technology Co., Ltd. | Dangerous driving behavior recognition method, device, electronic device and storage medium |
CN111832526A (en) * | 2020-07-23 | 2020-10-27 | 浙江蓝卓工业互联网信息技术有限公司 | Behavior detection method and device |
CN111832526B (en) * | 2020-07-23 | 2024-06-11 | 浙江蓝卓工业互联网信息技术有限公司 | Behavior detection method and device |
CN112115775B (en) * | 2020-08-07 | 2024-06-07 | 北京工业大学 | Smoking behavior detection method based on computer vision in monitoring scene |
CN112115775A (en) * | 2020-08-07 | 2020-12-22 | 北京工业大学 | Smoking behavior detection method based on computer vision in monitoring scene |
CN111985403A (en) * | 2020-08-20 | 2020-11-24 | 中再云图技术有限公司 | Distracted driving detection method based on face posture estimation and sight line deviation |
CN112052815A (en) * | 2020-09-14 | 2020-12-08 | 北京易华录信息技术股份有限公司 | Behavior detection method and device and electronic equipment |
CN112052815B (en) * | 2020-09-14 | 2024-02-20 | 北京易华录信息技术股份有限公司 | Behavior detection method and device and electronic equipment |
CN112183356A (en) * | 2020-09-28 | 2021-01-05 | 广州市几米物联科技有限公司 | Driving behavior detection method and device and readable storage medium |
CN112380977A (en) * | 2020-11-12 | 2021-02-19 | 深兰人工智能芯片研究院(江苏)有限公司 | Smoking behavior detection method and device |
CN112464797B (en) * | 2020-11-25 | 2024-04-02 | 创新奇智(成都)科技有限公司 | Smoking behavior detection method and device, storage medium and electronic equipment |
CN112464797A (en) * | 2020-11-25 | 2021-03-09 | 创新奇智(成都)科技有限公司 | Smoking behavior detection method and device, storage medium and electronic equipment |
CN112434612A (en) * | 2020-11-25 | 2021-03-02 | 创新奇智(上海)科技有限公司 | Smoking detection method and device, electronic equipment and computer readable storage medium |
CN114842498A (en) * | 2021-02-02 | 2022-08-02 | 北京京东振世信息技术有限公司 | Smoking behavior detection method and device |
CN113033374A (en) * | 2021-03-22 | 2021-06-25 | 开放智能机器(上海)有限公司 | Artificial intelligence dangerous behavior identification method and device, electronic equipment and storage medium |
CN112990069A (en) * | 2021-03-31 | 2021-06-18 | 新疆爱华盈通信息技术有限公司 | Abnormal driving behavior detection method, device, terminal and medium |
CN113435267B (en) * | 2021-06-09 | 2023-06-23 | 江苏第二师范学院 | Online education student concentration discrimination method based on an improved convolutional neural network |
CN113435267A (en) * | 2021-06-09 | 2021-09-24 | 江苏第二师范学院 | Online education student concentration discrimination method based on improved convolutional neural network |
CN113449656A (en) * | 2021-07-01 | 2021-09-28 | 淮阴工学院 | Driver state identification method based on improved convolutional neural network |
CN113591615A (en) * | 2021-07-14 | 2021-11-02 | 广州敏视数码科技有限公司 | Multi-model-based driver smoking detection method |
CN113537115A (en) * | 2021-07-26 | 2021-10-22 | 东软睿驰汽车技术(沈阳)有限公司 | Method and device for acquiring driving state of driver and electronic equipment |
CN117542104A (en) * | 2024-01-09 | 2024-02-09 | 浙江图讯科技股份有限公司 | Face three-dimensional key point detection method based on self-supervision auxiliary learning |
CN117542104B (en) * | 2024-01-09 | 2024-04-30 | 浙江图讯科技股份有限公司 | Face three-dimensional key point detection method based on self-supervision auxiliary learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110837815A (en) | Driver state monitoring method based on convolutional neural network | |
US20220092882A1 (en) | Living body detection method based on facial recognition, and electronic device and storage medium | |
CN109815881A (en) | Training method for behavior recognition model, behavior recognition method, apparatus, and device | |
US20220277558A1 (en) | Cascaded Neural Network-Based Attention Detection Method, Computer Device, And Computer-Readable Storage Medium | |
CN110135476A (en) | Personal safety equipment detection method, apparatus, device, and system | |
CN112381061B (en) | Facial expression recognition method and system | |
CN111401188B (en) | Traffic police gesture recognition method based on human body key point characteristics | |
CN112270381B (en) | People flow detection method based on deep learning | |
CN112069863B (en) | Face feature validity determination method and electronic equipment | |
CN112560649A (en) | Behavior action detection method, system, equipment and medium | |
CN111277751B (en) | Photographing method and device, storage medium and electronic equipment | |
CN112614136A (en) | Infrared small target real-time instance segmentation method and device | |
CN114677754A (en) | Behavior recognition method and device, electronic equipment and computer readable storage medium | |
CN112633387A (en) | Safety reminding method, device, equipment, system and storage medium | |
CN113553893A (en) | Human body falling detection method and device based on deep neural network and electronic equipment | |
CN112215041B (en) | End-to-end lane line detection method and system | |
CN112700568B (en) | Identity authentication method, equipment and computer readable storage medium | |
CN114399813A (en) | Face shielding detection method, model training method and device and electronic equipment | |
CN111052127A (en) | System and method for fatigue detection | |
Poon et al. | Driver distracted behavior detection technology with YOLO-based deep learning networks | |
CN116597612A (en) | Drowning prevention alarm method and system based on human body key point distance matrix | |
CN115641564A (en) | Lightweight parking space detection method | |
CN114170431A (en) | Complex scene vehicle frame number identification method and device based on edge features | |
CN113538193A (en) | Traffic accident handling method and system based on artificial intelligence and computer vision | |
CN111967579A (en) | Method and apparatus for performing convolution calculation on image using convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200225 |