CN110728241A - Driver fatigue detection method based on deep learning multi-feature fusion - Google Patents
- Publication number
- CN110728241A (application CN201910974764.3A)
- Authority
- CN
- China
- Prior art keywords
- fatigue detection
- deep learning
- eye
- detection method
- mouth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computing Systems (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Human Computer Interaction (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a driver fatigue detection method based on deep learning multi-feature fusion, relating to the technical field of driver fatigue detection, and comprising the following steps. Data preprocessing: continuously capture images of the driver with an acquisition device such as a camera or webcam, and preprocess the obtained images. Face recognition and feature extraction: obtain the face, eye and mouth regions on the preprocessed image through a multi-task cascaded convolutional network model. Parallel detection: carry out head pose estimation, eye state estimation and mouth state estimation respectively. Fatigue detection: judge whether the driver is fatigued according to the PERCLOS parameter, the yawning parameter and the nodding parameter. Compared with the prior art, the method avoids complex image processing in practical application, performs well in real-time operation, accuracy and robustness, and achieves higher fatigue detection accuracy.
Description
Technical Field
The invention relates to the field of driver fatigue detection, in particular to a driver fatigue detection method based on deep learning multi-feature fusion.
Background
In recent decades, with rising living standards, the number of automobiles has grown steadily. While cars make life faster and more convenient, frequent traffic accidents cause severe economic losses and threaten people's lives. Research on driver fatigue detection technology is therefore of great significance for preventing traffic accidents.
Driver fatigue is mainly detected and monitored from three aspects:
method based on vehicle state: fatigue is detected through the steering wheel angle, the grip strength on the steering wheel, the vehicle speed, lane deviation, brake pedal force, accelerator pedal force and the like;
physiological signal measurement: when a driver is fatigued, physiological indexes such as brain waves (electroencephalogram signals), heart electrical activity (electrocardiogram signals) and muscle electrical activity (electromyogram signals) deviate from their normal values;
computer vision detection: when the driver is drowsy, facial features differ from those in the awake state.
However, the analysis results of vehicle-state-based methods are easily affected by external factors such as individual driving habits, weather, vehicle characteristics and road conditions, and are therefore not robust; they can detect an abnormality only when the driver is already close to an accident and cannot give early warning. Meanwhile, most on-board physiological sensors are cumbersome and must be attached to the surface of the skin, which may cause discomfort to the driver and may even cause a traffic safety accident by interfering with normal driving operations.
In computer vision detection methods, the extracted characteristic parameters are mainly eye-movement features (blink frequency, PERCLOS, degree of eye opening and closing, gaze direction and the like). Because facial features change markedly with fatigue and deep learning (DL) performs excellently in image processing, computer-vision-based fatigue detection readily achieves good performance.
However, existing computer-vision-based fatigue detection methods use only eye features or mouth features as the judgment basis, and assume the camera can capture a clear frontal view of the driver's face, so several difficulties remain: (1) data sets related to fatigue detection are relatively scarce; (2) interference resistance is weak: in actual driving, many factors may cause phenomena such as uneven illumination in the acquired images, which affects fatigue detection; (3) changes in facial orientation and head pose affect facial image acquisition and hence fatigue detection; (4) existing detection systems have low real-time performance and accuracy.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a driver fatigue detection method based on deep learning multi-feature fusion, comprising the following steps. Data preprocessing: continuously capture images with an acquisition device such as a camera or webcam, and preprocess the obtained images. Face recognition and feature extraction: obtain the face, eye and mouth regions on the preprocessed image through a multi-task cascaded convolutional network model. Parallel detection: carry out head pose estimation, eye state estimation and mouth state estimation respectively. Fatigue detection: judge whether fatigue exists according to the PERCLOS parameter, the yawning parameter and the nodding parameter. The method thereby addresses prior-art problems such as weak interference resistance and the influence of facial orientation and head pose changes on facial image acquisition and fatigue detection.
A driver fatigue detection method based on deep learning multi-feature fusion comprises the following steps:
1) data preprocessing: continuously capture images with an acquisition device such as a camera or webcam, and preprocess the obtained images;
2) face recognition and feature extraction: obtain the face, eye and mouth regions on the preprocessed image through a multi-task cascaded convolutional network (MTCNN) model;
3) parallel detection: carry out head pose estimation, eye state estimation and mouth state estimation respectively;
4) fatigue detection: judge whether fatigue occurs according to the PERCLOS parameter, the yawning parameter and the nodding parameter.
As a further aspect of the present invention, in step 1), the preprocessing includes filtering denoising and image histogram equalization.
As a further aspect of the present invention, in step 2), the eye and mouth region includes a left eye, a right eye, a nose, a left lip end and a right lip end.
As a further aspect of the present invention, in step 3), the head pose estimation inputs the face region into a Hopenet model for judgment, and the eye state estimation and mouth state estimation use a shallow convolutional neural network to judge the eye region and mouth region.
As a further aspect of the present invention, in step 4), the calculation formula of the PERCLOS parameter is: f_per = n/N × 100%, where n is the number of closed-eye frames and N is the total number of frames.
As a further scheme of the present invention, in step 4), the calculation formula of the yawning parameter is: f_M = n/N, where n is the total number of mouth-open frames within the statistical time and N is the total number of frames within the statistical time.
As a further scheme of the invention, in step 4), the calculation formula of the nodding parameter is: F_n = n/N, where n is the total number of frames judged to be in the nodding state within the statistical time and N is the total number of frames within the statistical time.
In conclusion, compared with the prior art, the invention has the following beneficial effects:
the invention trains the neural network on a large amount of data, thereby avoiding the complex image processing process in practical application; the method has good effects on real-time performance, accuracy and robustness, reaches 99.05% on the own data set, and has good detection performance; in addition, under the condition of poor light conditions, the eye features can be accurately extracted, and compared with a fatigue detection method of the PERCLOS standard only detecting eyes, the fatigue detection method has the advantages that after the nodding posture is detected, higher fatigue detection accuracy is obtained, and a better detection result is obtained for fatigue driving of a driver.
Drawings
FIG. 1 is a detection flow chart of a driver fatigue detection method based on deep learning multi-feature fusion.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts based on the embodiments of the present invention belong to the protection scope of the present invention.
Example 1
As shown in fig. 1, a method for detecting fatigue of a driver based on deep learning multi-feature fusion includes the following steps:
1) data preprocessing: continuously capture images with an acquisition device such as a camera or webcam, and preprocess the obtained images;
2) face recognition and feature extraction: obtain the face, eye and mouth regions on the preprocessed image through a multi-task cascaded convolutional network (MTCNN) model;
3) parallel detection: carry out head pose estimation, eye state estimation and mouth state estimation respectively;
4) fatigue detection: judge whether fatigue occurs according to the PERCLOS parameter, the yawning parameter and the nodding parameter.
Further, in step 1), the preprocessing includes filtering denoising and image histogram equalization.
In filtering and denoising, median filtering replaces the pixel value at the center point with the median of its neighborhood, which readily eliminates isolated spots with pixel values of 0 or 255. Its principle is:
F(x,y)=Med(S(x,y))
where S(x, y) is the set of pixels in the N×N neighborhood (N is generally odd) centered at point (x, y), and Med(·) denotes the median function: the gray values of the pixels in the neighborhood centered on (x, y) are sorted, and the gray value at the middle position is output as F(x, y), the new pixel value at (x, y) after median filtering.
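The median rule F(x, y) = Med(S(x, y)) above can be sketched in a few lines of NumPy; the function name and the edge-replication border handling are illustrative choices, not specified in the text:

```python
import numpy as np

def median_filter(img, n=3):
    """Apply an n x n median filter (n odd) to a 2-D grayscale image.

    Borders are handled by edge replication so the output keeps the
    input's shape.
    """
    pad = n // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # S(x, y): the n x n neighbourhood around (x, y)
            window = padded[y:y + n, x:x + n]
            out[y, x] = np.median(window)  # F(x, y) = Med(S(x, y))
    return out

# A salt-and-pepper spike (255) inside a uniform region is removed.
noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter(noisy)
```

In practice `cv2.medianBlur` does the same job far faster; the loop above only illustrates the formula.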
In image histogram equalization, the gray values of the image are remapped point by point so that each gray level contains as nearly equal a number of pixels as possible and the histogram tends toward uniform. Equalization is carried out according to the following formula:
b(x, y) = floor[ ((N − 1) / M) × Σ h_a(n) ], the sum running over n = 0 to a(x, y)
where N represents the number of gray levels of the image, M represents the number of pixel points, h_a(n) is the histogram of the input image a(x, y), and b(x, y) is the output image after histogram equalization.
Further, in step 2), the eye and mouth regions are located by five landmarks: the left eye, right eye, nose, left lip end and right lip end.
The multi-task cascaded convolutional network (MTCNN) consists of 3 network structures (P-Net, R-Net, O-Net):
Proposal Network (P-Net): this network obtains candidate face windows and their bounding-box regression vectors, uses the regression vectors to calibrate the candidate windows, and merges highly overlapping candidates through non-maximum suppression;
Refine Network (R-Net): this network removes false-positive regions through bounding-box regression and non-maximum suppression; because it has one more fully connected layer than P-Net, it suppresses false positives more effectively;
Output Network (O-Net): this network has one more convolutional layer than R-Net, so its results are more refined. Its function is similar to R-Net's, but it supervises the face region more strongly and outputs 5 landmark coordinates representing the left eye, right eye, nose, left lip end and right lip end.
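Given the five landmarks O-Net produces, cropping the eye and mouth regions for the downstream classifiers can be sketched as follows. The patch sizes, the clipping strategy and all names are assumptions for illustration; the text does not specify how the regions are cut:

```python
import numpy as np

def crop_regions(frame, landmarks, eye_size=24, mouth_size=24):
    """Crop fixed-size eye and mouth patches around MTCNN's 5 keypoints.

    landmarks: array of shape (5, 2), ordered as in the text:
    [left eye, right eye, nose, left lip end, right lip end].
    eye_size/mouth_size are half-widths, so patches are 2x that size.
    """
    h, w = frame.shape[:2]

    def patch(cx, cy, half):
        # Clip so the window always fits inside the frame.
        x0 = int(np.clip(cx - half, 0, w - 2 * half))
        y0 = int(np.clip(cy - half, 0, h - 2 * half))
        return frame[y0:y0 + 2 * half, x0:x0 + 2 * half]

    left_eye = patch(*landmarks[0], eye_size)
    right_eye = patch(*landmarks[1], eye_size)
    # The mouth patch is centred between the two lip-end keypoints.
    mouth_c = (landmarks[3] + landmarks[4]) / 2
    mouth = patch(*mouth_c, mouth_size)
    return left_eye, right_eye, mouth

frame = np.zeros((200, 200, 3), dtype=np.uint8)
landmarks = np.array([[60, 80], [140, 80], [100, 110],
                      [70, 140], [130, 140]], dtype=float)
left_eye, right_eye, mouth = crop_regions(frame, landmarks)
```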
Further, in step 3), the head pose estimation inputs the face region into a Hopenet model for judgment, and the eye state estimation and mouth state estimation use a shallow convolutional neural network to judge the eye region and mouth region.
The shallow convolutional neural network takes a two-dimensional image of the eye or mouth directly as input and automatically learns the implicit relationship between image features and data; it is invariant to displacement, scaling and distortion. Its structure comprises convolutional layers, pooling layers, fully connected layers and a softmax classifier:
taking eye state training as an example, the input image is RGB color, and its resize is 48 × 48, after passing through 6 5 × 5 convolution kernels and relu layer maxporoling layer, the output size is 22 × 22 × 6, then continues to pass through 5 × 5 convolution kernels (12), the output size of this layer is 18 × 18 × 12, then after passing through relu layer and max pooling, the output size becomes 9 × 9 × 12. After that, the feature map is converted into a one-dimensional vector and fed to the fully-connected layer, in which the length of the input layer is 972 and the hidden layer is 3 layers, and finally, the output is divided into two types by the softmax layer: open eyes and close eyes.
Hopenet uses ResNet-50 as its backbone, followed by three fully connected (FC) layers, each predicting one Euler angle of the face (yaw, pitch, roll) independently. The output width of each FC layer equals the number of bins: the 180 degree values from −90 to +90 are grouped three at a time, giving an FC output width of 60.
The FC outputs are normalized by softmax, mapping them to probability values that sum to 1; mapping to probabilities makes it convenient to take the expectation. The network output is thus mapped into the interval [0, 60], then multiplied by 3 and reduced by 90, mapping it into [−90, +90]; this is the required regression value, and its loss is computed as the mean squared error.
This regression loss is weighted and summed with the cross-entropy classification loss over the bins, and the final loss is back-propagated to complete the training process.
After a face picture is input into the trained Hopenet, the three angles of the face pose are obtained.
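The softmax-expectation decoding described above (60 bins of 3 degrees, mapped back to [−90, +90]) can be sketched for one angle as:

```python
import numpy as np

def decode_angle(logits):
    """Turn 60 bin logits (3-degree bins over [-90, +90]) into one angle.

    Softmax maps the logits to probabilities, the expectation over the
    bin indices gives a soft position in [0, 60), and idx * 3 - 90
    converts it back to degrees.
    """
    e = np.exp(logits - logits.max())      # numerically stable softmax
    probs = e / e.sum()
    expected_bin = float(np.dot(probs, np.arange(60)))
    return expected_bin * 3.0 - 90.0

# A sharp peak at bin 30 decodes to roughly 0 degrees.
logits = np.full(60, -10.0)
logits[30] = 10.0
yaw = decode_angle(logits)
```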
An LSTM network is introduced to decide whether a nodding action occurs. The three angles obtained above are fed into the network as features: from the current frame and a sequence of video frames before it (15 frames are selected here), the network judges whether the current frame lies within a nodding action. The temporal network used is a single LSTM layer with 2 hidden units, followed by a prediction layer (with weights shared across time) that predicts the nodding score Y_t (0 or 1) of each frame, i.e., whether the current frame belongs to a doze-nod episode.
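The temporal network just described can be sketched in PyTorch as follows (a sketch under the stated sizes: 15-frame windows of 3 angles, 2 hidden units, a shared per-frame 2-class head):

```python
import torch
import torch.nn as nn

class NodDetector(nn.Module):
    """Single LSTM layer (2 hidden units) over a 15-frame window of
    (yaw, pitch, roll) angles, with a prediction layer applied to every
    time step (so its weights are shared across time), scoring whether
    each frame belongs to a nodding episode."""

    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=2, batch_first=True)
        self.head = nn.Linear(2, 2)  # per-frame nod / no-nod logits

    def forward(self, angles):            # angles: (batch, 15, 3)
        hidden, _ = self.lstm(angles)     # (batch, 15, 2)
        return self.head(hidden)          # (batch, 15, 2)

scores = NodDetector()(torch.randn(4, 15, 3))
```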
Further, in step 4), the calculation formula of the PERCLOS parameter is: f_per = n/N × 100%, where n is the number of closed-eye frames and N is the total number of frames.
The PERCLOS parameter can be used to quantify the degree to which the driver closes his eyes.
Further, in step 4), the calculation formula of the yawning parameter is: f_M = n/N, where n is the total number of mouth-open frames within the statistical time and N is the total number of frames within the statistical time.
The larger the yawning parameter, the greater the degree of fatigue.
Further, in step 4), the calculation formula of the nodding parameter is: F_n = n/N, where n is the total number of frames judged to be in the nodding state within the statistical time and N is the total number of frames within the statistical time.
Fatigue thresholds are set on the PERCLOS parameter, the yawning parameter and the nodding parameter, and the whole system can then be used for fatigue detection.
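Computing the three parameters over a statistics window and applying thresholds can be sketched as below. The threshold values and the "any parameter exceeds its threshold" fusion rule are illustrative assumptions; the text only says thresholds are set on the three parameters:

```python
def fatigue_parameters(eye_closed, mouth_open, nodding):
    """Compute f_per, f_M and F_n from per-frame boolean states over the
    statistics window. All three share the same form n / N: the fraction
    of frames in the given state."""
    total = len(eye_closed)
    f_per = sum(eye_closed) / total   # PERCLOS
    f_m = sum(mouth_open) / total     # yawning parameter
    f_n = sum(nodding) / total        # nodding parameter
    return f_per, f_m, f_n

def is_fatigued(f_per, f_m, f_n, thresholds=(0.4, 0.3, 0.3)):
    """Flag fatigue when any parameter exceeds its threshold (assumed
    values and fusion rule, for illustration only)."""
    return f_per > thresholds[0] or f_m > thresholds[1] or f_n > thresholds[2]

# 30-frame window with 15 closed-eye frames: f_per = 0.5 > 0.4.
closed = [True] * 15 + [False] * 15
f_per, f_m, f_n = fatigue_parameters(closed, [False] * 30, [False] * 30)
```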
In summary, the working principle of the invention is as follows:
First, the acquisition device continuously photographs the driver, and the obtained pictures are preprocessed. Second, the MTCNN extracts the face and key points of each preprocessed frame, from which the eye and mouth regions are obtained. Then SC-Net detects the eye state and mouth state of each frame; the face picture is input into the Hopenet network to obtain the three face-pose angles of the current frame, which are fed into the LSTM network to judge whether the current frame lies within a doze-nod action interval. Finally, a drowsiness value is calculated from the time series of the eye, mouth and nodding states; if the result exceeds the threshold, the system prompts and warns the driver of the drowsy or fatigued state.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. A driver fatigue detection method based on deep learning multi-feature fusion is characterized by comprising the following steps:
1) data preprocessing: continuously capture images with an acquisition device such as a camera or webcam, and preprocess the obtained images;
2) face recognition and feature extraction: obtain the face, eye and mouth regions on the preprocessed image through a multi-task cascaded convolutional network (MTCNN) model;
3) parallel detection: carry out head pose estimation, eye state estimation and mouth state estimation respectively;
4) fatigue detection: judge whether fatigue occurs according to the PERCLOS parameter, the yawning parameter and the nodding parameter.
2. The deep learning multi-feature fusion based driver fatigue detection method as claimed in claim 1, wherein: in the step 1), the preprocessing includes filtering and denoising and image histogram equalization.
3. The deep learning multi-feature fusion based driver fatigue detection method as claimed in claim 1, wherein: in step 2), the eye and mouth regions are located by the left eye, right eye, nose, left lip end and right lip end.
4. The deep learning multi-feature fusion based driver fatigue detection method as claimed in claim 1, wherein: in step 3), the head pose estimation inputs the face region into a Hopenet model for judgment, and the eye state estimation and mouth state estimation use a shallow convolutional neural network to judge the eye region and mouth region.
5. The deep learning multi-feature fusion based driver fatigue detection method as claimed in any one of claims 1 to 4, characterized in that: in step 4), the calculation formula of the PERCLOS parameter is: f_per = n/N × 100%, where n is the number of closed-eye frames and N is the total number of frames.
6. The deep learning multi-feature fusion based driver fatigue detection method according to claim 5, characterized in that: in step 4), the calculation formula of the yawning parameter is: f_M = n/N, where n is the total number of mouth-open frames within the statistical time and N is the total number of frames within the statistical time.
7. The deep learning multi-feature fusion based driver fatigue detection method according to claim 6, characterized in that: in step 4), the calculation formula of the nodding parameter is: F_n = n/N, where n is the total number of frames judged to be in the nodding state within the statistical time and N is the total number of frames within the statistical time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910974764.3A CN110728241A (en) | 2019-10-14 | 2019-10-14 | Driver fatigue detection method based on deep learning multi-feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910974764.3A CN110728241A (en) | 2019-10-14 | 2019-10-14 | Driver fatigue detection method based on deep learning multi-feature fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110728241A true CN110728241A (en) | 2020-01-24 |
Family
ID=69221114
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910974764.3A Pending CN110728241A (en) | 2019-10-14 | 2019-10-14 | Driver fatigue detection method based on deep learning multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110728241A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111680546A (en) * | 2020-04-26 | 2020-09-18 | 北京三快在线科技有限公司 | Attention detection method, attention detection device, electronic equipment and storage medium |
CN111753674A (en) * | 2020-06-05 | 2020-10-09 | 广东海洋大学 | Fatigue driving detection and identification method based on deep learning |
CN112070927A (en) * | 2020-08-28 | 2020-12-11 | 浙江省机电设计研究院有限公司 | Highway vehicle microscopic driving behavior analysis system and analysis method |
CN112163470A (en) * | 2020-09-11 | 2021-01-01 | 高新兴科技集团股份有限公司 | Fatigue state identification method, system and storage medium based on deep learning |
CN112528843A (en) * | 2020-12-07 | 2021-03-19 | 湖南警察学院 | Motor vehicle driver fatigue detection method fusing facial features |
CN112686161A (en) * | 2020-12-31 | 2021-04-20 | 遵义师范学院 | Fatigue driving detection method based on neural network |
CN112712671A (en) * | 2020-12-18 | 2021-04-27 | 济南浪潮高新科技投资发展有限公司 | Intelligent alarm system and method based on 5G |
CN112733628A (en) * | 2020-12-28 | 2021-04-30 | 杭州电子科技大学 | Fatigue driving state detection method based on MobileNet-V3 |
CN113361452A (en) * | 2021-06-24 | 2021-09-07 | 中国科学技术大学 | Driver fatigue driving real-time detection method and system based on deep learning |
CN113591699A (en) * | 2021-07-30 | 2021-11-02 | 西安电子科技大学 | Online visual fatigue detection system and method based on deep learning |
CN114298189A (en) * | 2021-12-20 | 2022-04-08 | 深圳市海清视讯科技有限公司 | Fatigue driving detection method, device, equipment and storage medium |
CN116912808A (en) * | 2023-09-14 | 2023-10-20 | 四川公路桥梁建设集团有限公司 | Bridge girder erection machine control method, electronic equipment and computer readable medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308445A (en) * | 2018-07-25 | 2019-02-05 | 南京莱斯电子设备有限公司 | A kind of fixation post personnel fatigue detection method based on information fusion |
CN110119676A (en) * | 2019-03-28 | 2019-08-13 | 广东工业大学 | A kind of Driver Fatigue Detection neural network based |
CN110276273A (en) * | 2019-05-30 | 2019-09-24 | 福建工程学院 | Merge the Driver Fatigue Detection of facial characteristics and the estimation of image pulse heart rate |
-
2019
- 2019-10-14 CN CN201910974764.3A patent/CN110728241A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308445A (en) * | 2018-07-25 | 2019-02-05 | 南京莱斯电子设备有限公司 | A kind of fixation post personnel fatigue detection method based on information fusion |
CN110119676A (en) * | 2019-03-28 | 2019-08-13 | 广东工业大学 | A kind of Driver Fatigue Detection neural network based |
CN110276273A (en) * | 2019-05-30 | 2019-09-24 | 福建工程学院 | Merge the Driver Fatigue Detection of facial characteristics and the estimation of image pulse heart rate |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111680546A (en) * | 2020-04-26 | 2020-09-18 | 北京三快在线科技有限公司 | Attention detection method, attention detection device, electronic equipment and storage medium |
CN111753674A (en) * | 2020-06-05 | 2020-10-09 | 广东海洋大学 | Fatigue driving detection and identification method based on deep learning |
CN112070927A (en) * | 2020-08-28 | 2020-12-11 | 浙江省机电设计研究院有限公司 | Highway vehicle microscopic driving behavior analysis system and analysis method |
CN112163470A (en) * | 2020-09-11 | 2021-01-01 | 高新兴科技集团股份有限公司 | Fatigue state identification method, system and storage medium based on deep learning |
CN112528843A (en) * | 2020-12-07 | 2021-03-19 | 湖南警察学院 | Motor vehicle driver fatigue detection method fusing facial features |
CN112712671A (en) * | 2020-12-18 | 2021-04-27 | 济南浪潮高新科技投资发展有限公司 | Intelligent alarm system and method based on 5G |
CN112733628A (en) * | 2020-12-28 | 2021-04-30 | 杭州电子科技大学 | Fatigue driving state detection method based on MobileNet-V3 |
CN112733628B (en) * | 2020-12-28 | 2024-07-16 | 杭州电子科技大学 | MobileNet-V3-based fatigue driving state detection method |
CN112686161A (en) * | 2020-12-31 | 2021-04-20 | 遵义师范学院 | Fatigue driving detection method based on neural network |
CN113361452A (en) * | 2021-06-24 | 2021-09-07 | 中国科学技术大学 | Driver fatigue driving real-time detection method and system based on deep learning |
CN113591699A (en) * | 2021-07-30 | 2021-11-02 | 西安电子科技大学 | Online visual fatigue detection system and method based on deep learning |
CN113591699B (en) * | 2021-07-30 | 2024-02-09 | 西安电子科技大学 | Online visual fatigue detection system and method based on deep learning |
CN114298189A (en) * | 2021-12-20 | 2022-04-08 | 深圳市海清视讯科技有限公司 | Fatigue driving detection method, device, equipment and storage medium |
CN116912808A (en) * | 2023-09-14 | 2023-10-20 | 四川公路桥梁建设集团有限公司 | Bridge girder erection machine control method, electronic equipment and computer readable medium |
CN116912808B (en) * | 2023-09-14 | 2023-12-01 | 四川公路桥梁建设集团有限公司 | Bridge girder erection machine control method, electronic equipment and computer readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110728241A (en) | Driver fatigue detection method based on deep learning multi-feature fusion | |
CN108596087B (en) | Driving fatigue degree detection regression model based on double-network result | |
CN108875642A (en) | A kind of method of the driver fatigue detection of multi-index amalgamation | |
CN104123549B (en) | Eye positioning method for real-time monitoring of fatigue driving | |
CN109977930B (en) | Fatigue driving detection method and device | |
CN109389806A (en) | Fatigue driving detection method for early warning, system and medium based on multi-information fusion | |
CN109740477B (en) | Driver fatigue detection system and fatigue detection method thereof | |
CN111753674A (en) | Fatigue driving detection and identification method based on deep learning | |
CN112131981B (en) | Driver fatigue detection method based on skeleton data behavior recognition | |
CN105117681A (en) | Multi-characteristic fatigue real-time detection method based on Android | |
CN111553214B (en) | Method and system for detecting smoking behavior of driver | |
CN107563346A (en) | One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing | |
CN116503794A (en) | Fatigue detection method for cockpit unit | |
CN107967944A (en) | A kind of outdoor environment big data measuring of human health method and platform based on Hadoop | |
Sharma et al. | Development of a drowsiness warning system based on the fuzzy logic | |
CN113408389A (en) | Method for intelligently recognizing drowsiness action of driver | |
Zhou et al. | Development of a camera-based driver state monitoring system for cost-effective embedded solution | |
CN115937828A (en) | Fatigue driving detection method and device and vehicle | |
Guo et al. | Monitoring and detection of driver fatigue from monocular cameras based on Yolo v5 | |
CN113361452B (en) | Driver fatigue driving real-time detection method and system based on deep learning | |
CN114492656A (en) | Fatigue degree monitoring system based on computer vision and sensor | |
CN113989887A (en) | Equipment operator fatigue state detection method based on visual characteristic information fusion | |
CN114241452A (en) | Image recognition-based driver multi-index fatigue driving detection method | |
CN103955695B (en) | The method that computer is based on human eye state in gray level co-occurrence matrixes energy variation Intelligent Recognition video | |
Suresh et al. | Analysis and Implementation of Deep Convolutional Neural Network Models for Intelligent Driver Drowsiness Detection System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200124 |