CN117333853A - Driver fatigue monitoring method and device based on image processing and storage medium - Google Patents


Info

Publication number
CN117333853A
CN117333853A (application CN202311402204.3A)
Authority
CN
China
Prior art keywords
driver
fatigue
behavior
state
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311402204.3A
Other languages
Chinese (zh)
Inventor
于红超
唐如意
徐开庭
赵国志
胡德民
孙洪福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Seres New Energy Automobile Design Institute Co Ltd
Original Assignee
Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Seres New Energy Automobile Design Institute Co Ltd filed Critical Chongqing Seres New Energy Automobile Design Institute Co Ltd
Priority to CN202311402204.3A priority Critical patent/CN117333853A/en
Publication of CN117333853A publication Critical patent/CN117333853A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a driver fatigue monitoring method and device based on image processing, and a storage medium. The method comprises the following steps: performing image optimization on visual image data of a driver to obtain an optimized image to be processed; performing deep processing on the image to be processed to identify specific behavior features related to the driver's face and other body parts; evaluating the driver's fatigue level based on the identified facial features and generating a fatigue status report; evaluating the driver's behavior pattern based on the identified features of the face and other body parts and generating a behavior state report; and generating a notification signal according to the evaluated fatigue level and behavior pattern, and screening the notification signal with a signal processing mechanism to obtain a screened driver state notification signal. The method and device can comprehensively evaluate the driver's state and provide more accurate and reliable monitoring results, thereby improving driving safety.

Description

Driver fatigue monitoring method and device based on image processing and storage medium
Technical Field
The application relates to the technical field of new energy automobiles, in particular to a driver fatigue monitoring method and device based on image processing and a storage medium.
Background
With the development of the automobile industry and the progress of technology, ensuring driving safety has become an important development goal. Driver fatigue and distraction are among the main causes of traffic accidents. To address this problem, driver monitoring systems (Driver Monitoring System, abbreviated DMS) have been developed to monitor and evaluate the state and behavior of the driver in real time. A DMS collects data using various sensors and cameras, and detects the driver's attention, fatigue, distraction, and other behavioral characteristics through analysis algorithms. When the system detects that the driver is inattentive or tired, it issues alerts and prompts to ensure the driver's alertness and safety.
However, current driver monitoring methods have several key issues. First, many existing systems focus primarily on monitoring the driver's facial expression and eye movements, while ignoring other body parts related to fatigue and attention, such as gestures or postures. This results in an incomplete assessment, possibly missing some key signs of fatigue or distraction. Second, some algorithms may not be accurate enough in processing images and data, resulting in poor monitoring results. This may lead to false positives or false negatives, which reduce the driver's confidence in the system. Finally, due to insufficiently comprehensive evaluation and poor monitoring effect, the existing DMS may not be able to effectively improve driving safety.
Disclosure of Invention
In view of this, the embodiments of the present application provide a driver fatigue monitoring method, device, and storage medium based on image processing, so as to solve the problems in the prior art of incomplete assessment, inaccurate and unreliable monitoring results, and reduced driving safety.
In a first aspect of the embodiments of the present application, there is provided a driver fatigue monitoring method based on image processing, including: capturing visual image data of the driver in the cockpit in real time in response to an activation operation of the driver monitoring system; performing preliminary image optimization on the acquired visual image data of the driver using the DMS camera to obtain an optimized image to be processed; performing deep processing on the image to be processed using a predetermined driver state analysis algorithm, so as to identify specific behavior features related to the driver's face and other body parts from the image to be processed; evaluating the driver's fatigue level based on the identified facial features and generating a fatigue status report; evaluating the driver's behavior pattern based on the identified features of the face and other body parts and generating a behavior state report; integrating the fatigue status report and the behavior state report to generate a complete comprehensive report on the driver's state and behavior; generating corresponding notification signals according to the evaluated fatigue level and behavior pattern of the driver, and screening the notification signals using a preset signal processing mechanism to obtain a screened driver state notification signal; and sending the screened driver state notification signal to the vehicle-mounted display system at a preset time point or time interval, so as to display the driver's current fatigue degree and behavior state on the vehicle-mounted side in real time.
In a second aspect of the embodiments of the present application, there is provided a driver fatigue monitoring device based on image processing, including: a capturing module configured to capture visual image data of the driver in the cockpit in real time in response to an activation operation of the driver monitoring system; an optimization module configured to perform preliminary image optimization on the acquired visual image data of the driver using the DMS camera to obtain an optimized image to be processed; an identification module configured to perform deep processing on the image to be processed using a predetermined driver state analysis algorithm, so as to identify specific behavior features related to the driver's face and other body parts from the image to be processed; a fatigue evaluation module configured to evaluate the driver's fatigue level based on the identified facial features and generate a fatigue status report; a behavior evaluation module configured to evaluate the driver's behavior pattern based on the identified features of the face and other body parts and generate a behavior state report; an integration module configured to integrate the fatigue status report and the behavior state report to generate a complete comprehensive report on the driver's state and behavior; a screening module configured to generate corresponding notification signals according to the evaluated fatigue level and behavior pattern of the driver, and screen the notification signals using a preset signal processing mechanism to obtain a screened driver state notification signal; and a sending module configured to send the screened driver state notification signal to the vehicle-mounted display system at a preset time point or time interval, so as to display the driver's current fatigue degree and behavior state on the vehicle-mounted side in real time.
In a third aspect of the embodiments of the present application, there is provided an electronic device including a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
In a fourth aspect of the embodiments of the present application, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
At least one of the technical solutions adopted in the embodiments of the present application can achieve the following beneficial effects:
capturing visual image data of the driver in the cockpit in real time in response to an activation operation of the driver monitoring system; performing preliminary image optimization on the acquired visual image data of the driver using the DMS camera to obtain an optimized image to be processed; performing deep processing on the image to be processed using a predetermined driver state analysis algorithm, so as to identify specific behavior features related to the driver's face and other body parts from the image to be processed; evaluating the driver's fatigue level based on the identified facial features and generating a fatigue status report; evaluating the driver's behavior pattern based on the identified features of the face and other body parts and generating a behavior state report; integrating the fatigue status report and the behavior state report to generate a complete comprehensive report on the driver's state and behavior; generating corresponding notification signals according to the evaluated fatigue level and behavior pattern of the driver, and screening the notification signals using a preset signal processing mechanism to obtain a screened driver state notification signal; and sending the screened driver state notification signal to the vehicle-mounted display system at a preset time point or time interval, so as to display the driver's current fatigue degree and behavior state on the vehicle-mounted side in real time. The method and device can comprehensively evaluate the driver's state and provide more accurate and reliable monitoring results, thereby improving driving safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a driver fatigue monitoring method based on image processing according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a driver fatigue monitoring device based on image processing according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In the technical field of new energy automobiles, drivers of ordinary vehicles are prone to the following dangerous driving situations:
1. Fatigued driving: without a monitoring system to detect the driver's fatigue state, the driver may drive while excessively fatigued or doze off, increasing the risk of accidents.
2. Distraction: without driver monitoring, the driver is more easily distracted, for example by using a cell phone, talking to passengers, or performing other unrelated actions, thereby paying less attention to road and traffic conditions.
3. Drunk or drugged driving: without a monitoring system it is not possible to detect whether the driver has been drinking or is affected by medication, which may lead to driving under the influence and increase the risk of accidents.
4. Illegal behavior: the lack of a monitoring system may make it easier for the driver to violate traffic rules, such as speeding or running red lights, increasing the probability of traffic accidents.
5. Unsafe driving behavior: without a monitoring system, dangerous driving behaviors such as frequent lane changes or sudden braking cannot be discovered and corrected in time, which may lead to accidents.
In summary, to improve driving safety and protect the driver's personal and property safety, a monitoring system is needed to monitor the driver's behaviors and states. A driver monitoring system is a technical system mounted on a vehicle that monitors and evaluates the driver's behavior and state. It collects data using various sensors and cameras and, through analysis algorithms, detects the driver's attention, fatigue, distraction, and other behavioral characteristics. The system can provide real-time alerts and alarms to ensure that the driver remains alert, attentive, and safe. Common functions include detecting the driver's eye movements, facial expressions, gestures, fatigue levels, and lane departure. Driver monitoring systems are widely used in commercial vehicles and road transportation to improve driver safety and driving efficiency.
Related technical terms are as follows:
OMS: Occupancy Monitoring System (whole-vehicle passenger monitoring)
ICU: Instrument Cluster Unit (instrument panel unit)
IVI: In-Vehicle Infotainment (vehicle-mounted infotainment system)
HOD: Hands-Off Detection (detection of both hands leaving the steering wheel)
The functions of the DMS include: fatigue detection, distraction detection, driving behavior detection, gaze area tracking, occlusion detection, on-duty detection, and emotion detection.
DMS alarm function priority specification: distraction monitoring > fatigue monitoring > dangerous behavior identification.
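The priority ordering above suggests a simple screening mechanism for the notification signals mentioned later in the method: when several alerts are pending at once, only the highest-priority one is forwarded. A minimal sketch (the function and signal names are illustrative assumptions, not taken from the patent):

```python
# Priority screening for DMS notification signals. The order follows the
# specification above: distraction > fatigue > dangerous behavior
# (lower number = higher priority).
PRIORITY = {"distraction": 0, "fatigue": 1, "dangerous_behavior": 2}

def screen_signals(pending):
    """Return the single highest-priority pending signal, or None."""
    valid = [s for s in pending if s in PRIORITY]
    if not valid:
        return None
    return min(valid, key=lambda s: PRIORITY[s])
```

For example, if both a fatigue alert and a distraction alert are pending, only the distraction alert survives screening.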
Using the DMS requires a FaceID function: the driver logs into the vehicle system via face recognition, and the camera continuously captures the driver's face image in real time. At the function switch interface, the user must be informed that camera permission will be requested for face recognition, that the driver's facial feature codes are collected for recognition, and that images are processed only locally and are neither stored nor uploaded to a server; turning on the switch indicates consent to these terms. The user enables FaceID via the IVI or by voice.
The following describes the technical scheme of the present application in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flowchart of a driver fatigue monitoring method based on image processing according to an embodiment of the present application. The image processing based driver fatigue monitoring method of fig. 1 may be performed by a driver monitoring system. As shown in fig. 1, the method for monitoring fatigue of a driver based on image processing specifically may include:
S101, in response to an activation operation of the driver monitoring system, capturing visual image data of the driver in the cockpit in real time;
S102, performing preliminary image optimization on the acquired visual image data of the driver using the DMS camera to obtain an optimized image to be processed;
S103, performing deep processing on the image to be processed using a predetermined driver state analysis algorithm, so as to identify specific behavior features related to the driver's face and other body parts from the image to be processed;
S104, evaluating the driver's fatigue level based on the identified facial features, and generating a fatigue state report;
S105, evaluating the driver's behavior pattern based on the identified features of the face and other body parts, and generating a behavior state report;
S106, integrating the fatigue state report and the behavior state report to generate a complete comprehensive report on the driver's state and behavior;
S107, generating corresponding notification signals according to the evaluated fatigue level and behavior pattern of the driver, and screening the notification signals using a preset signal processing mechanism to obtain a screened driver state notification signal;
S108, sending the screened driver state notification signal to the vehicle-mounted display system at a preset time point or time interval, so as to display the driver's current fatigue degree and behavior state on the vehicle-mounted side in real time.
In some embodiments, the performing preliminary image optimization on the acquired visual image data of the driver by using the DMS camera to obtain an optimized image to be processed includes:
noise filtering is carried out on the acquired visual image data so as to remove random noise and environmental interference in the visual image data;
adjusting the brightness and contrast of the visual image data according to the current ambient light conditions using an adaptive brightness and contrast adjustment algorithm, so that the driver's face and other relevant parts can be clearly distinguished;
enhancing edge and contour information in the visual image data using edge detection techniques to identify facial and body part features of the driver;
performing scale normalization on the visual image data to adjust the size and proportion of the image to be uniform;
and adjusting the color balance of the visual image data by utilizing a color correction algorithm, and performing format conversion and compression on the optimized visual image data to obtain an image to be processed.
Specifically, the real-time monitoring of the status and behavior of the driver is achieved by a Driver Monitoring System (DMS) that employs a specially designed DMS camera. When the driver enters the vehicle and starts the driver monitoring system, the DMS camera starts capturing visual image data of the driver in the cockpit in real time.
Further, in order to improve the quality of the image and the accuracy of the analysis, a series of optimization processes are performed on the acquired visual image data:
noise filtering: advanced noise filtering techniques, such as median filtering or gaussian filtering, are used to remove random noise in the image due to sensors, ambient light changes, or other sources of interference.
Brightness and contrast adjustment: according to the ambient light conditions of the acquired image, adaptive brightness and contrast adjustment algorithms, such as histogram equalization, are used to ensure that the driver's face and other relevant parts in the image are clearly discernable.
Edge detection: by using Sobel, canny or other edge detection techniques, edge and contour information in the image is enhanced, which is critical for subsequent identification of the driver's facial and body part features.
Scale normalization: for unified analysis and processing, the images are scale normalized, and the size and scale of the images are adjusted to a preset unified size, such as 640x480 pixels.
Color correction: color correction algorithms, such as white balance, are used to adjust the color balance of the image, ensuring the authenticity and consistency of the image colors.
Format conversion and compression: for efficient storage and fast processing, the optimized visual image data is format-converted, e.g. from RAW to JPEG, and compressed appropriately.
After the processing of the embodiment, the image to be processed with high quality and high definition can be obtained, and a solid foundation is provided for subsequent advanced processing and analysis.
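The brightness-and-contrast step above can be illustrated with plain histogram equalization over an 8-bit grayscale image. This is a minimal standard-library sketch of one stage of the pipeline; a production DMS would use an optimized image library, and the nested-list image representation here is purely illustrative:

```python
# Histogram equalization on an 8-bit grayscale image (list of rows of
# pixel values 0..255) -- a simple form of the brightness/contrast
# adjustment described above.

def equalize_histogram(img):
    """Remap gray levels so their cumulative distribution is roughly uniform."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of the 256 possible gray levels.
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf = [0] * 256
    running = 0
    for i, h in enumerate(hist):
        running += h
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        # Standard equalization formula; constant images pass through.
        if n == cdf_min:
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * 255)

    return [[remap(p) for p in row] for row in img]
```

A low-contrast patch such as `[[100, 100], [101, 101]]` is stretched to the full `[[0, 0], [255, 255]]` range, making facial details easier to distinguish in dim cabin lighting.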
In some embodiments, the image to be processed is deeply processed using a predetermined driver state analysis algorithm to identify specific behavioral characteristics related to the face of the driver and other human body parts from the image to be processed, including:
extracting features of the image to be processed by using the deep learning model, automatically identifying facial key points of the driver, and identifying facial expressions of the driver based on the facial key points so as to judge emotion and fatigue states of the driver;
and analyzing a non-face area in the image to be processed by using a target detection algorithm, identifying a behavior mode of the driver, and identifying the dynamic behavior of the driver by using time sequence analysis and combining continuous image frames so as to judge the fatigue degree and the distraction degree of the driver.
Specifically, in order to more comprehensively and accurately analyze the state and behavior of a driver, the embodiment of the application further carries out advanced treatment on the image to be processed. In practical applications, the advanced processing of the image to be processed may include the following:
first, a deep learning model, such as a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN), is used to perform feature extraction on an image to be processed. Through the trained model, the system can automatically identify key points of the face of the driver, such as eyes, nose, mouth and the like. Based on these facial key points, the system further identifies the driver's facial expression. For example, eye closure or eyelid sagging may be indicative of driver fatigue; while frowning or closing the lips may indicate that the driver is in tension or unpleasant mood. In this way, the system can evaluate the driver's emotional and fatigue state in real time in order to take corresponding safety measures.
Next, in addition to facial features, the embodiments of the present application also use an object detection algorithm, such as YOLO or SSD, to analyze non-facial regions in the image to be processed. For example, by analyzing the position and motion of the driver's hand, the system can determine whether the driver is using a cell phone or adjusting in-vehicle equipment; by analyzing the body posture of the driver, the system can determine whether the driver is leaning forward or turning his/her head to talk to other passengers. These information are all important bases for assessing the degree of distraction of the driver.
In addition, in order to more accurately judge the state of the driver, the embodiment of the application also utilizes a time sequence analysis method to identify the dynamic behavior of the driver by combining continuous image frames. For example, if the driver does not turn the steering wheel or frequently turn his head to look at other parts of the vehicle for several seconds, the system may determine that the driver's attention has been distracted or fatigued.
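One common way to turn facial key points into the eye-closure signal described above is the eye aspect ratio (EAR) over six eye landmarks. The patent does not name a specific metric, so this is an illustrative sketch; the 0.2 threshold is a typical value from the literature, not from the patent:

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks (p1..p6), ordered as in the common
    68-point facial landmark scheme: p1/p4 are the eye corners, p2/p6
    and p3/p5 are vertical pairs. EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|).
    Low values indicate a closed eye."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def eye_closed(eye, threshold=0.2):
    """Per-frame closed/open decision from the landmarks."""
    return eye_aspect_ratio(eye) < threshold
```

Feeding this per-frame decision into the time-series analysis above lets the system distinguish a normal blink (a few frames) from a sustained closure (many consecutive frames).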
In some embodiments, evaluating the fatigue level of the driver based on the identified features related to the face of the driver and generating the fatigue status report includes:
identifying eye features of the driver's face to determine whether the driver's eyes are closed, or whether the driver is dozing or distracted;
calculating the driver's eye-closure duration and frequency from the facial key points, so as to judge the driver's fatigue condition accordingly;
identifying the driver's mouth features to evaluate the driver's fatigue degree, and analyzing changes in the driver's facial expression as an additional basis for fatigue judgment;
combining the identified facial features and inputting them into a pre-trained decision tree or random forest model to obtain a driver fatigue prediction, wherein the prediction performs fatigue evaluation by integrating multiple facial features;
and generating a fatigue state report from the driver fatigue prediction, wherein the report describes the driver's fatigue level, and sending an alarm notification according to that level.
Specifically, the embodiments of the application also disclose a technical scheme for evaluating the driver's fatigue level based on facial features and generating a fatigue status report accordingly. The generation of the fatigue status report may include the following:
eye feature identification and analysis: first, the system identifies eye features of the driver's face, including the position of the eyelid, the movement of the eyeball, the shape of the eye, etc. These features may be used to determine whether the driver is closed-eye, dozing or distraction. For example, a sustained eyelid closure may indicate that the driver has been dozing or distracted. In addition, the system further calculates the eye closure duration and frequency of the driver through the facial key points so as to accurately judge the fatigue state of the driver according to the data.
Mouth feature and expression analysis: in addition to the eye features, the system also identifies the driver's mouth features, such as the opening and closing of the mouth and the shape of the lips, to assess the driver's fatigue. For example, frequent yawning may be a sign of driver fatigue. Meanwhile, analyzing changes in the driver's facial expression, such as frowning or mouth opening, provides an additional basis for fatigue judgment.
Fatigue prediction and report generation: the facial features identified above are combined and input into a pre-trained decision tree or random forest model. These models have been trained to identify different fatigue levels and to perform fatigue assessment from the integrated facial features. For example, if the driver's eye-closure time exceeds a certain threshold and is accompanied by frequent yawning, the model may predict that the driver is in a high-fatigue state.
Alarm notification: based on the driver fatigue prediction, the system generates detailed fatigue status reports describing the driver's fatigue level, such as "light fatigue", "medium fatigue" or "high fatigue". If the system judges that the fatigue degree of the driver reaches the dangerous level, the system can automatically send an alarm notification to remind the driver to rest or take other necessary measures.
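The mapping from facial features to a fatigue level can be sketched as simple threshold rules; every threshold and level name below is an illustrative placeholder standing in for the trained decision tree or random forest described above:

```python
def fatigue_level(perclos, yawns_per_min, longest_closure_s):
    """Toy stand-in for the pre-trained model: combine eye-closure
    statistics and yawning frequency into a fatigue level label.
    All thresholds are illustrative, not from the patent."""
    if longest_closure_s > 2.0 or perclos > 0.4:
        return "high fatigue"
    if perclos > 0.15 or yawns_per_min >= 3:
        return "medium fatigue"
    if perclos > 0.08 or yawns_per_min >= 1:
        return "light fatigue"
    return "alert"
```

A real deployment would learn these decision boundaries from labeled driving data rather than hand-tuning them, but the rule structure mirrors how a shallow decision tree partitions the feature space.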
In some embodiments, evaluating a behavioral pattern of a driver based on identified features related to the face and other human body parts of the driver and generating a behavioral state report includes:
identifying other body part features other than the driver's face for use in determining whether the driver has stretch, grasp an object, or other non-driving related behavior;
Analyzing the position, holding state and moving mode of the hand on the steering wheel to judge whether the driver is performing direction conversion, adjusting the speed of the vehicle or other driving operations;
identifying the inclination and rotation angle of the upper body of the driver for evaluating whether the driver is looking at other locations in the vehicle, talking to other passengers, or taking items around;
evaluating whether the driver is likely to be performing distraction based on the identified hand and upper body features;
combining the identified human body part characteristics, inputting the human body part characteristics into a pre-trained decision tree or random forest model to obtain a prediction result of the driver behavior, wherein the prediction result of the driver behavior is subjected to behavior evaluation by integrating various human body part characteristics;
and generating a behavior state report according to the predicted result of the driver behavior, wherein the behavior state report is used for describing the behavior mode of the driver, and sending an alarm notification according to the behavior mode of the driver.
Specifically, the embodiment of the application also discloses how to evaluate the driver's behavior pattern based on the identified features related to the driver's face and other body parts, and how to generate a behavior state report accordingly. The generation process of the behavior state report may include the following:
Identifying body part features other than the face: the system recognizes other key parts of the driver's body, such as the hands, arms and upper body. By analyzing these parts, the system can determine whether the driver is stretching, gripping objects, or performing other non-driving-related activities. For example, if the system detects that the driver's hand has left the steering wheel and is moving toward another location in the vehicle, it may indicate that the driver is reaching for something.
Analyzing hand behaviors: the system further analyzes the position, grip state and movement pattern of the hands on the steering wheel. For example, if a hand grips the steering wheel and turns it quickly, it may indicate that the driver is making a sharp turn; if a hand rests on one side of the steering wheel and remains stationary, it may indicate that the driver is cruising.
Identifying upper-body behavior: the system identifies upper-body features of the driver, such as the tilt and rotation angle, to assess the driver's attention and behavior. For example, if the driver's upper body is turned significantly sideways or rearward, it may indicate that the driver is talking to a rear passenger or looking at another location in the vehicle.
Evaluating potential distraction behavior: in combination with the hand and upper-body features identified above, the system evaluates whether the driver may be engaging in distracting activities such as looking at a cell phone, adjusting the car audio system, or talking to a passenger.
Prediction and report generation of behavior patterns: all of the data identified and evaluated above is input into a pre-trained decision tree or random forest model. These models are trained to integrate the various body part features for behavior assessment. For example, if both the driver's hands and upper body show signs of distraction, the model may predict that the driver is engaging in a distracting activity.
Generating an alarm notification: based on the prediction result of the driver's behavior, the system generates a detailed behavior status report describing the driver's behavior pattern, such as "normal driving", "possibly distracted" or "sharp turn". If the system judges that the driver's behavior pattern may threaten road safety, it automatically sends an alarm notification reminding the driver to adjust the behavior in time.
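The hand and upper-body assessment described above can be sketched as a simple rule-based classifier standing in for the pre-trained model. The 45° torso-yaw threshold, the hand-speed threshold and the feature names are illustrative assumptions:

```python
# Rule-based stand-in for the behaviour-pattern model in the embodiment.
# hands_on_wheel: number of hands detected on the wheel (0-2)
# torso_yaw_deg:  upper-body rotation angle, degrees from forward
# hand_speed:     normalised wheel-motion speed (0-1)
# All thresholds are illustrative assumptions, not patent values.

def classify_behavior(hands_on_wheel: int, torso_yaw_deg: float,
                      hand_speed: float) -> str:
    """Combine body-part features into a behaviour-pattern label."""
    if hands_on_wheel == 0 or abs(torso_yaw_deg) > 45:
        return "possibly distracted"   # reaching away / turned to talk
    if hands_on_wheel == 2 and hand_speed > 0.8:
        return "sharp turn"            # both hands, rapid wheel motion
    return "normal driving"
```

The returned label corresponds to the behavior status report categories ("normal driving", "possibly distracted", "sharp turn") used in the examples above.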
In some embodiments, according to the assessed fatigue level and behavior pattern of the driver, generating a corresponding notification signal, screening the notification signal by using a predetermined signal processing mechanism, and obtaining a screened driver state notification signal, including:
setting a weight value for each evaluated state and behavior according to the driver's fatigue state report and behavior state report, setting a threshold value for different notification signals, and generating a notification signal when the accumulated weight value of the evaluation result exceeds the threshold value;
The signal processing module is used for carrying out signal screening according to the duration, frequency and intensity of the notification signals, classifying the screened notification signals, and determining the output form of the signals according to the classification of the notification signals; the filtered and classified notification signals are stored in a queue and sorted according to the priority of the notification signals to ensure that important notification signals are prioritized.
Specifically, based on the fatigue status report and the behavior status report of the driver, the system sets a weight value for each of the evaluated states and behaviors. For example, the duration of eye closure may have a higher weight because it is directly related to fatigue; while a simple hand movement may have a lower weight because it may be just a normal driving operation. Meanwhile, the system sets a threshold value for each notification signal, and only when the accumulated weight value of the evaluation result exceeds the set threshold value, the system generates the corresponding notification signal.
Further, the generated notification signal is sent to a dedicated signal processing module. The module filters based on the duration, frequency and strength of the notification signal. For example, if the driver's eye closures are brief but frequent, or a single eye closure lasts a long time, these signals may be identified as high-priority notification signals, while occasional hand movements or brief distraction may be identified as low-priority notification signals.
Further, the filtered notification signals are further classified. For example, a notification signal related to fatigue may be classified as "fatigue warning", and a notification signal related to distraction may be classified as "attention warning". Based on these classifications, the system determines the output form of the signal. For example, fatigue warnings may be presented simultaneously in an audio and visual manner, while attention warnings may be presented only in an audio manner.
Further, the filtered and categorized notification signals are stored in a queue. To ensure that important notification signals are prioritized, the system ranks the signals according to weight and urgency of each signal. For example, signals directly related to driver fatigue may be placed in the front of the queue, while signals related to occasional distraction of the driver may be placed in the rear of the queue.
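The weighting, thresholding and priority-queue mechanism described in this embodiment might be sketched as follows. The weight values, the 2.5 threshold and the two signal classes are illustrative assumptions:

```python
import heapq

# Sketch of the weighting / thresholding / priority-queue mechanism.
# Weights, the threshold and the priority ordering are assumptions.

WEIGHTS = {"eye_closure": 3.0, "yawning": 2.0, "hand_movement": 0.5}
THRESHOLD = 2.5   # signal generated only above this accumulated weight
PRIORITY = {"fatigue warning": 0, "attention warning": 1}  # lower = sooner

def generate_signals(events):
    """events: list of (category, feature names) per evaluated state.
    Returns screened signals ordered so important ones come first."""
    queue = []
    for seq, (category, features) in enumerate(events):
        weight = sum(WEIGHTS.get(f, 0.0) for f in features)
        if weight > THRESHOLD:                       # screening step
            heapq.heappush(queue, (PRIORITY[category], seq, category))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

ordered = generate_signals([
    ("attention warning", ["hand_movement"]),             # 0.5: screened out
    ("attention warning", ["hand_movement", "yawning"]),  # 2.5: not > 2.5
    ("fatigue warning", ["eye_closure", "yawning"]),      # 5.0: queued
])
```

Only the fatigue warning survives screening here, matching the rule that signals directly related to fatigue are placed at the front of the queue.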
In some embodiments, sending the filtered driver state notification signal to the vehicle-mounted display system of the vehicle at a predetermined time point or a predetermined time interval, so as to display the driver's current fatigue degree and behavior state on the vehicle side in real time, includes:
setting a timer module, wherein the timer module is used for triggering the sending operation of the notification signal according to a preset time point or time interval; inquiring a notification signal queue when the timer module is triggered, and checking whether a notification signal to be sent exists or not;
A receiving module is set for the vehicle-mounted display system and is used for receiving the notification signal sent by the notification system and converting the notification signal into a visual prompt or alarm; according to the fatigue and the severity of the behavior state of the driver, determining the display form of the prompt or the alarm, and setting a region or an icon for each state on a vehicle-mounted display system for displaying the corresponding fatigue or behavior state.
Specifically, a timer module is set in the system; this module can be triggered at a predetermined time interval or at a specific time point. For example, the notification signal queue may be checked every 10 seconds. When the predetermined time point or the set time interval is reached, the timer module is automatically triggered and begins querying the notification signal queue.
Further, once the timer module is triggered, the system begins to query the notification signal queue, checking whether there are notification signals to be sent in the queue. These signals may include fatigue alarms, distraction alarms, etc. for the driver. If the notification signals to be sent are in the queue, the system selects and sends the notification signal with the highest priority according to the set priority order.
Further, the vehicle-mounted display system is provided with a special receiving module, and the function of the special receiving module is to receive and process notification signals from the notification system. When the receiving module receives the notification signal, it will convert the signal into a corresponding visual cue or alarm.
Further, depending on the severity of the driver's fatigue and behavior state, the system may determine the presentation of the alert or prompt. For example, a slight fatigue may trigger only a simple visual cue, while a severe fatigue may trigger a striking alarm. The vehicle-mounted display system sets a special area or icon for each state. For example, fatigue alarms may be presented above the screen, while distraction alarms may be presented on the right side of the screen.
For example, in one example, when driver fatigue reaches a moderate level, the vehicle-side display system may display a yellow warning icon in the center area, accompanied by a mild audible cue. As fatigue increases further to a high level, the warning icon may become red with a more urgent audible alert. In addition, for distraction, such as a driver turning around to talk to a rear passenger, the vehicle-side display system may present a turning head icon on the right side to indicate that the driver should focus.
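A minimal sketch of the timer-driven sending step and the display side's class-to-presentation mapping, assuming the 10-second interval and the icon/region descriptions used in the examples above (the mapping entries are illustrative, not specified values):

```python
# Timer-driven sending loop plus the receiving module's mapping from
# signal class to visual/audio presentation. Interval and presentation
# strings are assumptions drawn from the worked example in the text.

INTERVAL_S = 10   # timer period assumed from the "every 10 seconds" example

DISPLAY_FORM = {  # receiving-module mapping: signal class -> presentation
    "medium fatigue": ("yellow icon, center", "mild audio cue"),
    "high fatigue":   ("red icon, center", "urgent audio alert"),
    "distraction":    ("head-turn icon, right side", None),
}

def on_timer_tick(signal_queue):
    """Called every INTERVAL_S seconds; sends the highest-priority
    signal, which sits at the front of the already-sorted queue."""
    if not signal_queue:
        return None          # nothing to send this tick
    signal = signal_queue.pop(0)
    return DISPLAY_FORM.get(signal)

shown = on_timer_tick(["high fatigue", "distraction"])
```

In a real system `on_timer_tick` would be wired to a periodic timer and the return value dispatched to the vehicle-mounted display rather than returned to the caller.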
Through the design scheme of this embodiment, the system can accurately display the driver's current fatigue degree and behavior state to the driver in real time, helping the driver make a timely response and ensuring road safety.
According to the technical solution provided by the embodiments of the application, the driver monitoring system can monitor the driver's behavior in real time, accurately identifying facial fatigue features, hand behaviors, and the inclination and rotation of the upper body. This real-time monitoring ensures instant feedback on driver behavior and lays a foundation for preventing potentially dangerous behavior in advance. For the driver's different behaviors, the system formulates a corresponding early-warning strategy for each behavior. Such refined strategies can judge the driver's actual condition more accurately and issue warnings matched to its severity, avoiding both over-intervention and missed alarms. When the system judges that an early-warning condition is met, it issues a warning based on the driver's behavior and can intelligently select a suitable warning mode in combination with the actual driving behavior and road scene. This intelligent approach ensures the accuracy and practicality of the warnings, and reduces the risk that drivers become annoyed by, or begin to ignore, warnings due to frequent or inaccurate alerts. The technical solution can therefore greatly improve driving safety: when the driver is fatigued, distracted, or otherwise likely to compromise safety, the system gives a timely and accurate warning, helping the driver correct the behavior and avoid possible traffic accidents, thereby protecting the driver and other road users.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 2 is a schematic structural diagram of a driver fatigue monitoring device based on image processing according to an embodiment of the present application. As shown in fig. 2, the image processing-based driver fatigue monitoring device includes:
a capturing module 201 configured to capture visual image data of a driver in the cockpit in real time in response to an activation operation of the driver monitoring system;
the optimizing module 202 is configured to perform preliminary image optimization on the acquired visual image data of the driver by using the DMS camera to obtain an optimized image to be processed;
an identification module 203 configured to perform deep processing on the image to be processed using a predetermined driver state analysis algorithm, so as to identify specific behavior features related to the driver's face and other body parts from the image to be processed;
a fatigue evaluation module 204 configured to evaluate a fatigue level of the driver based on the identified features related to the face of the driver and generate a fatigue status report;
A behavior evaluation module 205 configured to evaluate a behavior pattern of the driver based on the identified features related to the face and other human body parts of the driver and generate a behavior status report;
an integration module 206 configured to integrate the fatigue status report with the behavioral status report data to generate a complete driver status and behavioral comprehensive report;
a screening module 207 configured to generate a corresponding notification signal according to the assessed fatigue level and behavior pattern of the driver, and screen the notification signal by using a predetermined signal processing mechanism to obtain a screened driver status notification signal;
the sending module 208 is configured to send the filtered driver status notification signal to a vehicle-mounted display system of the vehicle according to a predetermined time point or a predetermined time interval, so as to display the current fatigue and the behavior status of the driver on the vehicle-mounted side in real time.
In some embodiments, the optimization module 202 of fig. 2 performs noise filtering on the acquired visual image data to remove random noise and environmental interference from the visual image data; adjusts the brightness and contrast of the visual image data according to the current ambient light conditions using an adaptive brightness and contrast adjustment algorithm, so that the driver's face and other relevant parts can be clearly distinguished; enhances edge and contour information in the visual image data using edge detection techniques to identify the driver's facial and body part features; performs scale normalization on the visual image data to adjust the images to a uniform size and proportion; and adjusts the color balance of the visual image data using a color correction algorithm, and performs format conversion and compression on the optimized visual image data to obtain the image to be processed.
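For illustration, the preliminary optimization steps can be sketched on a one-dimensional grayscale row. A real DMS pipeline would apply the OpenCV equivalents (e.g. median blur, linear gain/bias adjustment, resize) to full frames; the gain, bias and target-width values here are assumptions:

```python
# Sketch of the optimisation pipeline on a 1-D grayscale row:
# median denoising -> brightness/contrast adjustment -> scale
# normalisation. Parameter values are illustrative assumptions.

def denoise(row):
    """3-tap median filter to suppress random noise."""
    out = row[:]
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out

def adjust(row, gain=1.2, bias=10):
    """Linear brightness/contrast adjustment, clipped to 0-255."""
    return [min(255, max(0, int(v * gain + bias))) for v in row]

def normalize_width(row, width=4):
    """Nearest-neighbour resample so all frames share one scale."""
    return [row[int(i * len(row) / width)] for i in range(width)]

row = [10, 200, 12, 14, 16, 255, 18, 20]   # noisy input row
processed = normalize_width(adjust(denoise(row)))
```

The impulse values 200 and 255 are suppressed by the median step before the gain/bias and resampling stages, mirroring the order of operations in the embodiment.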
In some embodiments, the recognition module 203 of fig. 2 performs feature extraction on the image to be processed using a deep learning model, automatically recognizes the driver's facial key points, and recognizes the driver's facial expression based on the facial key points so as to determine the driver's emotion and fatigue state; and analyzes the non-face area in the image to be processed using a target detection algorithm to identify the driver's behavior pattern, and identifies the driver's dynamic behavior using time-series analysis over consecutive image frames so as to judge the driver's fatigue degree and distraction degree.
In some embodiments, fatigue evaluation module 204 of fig. 2 identifies eye features of the driver's face to determine whether the driver's eyes are closed, or whether the driver is dozing or distracted; calculates the driver's eye-closure time and frequency from the facial key points so as to judge the driver's fatigue condition according to the eye-closure time and frequency; identifies the driver's mouth features to evaluate the driver's fatigue degree, and analyzes changes in the driver's facial expression as a basis for fatigue judgment; combines the identified facial features and inputs them into a pre-trained decision tree or random forest model to obtain a prediction result of driver fatigue, wherein the prediction result integrates the various facial features for fatigue evaluation; and generates a fatigue state report according to the driver fatigue prediction result, wherein the fatigue state report describes the driver's fatigue level, and sends an alarm notification according to the driver's fatigue level.
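Eye-closure time and frequency from facial key points are commonly computed with the eye-aspect-ratio (EAR) heuristic, sketched below. The six-landmark convention and the 0.2 closure threshold are widely used defaults, not values from this disclosure:

```python
import math

# Eye-closure statistics from facial key points via the eye-aspect-ratio
# (EAR) heuristic. Landmark layout and threshold are assumptions.

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye; low EAR ~ closed eye."""
    d = math.dist
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

def closure_stats(ear_series, fps=30, threshold=0.2):
    """Per-frame EAR values -> (total closed seconds, closure events)."""
    closed = [e < threshold for e in ear_series]
    seconds = sum(closed) / fps
    # count rising edges: a new closure event starts when the eye goes
    # from open to closed
    events = sum(1 for i, c in enumerate(closed)
                 if c and (i == 0 or not closed[i - 1]))
    return seconds, events

# A wide-open synthetic eye: tall relative to its width -> high EAR.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
ear_open = eye_aspect_ratio(open_eye)
```

`closure_stats` yields exactly the two quantities the embodiment feeds to the fatigue judgment: total eye-closure time and closure frequency.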
In some embodiments, the behavior assessment module 205 of FIG. 2 identifies body part features other than the driver's face, for use in determining whether the driver is stretching, grasping objects, or performing other non-driving-related behavior; analyzes the position, grip state and movement pattern of the hands on the steering wheel to judge whether the driver is changing direction, adjusting vehicle speed, or performing other driving operations; identifies the inclination and rotation angle of the driver's upper body, for use in evaluating whether the driver is looking at other locations in the vehicle, talking to other passengers, or reaching for nearby items; evaluates whether the driver is likely to be engaging in distracting behavior based on the identified hand and upper-body features; combines the identified body part features and inputs them into a pre-trained decision tree or random forest model to obtain a prediction result of the driver behavior, wherein the prediction result integrates the various body part features for behavior evaluation; and generates a behavior state report according to the prediction result of the driver behavior, wherein the behavior state report describes the driver's behavior pattern, and sends an alarm notification according to the driver's behavior pattern.
In some embodiments, the screening module 207 of fig. 2 sets a weight value for each of the estimated states and behaviors based on the fatigue status report and the behavior status report of the driver, and sets a threshold value for different notification signals, and generates notification signals when the weight value of the estimated result exceeds the threshold value; the signal processing module is used for carrying out signal screening according to the duration, frequency and intensity of the notification signals, classifying the screened notification signals, and determining the output form of the signals according to the classification of the notification signals; the filtered and classified notification signals are stored in a queue and sorted according to the priority of the notification signals to ensure that important notification signals are prioritized.
In some embodiments, the sending module 208 of fig. 2 sets a timer module for triggering the sending operation of the notification signal at a predetermined point in time or interval; inquiring a notification signal queue when the timer module is triggered, and checking whether a notification signal to be sent exists or not; a receiving module is set for the vehicle-mounted display system and is used for receiving the notification signal sent by the notification system and converting the notification signal into a visual prompt or alarm; according to the fatigue and the severity of the behavior state of the driver, determining the display form of the prompt or the alarm, and setting a region or an icon for each state on a vehicle-mounted display system for displaying the corresponding fatigue or behavior state.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 is a schematic structural diagram of the electronic device 3 provided in the embodiment of the present application. As shown in fig. 3, the electronic apparatus 3 of this embodiment includes: a processor 301, a memory 302 and a computer program 303 stored in the memory 302 and executable on the processor 301. The steps of the various method embodiments described above are implemented when the processor 301 executes the computer program 303. Alternatively, the processor 301, when executing the computer program 303, performs the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 303 may be partitioned into one or more modules/units, which are stored in the memory 302 and executed by the processor 301 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution process of the computer program 303 in the electronic device 3.
The electronic device 3 may be an electronic device such as a desktop computer, a notebook computer, a palm computer, or a cloud server. The electronic device 3 may include, but is not limited to, a processor 301 and a memory 302. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the electronic device 3 and does not constitute a limitation of the electronic device 3, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may also include an input-output device, a network access device, a bus, etc.
The processor 301 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 302 may be an internal storage unit of the electronic device 3, for example, a hard disk or a memory of the electronic device 3. The memory 302 may also be an external storage device of the electronic device 3, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 3. Further, the memory 302 may also include both an internal storage unit and an external storage device of the electronic device 3. The memory 302 is used to store computer programs and other programs and data required by the electronic device. The memory 302 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated by example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow in the methods of the above embodiments, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program may implement the steps of the respective method embodiments described above when executed by a processor. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A driver fatigue monitoring method based on image processing, comprising:
capturing visual image data of a driver in the cockpit in real time in response to an activation operation of the driver monitoring system;
performing preliminary image optimization on the acquired visual image data of the driver by using a DMS camera to obtain an optimized image to be processed;
performing deep processing on the image to be processed by using a predetermined driver state analysis algorithm so as to identify specific behavior characteristics related to the face of the driver and other human body parts from the image to be processed;
Evaluating the fatigue level of the driver based on the identified features related to the face of the driver and generating a fatigue status report;
evaluating a behavior pattern of the driver based on the identified features related to the face and other body parts of the driver and generating a behavior state report;
data integration is carried out on the fatigue state report and the behavior state report, and a complete driver state and behavior comprehensive report is generated;
generating corresponding notification signals according to the assessed fatigue level and behavior pattern of the driver, and screening the notification signals by utilizing a predetermined signal processing mechanism to obtain a screened driver state notification signal;
and sending the screened driver state notification signal to a vehicle-mounted display system of the vehicle according to a preset time point or a preset time interval so as to display the current fatigue degree and the behavior state of the driver at the vehicle-mounted side in real time.
2. The method according to claim 1, wherein the performing preliminary image optimization on the acquired visual image data of the driver by using the DMS camera to obtain an optimized image to be processed includes:
noise filtering is carried out on the acquired visual image data so as to remove random noise and environmental interference in the visual image data;
Adjusting the brightness and the contrast of the visual image data according to the current ambient light conditions by utilizing an adaptive brightness and contrast adjustment algorithm so as to enable the face and other relevant parts of the driver to be clearly distinguished;
enhancing edge and contour information in the visual image data using edge detection techniques to identify facial and body part features of the driver;
performing scale normalization on the visual image data to adjust the size and proportion of the image to be uniform;
and adjusting the color balance of the visual image data by utilizing a color correction algorithm, and performing format conversion and compression on the optimized visual image data to obtain an image to be processed.
3. The method according to claim 1, wherein said subjecting the image to be processed to a depth processing using a predetermined driver state analysis algorithm to identify specific behavioral characteristics related to the face of the driver and other human body parts from the image to be processed comprises:
extracting features of the image to be processed by using a deep learning model, automatically identifying facial key points of a driver, and identifying facial expressions of the driver based on the facial key points so as to judge emotion and fatigue states of the driver;
And analyzing the non-face area in the image to be processed by using a target detection algorithm, identifying the behavior mode of the driver, and identifying the dynamic behavior of the driver by using time sequence analysis and combining continuous image frames so as to judge the fatigue degree and the distraction degree of the driver.
4. A method according to claim 3, wherein the evaluating the fatigue level of the driver based on the identified characteristics relating to the face of the driver and generating a fatigue status report comprises:
identifying eye features of the driver's face to determine whether the driver's eyes are closed and whether the driver is dozing or distracted;
calculating the eye closing time and frequency of a driver through the face key points so as to judge the fatigue condition of the driver according to the eye closing time and frequency;
identifying the mouth characteristics of a driver, evaluating the fatigue degree of the driver, and analyzing the facial expression change of the driver to be used as a fatigue judgment basis;
combining the identified facial features and inputting them into a pre-trained decision tree or random forest model to obtain a driver fatigue prediction result, wherein the driver fatigue prediction result is obtained by performing fatigue evaluation over the combined facial features;
And generating a fatigue state report according to the fatigue prediction result of the driver, wherein the fatigue state report is used for describing the fatigue level of the driver, and sending an alarm notification according to the fatigue level of the driver.
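Computing eye-closure time and frequency over a frame window corresponds to the standard PERCLOS metric; a sketch with illustrative thresholds (the patent specifies neither the metric name nor threshold values):

```python
def perclos(closed_flags):
    """Fraction of frames with eyes closed over a sliding window (PERCLOS)."""
    return sum(closed_flags) / len(closed_flags)

def fatigue_level(p, blink_rate_hz):
    """Map PERCLOS and blink frequency to a coarse fatigue level.
    Threshold values here are illustrative, not from the patent."""
    if p >= 0.40:
        return "severe"
    if p >= 0.15 or blink_rate_hz > 0.5:
        return "moderate"
    return "alert"

window = [0] * 70 + [1] * 30   # eyes closed in 30% of the last 100 frames
```

The resulting level would feed the fatigue state report and drive the alarm notification described above.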
5. A method according to claim 3, wherein the evaluating the driver's behavioral patterns and generating behavioral state reports based on the identified characteristics relating to the driver's face and other body parts comprises:
identifying human body part features other than the driver's face, for use in determining whether the driver is stretching, grasping an object, or performing other non-driving-related behavior;
analyzing the position, grip state and movement pattern of the hands on the steering wheel to determine whether the driver is changing direction, adjusting the vehicle speed, or performing other driving operations;
identifying the inclination and rotation angle of the driver's upper body, for use in evaluating whether the driver is looking at other locations in the vehicle, talking to other passengers, or reaching for nearby items;
evaluating whether the driver is likely to be distracted based on the identified hand and upper-body features;
combining the identified human body part features and inputting them into a pre-trained decision tree or random forest model to obtain a driver behavior prediction result, wherein the driver behavior prediction result is obtained by performing behavior evaluation over the combined human body part features;
And generating a behavior state report according to the predicted result of the driver behavior, wherein the behavior state report is used for describing the behavior mode of the driver, and sending an alarm notification according to the behavior mode of the driver.
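Claims 4 and 5 both feed combined features into a pre-trained decision tree or random forest. A sketch with scikit-learn on a hypothetical 4-dimensional feature vector — the feature names and synthetic training data are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: [hand_on_wheel, torso_tilt_deg, head_yaw_deg, reaching_flag]
rng = np.random.default_rng(0)
normal = np.column_stack(
    [np.ones(50), rng.normal(0, 3, 50), rng.normal(0, 5, 50), np.zeros(50)])
distracted = np.column_stack(
    [np.zeros(50), rng.normal(20, 3, 50), rng.normal(40, 5, 50), np.ones(50)])
X = np.vstack([normal, distracted])
y = np.array([0] * 50 + [1] * 50)   # 0 = normal driving, 1 = distracted

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0, 22, 38, 1]])   # hands off wheel, leaning, head turned
```

In the patented system the model would be trained offline on labeled driver footage and only `predict` would run in the vehicle.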
6. The method according to claim 1, wherein the generating a corresponding notification signal according to the assessed fatigue level and behavior pattern of the driver, screening the notification signal using a predetermined signal processing mechanism, to obtain a screened driver status notification signal, comprises:
setting a weight value for each evaluated state and behavior according to the fatigue state report and the behavior state report of the driver, setting a threshold for each type of notification signal, and generating the notification signal when the weight value of an evaluation result exceeds the corresponding threshold;
the signal processing module is used for carrying out signal screening according to the duration, frequency and intensity of the notification signals, classifying the screened notification signals, and determining the output form of the signals according to the classification of the notification signals; the filtered and classified notification signals are stored in a queue and sorted according to the priority of the notification signals to ensure that important notification signals are processed preferentially.
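The weight/threshold filtering and priority-ordered queue of claim 6 can be sketched with a heap; the signal kinds, weights, and thresholds below are illustrative assumptions:

```python
import heapq
from itertools import count

THRESHOLDS = {"fatigue": 0.6, "distraction": 0.5}   # illustrative per-signal thresholds
PRIORITY = {"fatigue": 0, "distraction": 1}          # lower value = processed first

class NotificationQueue:
    def __init__(self):
        self._heap, self._seq = [], count()   # seq keeps FIFO order within a priority

    def submit(self, kind, weight):
        """Generate a signal only when the evaluated weight exceeds its threshold."""
        if weight > THRESHOLDS[kind]:
            heapq.heappush(self._heap, (PRIORITY[kind], next(self._seq), kind, weight))

    def pop(self):
        """Hand out the most important pending notification, or None."""
        return heapq.heappop(self._heap)[2:] if self._heap else None

q = NotificationQueue()
q.submit("distraction", 0.7)
q.submit("fatigue", 0.9)
q.submit("fatigue", 0.3)   # below threshold: filtered out, never enqueued
```

Popping returns the fatigue signal before the earlier-submitted distraction signal, matching the claim's requirement that important notifications are processed preferentially.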
7. The method according to claim 6, wherein the step of transmitting the screened driver status notification signal to a vehicle-mounted display system of the vehicle at a predetermined time point or time interval, so as to display the current fatigue level and behavior state of the driver on the vehicle-mounted side in real time, comprises:
setting a timer module, wherein the timer module is used for triggering the sending operation of the notification signal according to a preset time point or time interval; inquiring a notification signal queue when the timer module is triggered, and checking whether a notification signal to be sent exists or not;
a receiving module is set for the vehicle-mounted display system and is used for receiving the notification signal sent by the notification system and converting the notification signal into a visual prompt or alarm; according to the fatigue and the severity of the behavior state of the driver, determining the display form of the prompt or the alarm, and setting a region or an icon for each state on a vehicle-mounted display system for displaying the corresponding fatigue or behavior state.
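The timer-triggered send path of claim 7 reduces to a periodic tick that drains the pending queue into the display's receiving callback. A deterministic sketch (in a real system `tick()` would be driven by a periodic timer such as `threading.Timer`; the callback interface is an assumption):

```python
from collections import deque

class Dispatcher:
    """Forwards pending notifications to the vehicle-mounted display's
    receiving module, modeled here as a plain callback."""
    def __init__(self, display):
        self.pending = deque()
        self.display = display

    def tick(self):
        # Invoked at each preset time point/interval: check the queue,
        # send every waiting notification to the display.
        while self.pending:
            self.display(self.pending.popleft())

shown = []
d = Dispatcher(shown.append)   # the display side just records what it receives
d.pending.extend([("fatigue", "severe"), ("behavior", "distracted")])
d.tick()
```

The display callback is where the claimed conversion into a visual prompt or alarm, and the per-state region or icon, would live.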
8. A driver fatigue monitoring device based on image processing, characterized by comprising:
a capturing module configured to capture visual image data of a driver in the cockpit in real time in response to an activation operation of the driver monitoring system;
The optimization module is configured to perform preliminary image optimization on the acquired visual image data of the driver by using the DMS camera to obtain an optimized image to be processed;
an identification module configured to perform in-depth processing on the image to be processed using a predetermined driver state analysis algorithm so as to identify specific behavior features related to the face of the driver and other human body parts from the image to be processed;
a fatigue evaluation module configured to evaluate a fatigue level of the driver based on the identified features related to the face of the driver and generate a fatigue status report;
a behavior evaluation module configured to evaluate a behavior pattern of the driver based on the identified features related to the face and other human body parts of the driver and generate a behavior status report;
the integration module is configured to integrate the fatigue state report and the behavior state report in data to generate a complete driver state and behavior comprehensive report;
the screening module is configured to generate corresponding notification signals according to the evaluated fatigue level and behavior pattern of the driver, and to screen the notification signals using a predetermined signal processing mechanism to obtain screened driver state notification signals;
The sending module is configured to send the screened driver state notification signal to a vehicle-mounted display system of the vehicle according to a preset time point or a preset time interval so as to display the current fatigue degree and the behavior state of the driver at the vehicle-mounted side in real time.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 7.
CN202311402204.3A 2023-10-26 2023-10-26 Driver fatigue monitoring method and device based on image processing and storage medium Pending CN117333853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311402204.3A CN117333853A (en) 2023-10-26 2023-10-26 Driver fatigue monitoring method and device based on image processing and storage medium

Publications (1)

Publication Number Publication Date
CN117333853A 2024-01-02

Family

ID=89275369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311402204.3A Pending CN117333853A (en) 2023-10-26 2023-10-26 Driver fatigue monitoring method and device based on image processing and storage medium

Country Status (1)

Country Link
CN (1) CN117333853A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination