CN110147738B - Driver fatigue monitoring and early warning method and system - Google Patents


Info

Publication number
CN110147738B
CN110147738B (application CN201910352155.4A)
Authority
CN
China
Prior art keywords
fatigue
driver
head
human body
hand
Prior art date
Legal status
Active
Application number
CN201910352155.4A
Other languages
Chinese (zh)
Other versions
CN110147738A (en)
Inventor
张建
王川
王志鹏
彭军
徐胜航
武光江
Current Assignee
Chinese Peoples Liberation Army Naval Characteristic Medical Center
Original Assignee
Chinese Peoples Liberation Army Naval Characteristic Medical Center
Priority date
Filing date
Publication date
Application filed by Chinese Peoples Liberation Army Naval Characteristic Medical Center filed Critical Chinese Peoples Liberation Army Naval Characteristic Medical Center
Priority to CN201910352155.4A
Publication of CN110147738A
Application granted
Publication of CN110147738B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness


Abstract

The invention provides a driver fatigue monitoring and early warning method and system. In addition to collecting images and outputting fatigue early warning information, the method comprises the following steps: performing human body detection on the acquired image; detecting a human skeleton model; performing head and/or hand positioning; judging the position of the head and/or hand; and judging fatigue according to the human skeleton model and/or the head and/or hand positions. The method and system can effectively monitor driver fatigue and give early warning under the closed-space driving conditions characteristic of deep-sea long-range navigation equipment, overcoming the drawback that fatigue monitoring based solely on facial image processing is unsuitable for deep-sea long-range navigation personnel. In addition, the neural networks of the invention are trained repeatedly on abundant samples, so the relevant human body parts are detected with higher precision and speed and better robustness.

Description

Driver fatigue monitoring and early warning method and system
Technical Field
The invention relates to the field of safe driving, in particular to a method and a system for monitoring and early warning fatigue of a driver.
Background
A driver who remains in the driving state for a long time is prone to fatigue, and fatigue driving easily leads to serious accidents. Effectively detecting whether the driver is in a fatigue state and reminding the driver when fatigue driving occurs can therefore effectively prevent accidents. Various technologies currently exist for detecting fatigue driving, such as detection based on facial expression recognition, detection based on continuous driving-time monitoring, and detection based on vehicle data.
The invention patent application No. 2016100569844 discloses a fatigue driving detection method and apparatus. The method includes receiving a captured frontal image of the driver; performing face detection in the acquired image; locating the eyes within the detected face; locating the detected eyes and identifying their state based on a deep neural network model; and tracking eye-state changes across multiple frames to judge whether the driver is fatigued. For a driver in the closed space of deep-sea long-range equipment, however, the equipment can realize semi-automatic or even fully automatic driving, without the driver constantly focusing attention or gripping the steering wheel. The driver is therefore allowed to drive for longer periods or to leave the driving area briefly, and when fatigued is more likely to show changes in body posture, such as leaning the head back or lying on the driving console. Apparatus and methods that only acquire and process the driver's face are thus unsuitable for detecting driver fatigue in the closed space of deep-sea long-range navigation equipment.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a driver fatigue monitoring and early warning method and system that can effectively monitor the driving state of the driver and give fatigue driving warnings under the closed-space driving conditions characteristic of deep-sea long-range equipment.
A driver fatigue monitoring and early warning method comprises the steps of collecting images and outputting fatigue early warning information, and is characterized in that: further comprising:
carrying out human body detection on the acquired image;
detecting a human skeleton model;
performing head and/or hand positioning;
judging the position of the head and/or the hand;
and judging fatigue according to the human skeleton model and/or the head and/or hand positions.
Preferably, the human body detection of the collected image is based on a deep learning technology, and a lightweight convolutional neural network is adopted.
Preferably, in any of the above schemes, the lightweight convolutional neural network is trained end-to-end in a single stage.
Preferably, in any of the above schemes, the lightweight convolutional neural network uses 2 different 3 × 3 convolution kernels to convolve the feature map.
Preferably, in any of the above schemes, one of the 2 different 3 × 3 convolution kernels outputs the class probability of a human body, and the other outputs the position information of the human body bounding box.
In any of the above embodiments, preferably, the results output by the 3 × 3 convolution kernels are combined using a non-maximum suppression method to form a human detection result.
In any of the above schemes, preferably, when a human body is detected in the acquired image, human body skeleton model detection is performed on the image.
Preferably, in any of the above schemes, the detecting the human skeleton model includes:
processing the input picture by utilizing a deep convolutional neural network, and outputting the positions of all parts of the upper body of the human body in the picture and the corresponding confidence coefficients;
predicting the relevance vector field between the parts of the upper part of the human body to express the connection relation between the parts;
and performing inference on the part positions and the inter-part association vector fields with a greedy algorithm to obtain the human skeleton model.
In any of the above schemes, preferably, the outputting positions of the parts of the upper body of the human body in the picture includes:
extracting the central position of at least one part of eyes, ears, nose, neck, shoulder joints, elbow joints, wrist joints and hip joints of a human body as a key point;
and outputting the positions of the extracted key points.
In any of the above embodiments, it is preferable that whether or not the driver is in a fatigue state is determined by a change in posture of the skeleton model.
In any of the above embodiments, preferably, when a human body is detected in the captured image, the head and/or hand of the captured image is positioned.
Preferably, in any of the above schemes, the head and/or hand positioning detection is realized with a lightweight target detection network using a visual detection technology.
Preferably, in any of the above schemes, the head and/or hand positioning detection includes bounding box regression and classification confidence regression.
In any of the above schemes, preferably, when head and/or hand positioning detection is performed on the acquired image, the image is used as an input of a target detection network, and is divided into a plurality of grids, and a plurality of frames and corresponding classification confidence degrees are predicted for each grid.
In any of the above schemes, preferably, each predicted frame comprises four parameters, denoted as a bounding box (x, y, w, h), where (x, y) is the center of the frame relative to its grid cell and (w, h) are the width and height of the frame relative to the full image.
Preferably, in any of the above schemes, the classification confidence is a product of a probability of each class, a probability of an object, and an overlap degree (IOU).
Preferably, in any of the above schemes, the training of the lightweight target detection network, and the establishing of the sample set for training the lightweight target detection network includes:
selecting enough images containing targets and manually labeling the targets in the images;
training, detecting and evaluating pictures which comprise targets and contain different complex backgrounds;
according to the training and detection evaluation results, updating processing including adding and deleting is carried out on the selected images;
and training, detecting, evaluating and updating the updated image again until the most appropriate image forming sample set is selected.
Preferably, in any of the above schemes, the sample set and the lightweight target detection network are adopted to train a detection model.
Preferably, in any of the above schemes, the trained detection model is trained and tested multiple times, and the detection model with high detection-evaluation confidence and correct results is selected.
Preferably, in any of the above schemes, the original training model and the training parameters are fine-tuned according to the training and detection results to obtain the most suitable detection model, thereby achieving the best detection effect.
Preferably, in any of the above aspects, the determining the position of the head and/or the hand comprises:
calculating the center position of the head of the person under normal conditions;
detecting whether hand information appears in the image;
the center position of the hand appearing in the image is calculated.
In any of the above schemes, preferably, the center position of the human head under normal conditions is obtained statistically by detecting the head position information in each frame of image.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the distance between the center positions of the head and the hand remains below a set distance threshold for a continuous period of time.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the movement range of the hand center position over a continuous period of time is smaller than a set threshold.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the movement range of the head center position over a continuous period of time is smaller than a set threshold.
Preferably, in any of the above schemes, the warning information is output when the driver is judged to be in a fatigue state.
The invention also provides a driver fatigue monitoring and early warning system for implementing the above driver fatigue monitoring and early warning method, the system comprising an image acquisition device, a processing device and an output device, wherein the processing device executes the following steps of the method:
carrying out human body detection on the acquired image;
detecting a human skeleton model;
performing head and/or hand positioning;
judging the position of the head and/or the hand;
and judging fatigue according to the human skeleton model and/or the head and/or hand positions.
Preferably, the human body detection of the collected image is realized by adopting a lightweight convolutional neural network based on a deep learning technology.
In any of the above schemes, preferably, when a human body is detected in the acquired image, human body skeleton model detection is performed on the image.
Preferably, in any of the above schemes, the detecting the human skeleton model includes:
processing the input picture by utilizing a deep convolutional neural network, and outputting the positions of all parts of the upper body of the human body in the picture and the corresponding confidence coefficients;
predicting the relevance vector field between the parts of the upper part of the human body to express the connection relation between the parts;
and performing inference on the part positions and the inter-part association vector fields with a greedy algorithm to obtain the human skeleton model.
In any of the above embodiments, it is preferable that whether or not the driver is in a fatigue state is determined by a change in posture of the skeleton model.
In any of the above embodiments, preferably, when a human body is detected in the captured image, the head and/or hand of the captured image is positioned.
Preferably, in any of the above schemes, the head and/or hand positioning detection is realized with a lightweight target detection network using a visual detection technology.
Preferably, in any of the above aspects, the determining the position of the head and/or the hand comprises:
calculating the center position of the head of the person under normal conditions;
detecting whether hand information appears in the image;
the center position of the hand appearing in the image is calculated.
In any of the above schemes, preferably, the center position of the human head under normal conditions is obtained statistically by detecting the head position information in each frame of image.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the distance between the center positions of the head and the hand remains below a set distance threshold for a continuous period of time.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the movement range of the hand center position over a continuous period of time is smaller than a set threshold.
In any of the above embodiments, preferably, the driver is judged to be in a fatigue state when the movement range of the head center position over a continuous period of time is smaller than a set threshold.
Preferably, in any of the above aspects, the processing device outputs the warning information through the output device when determining that the driver is in a fatigue state.
The driver fatigue monitoring and early warning method and system first use a lightweight convolutional neural network to judge whether a human body appears in the acquired image. When it does, a human skeleton model is extracted and posture changes of the skeleton model are used to judge whether the driver is in a fatigue state, and/or head and/or hand positioning detection is realized with a lightweight target detection network and the position information of the head and/or hand is used to judge whether the driver is in a fatigue state. When the driver is judged to be fatigued, early warning information is output to remind the driver. The method and system can effectively monitor driver fatigue and give early warning under the closed-space driving conditions characteristic of deep-sea long-range navigation equipment, overcoming the drawback that fatigue monitoring based solely on facial image processing is unsuitable for deep-sea long-range navigation personnel. In addition, the neural networks of the invention are trained repeatedly on abundant samples, so the relevant human body parts are detected with higher precision and speed and better robustness.
Drawings
Fig. 1 is a flowchart illustrating a driver fatigue monitoring and warning method according to a preferred embodiment of the present invention.
Fig. 2A-2C are diagrams illustrating the detection effect of the human skeleton model in three different fatigue states according to the embodiment shown in fig. 1 of the driver fatigue monitoring and warning method of the present invention.
Fig. 3A-3C are diagrams illustrating the positioning effect of the human head and/or hand in three different fatigue states according to the embodiment of fig. 1 of the driver fatigue monitoring and warning method of the present invention.
Fig. 4 is a schematic structural diagram of a preferred embodiment of the driver fatigue monitoring and warning system according to the present invention.
Detailed Description
For a better understanding of the present invention, reference will now be made in detail to the following examples.
Example 1
A driver fatigue detection early warning method comprises the steps of collecting images and outputting fatigue early warning information, and further comprises the following steps:
carrying out human body detection on the acquired image;
detecting a human skeleton model;
performing head and/or hand positioning;
judging the position of the head and/or the hand;
and judging fatigue according to the human skeleton model and/or the head and/or hand positions.
The specific flow is shown in Fig. 1. The method starts, and step S1 is executed: collect an image. Step S21 is executed: perform human body detection on the acquired image. Step S22 is executed: judge, according to the human body detection result, whether a human body is detected in the acquired image.
In step S21, human body detection is performed on the acquired image based on a deep learning technique, using a lightweight convolutional neural network (Fast R-CNN) trained end-to-end in a single stage. The network uses 2 different 3 × 3 convolution kernels to convolve the feature map: one 3 × 3 kernel outputs the class probability of a human body, and the other outputs the position information of the human body bounding box. Finally, the results output by the 3 × 3 convolution kernels are merged using a non-maximum suppression method to form the human body detection result.
The steps of human body detection with Fast R-CNN are as follows:
determining 1000–2000 candidate boxes in the image using selective search;
inputting the whole picture into the CNN to obtain a feature map;
finding the mapping range (patch) of each candidate box on the feature map, and inputting the patch into an SPP layer (spatial pyramid pooling layer) and subsequent layers as the convolution feature of that candidate box;
judging with a classifier whether the features extracted from each candidate box belong to a specific class;
and, for a candidate box belonging to a specific class, further adjusting its position with a regressor.
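The non-maximum suppression merging mentioned in step S21 can be sketched as follows. This is a minimal illustrative sketch, not code from the patent: `iou` and `nms` are hypothetical helper names, and the standard greedy IoU-based formulation is assumed.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap it beyond the threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

With two near-duplicate human detections and one distant box, only one of the duplicates survives, which is the merging behavior the detection step relies on.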
When it is judged in step S22 that a human body is detected in the captured image, step S31 is performed: and carrying out human skeleton model detection on the image.
In step S31, the human skeleton model detection includes the steps of:
s311, processing the input picture by using a deep convolutional neural network, and outputting the positions of all parts of the upper body of the human body in the picture and the corresponding confidence coefficients;
s312, predicting the relevance vector field between the parts of the upper part of the human body to express the connection relation between the parts;
s313, a greedy algorithm is adopted to carry out reasoning on the positions of all the parts and the associated vector field among the parts to obtain the human skeleton model.
In step S311: outputting the positions of all parts of the upper part of the human body in the picture comprises the following steps:
extracting the central position of at least one part of eyes, ears, nose, neck, shoulder joints, elbow joints, wrist joints and hip joints of a human body as a key point;
and outputting the positions of the extracted key points.
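The greedy inference of step S313 can be sketched as below. `greedy_connect` is a hypothetical helper, not the patent's implementation: it assumes the part-affinity scores (the integrals over the association vector fields between two adjacent parts) are already computed, and simply pairs candidates of the two parts in descending affinity order, each candidate used at most once.

```python
def greedy_connect(candidates_a, candidates_b, affinity):
    """Greedily link candidates of part A to candidates of part B.

    affinity[i][j] is the assumed precomputed association-field score
    between candidate i of part A and candidate j of part B."""
    pairs = sorted(
        ((affinity[i][j], i, j)
         for i in range(len(candidates_a))
         for j in range(len(candidates_b))),
        reverse=True)
    used_a, used_b, links = set(), set(), []
    for score, i, j in pairs:
        # Accept the strongest remaining link whose endpoints are free.
        if i not in used_a and j not in used_b and score > 0:
            links.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return links
```

Repeating this pairing over every adjacent part pair (neck–shoulder, shoulder–elbow, and so on) assembles the per-person skeleton model.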
Fig. 2A-2C show the human skeleton model detection results obtained by the above method in three different fatigue states: Fig. 2A shows the tested person in a fatigue state leaning back against the chair; Fig. 2B shows a fatigue state in which the tested person props the head with one hand; Fig. 2C shows another fatigue state in which the tested person props the head with one hand.
When it is determined in step S22 that a human body is detected in the captured image, step S32 is performed, and step S321 is first performed: the head and/or hand positioning is performed on the acquired image, and then step S322 is performed: the position of the head and/or hand is determined.
In step S321, head and/or hand positioning of the acquired image is realized with a lightweight target detection network using a visual detection technology; the positioning detection of the head and/or hand includes bounding box regression and classification confidence regression. When head and/or hand positioning detection is performed on an acquired image, the image is used as the input of the target detection network and divided into a plurality of grid cells, and a plurality of frames with corresponding classification confidences are predicted for each cell. Each predicted frame comprises four parameters, represented as a bounding box (x, y, w, h), where (x, y) is the center of the frame relative to its grid cell and (w, h) are the width and height of the frame relative to the full image. The classification confidence is the product of the probability of each class, the probability of an object, and the degree of overlap (IOU).
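The grid encoding and the confidence product just described can be sketched as follows. This is an illustrative sketch assuming a YOLO-style layout: `encode_box`, `class_confidence`, and the 7 × 7 grid are assumptions for the example, not values stated in the patent.

```python
def encode_box(cx, cy, w, h, img_w, img_h, grid=7):
    """Map a box center (cx, cy) in pixels to its grid cell, expressing
    (x, y) relative to that cell and (w, h) relative to the full image."""
    col = min(int(cx / img_w * grid), grid - 1)
    row = min(int(cy / img_h * grid), grid - 1)
    x = cx / img_w * grid - col  # offset within the cell, in [0, 1)
    y = cy / img_h * grid - row
    return row, col, (x, y, w / img_w, h / img_h)

def class_confidence(class_prob, object_prob, iou):
    """Classification confidence = class probability x objectness x IOU."""
    return class_prob * object_prob * iou
```

For a 448 × 448 input with a 7 × 7 grid, a head box centered at (224, 224) falls in cell (3, 3) with in-cell offsets (0.5, 0.5), and its confidence is the stated three-way product.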
The lightweight target detection network is trained, and the establishment of the sample set for training the lightweight target detection network comprises the following steps:
selecting enough images containing targets and manually labeling the targets in the images;
training, detecting and evaluating pictures which comprise targets and contain different complex backgrounds;
according to the training and detection evaluation results, updating processing including adding and deleting is carried out on the selected images;
and training, detecting, evaluating and updating the updated image again until the most appropriate image forming sample set is selected.
In view of the fact that the driver may be wearing a training cap, when selecting the image containing the target, a part of the image with the training cap needs to be selected for training.
A detection model is trained with the sample set and the lightweight target detection network. The trained model is trained and tested multiple times, the detection model with high detection-evaluation confidence and correct results is selected, and the original training model and the training parameters are fine-tuned according to the training and detection results to obtain the most suitable detection model and achieve the best detection effect.
Fig. 3A-3C show the head and/or hand positioning results obtained by the above method in three different states. Fig. 3A shows the tested person in a fatigue state lying on the driving console; the detection model successfully detects the head position with a confidence value of 0.94. Fig. 3B shows the tested person not in a fatigue state; the detection model successfully detects the head position with a confidence of 0.86. Fig. 3C shows the tested person in a fatigue state propping the chin with one hand; the detection model successfully detects the positions of both the head and the hand.
In step S322, determining the position of the head and/or the hand according to the positioning result of the head and/or the hand of the human body in step S321, wherein step S322 includes:
obtaining statistically the center position of the human head under normal conditions by detecting the head position information in each frame of image;
detecting whether hand information appears in the image;
the center position of the hand appearing in the image is calculated.
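The head-center statistic in step S322 can be sketched as a simple average over per-frame detections. `head_center_statistics` is a hypothetical helper name introduced for illustration; the patent only specifies that the normal head center is obtained statistically from the per-frame head positions.

```python
def head_center_statistics(per_frame_heads):
    """Estimate the normal head center as the mean of per-frame (x, y)
    head-center detections, smoothing out shifts as the head turns."""
    xs = [x for x, _ in per_frame_heads]
    ys = [y for _, y in per_frame_heads]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```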
Step S4 is executed: judge whether the human body is in a fatigue state according to the human skeleton model and/or the head and/or hand positions. Whether the driver is fatigued is judged from posture changes of the skeleton model: from the detected skeleton model, the distance between the hand and the head is determined to judge whether the chin-propping fatigue action occurs, and the positional relation between the head and the shoulder joints is determined to judge whether a fatigue action of the driver occurs. In addition, the driver is judged to be in a fatigue state when the distance between the center positions of the head and the hand remains below a set distance threshold for a continuous period of time, when the movement range of the hand center position over a continuous period is smaller than a set threshold, or when the movement range of the head center position over a continuous period is smaller than a set threshold.
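The threshold rules of step S4 can be sketched as below. This is an illustrative sketch: the function names, and the treatment of "a continuous period of time" as a minimum run of consecutive frames, are assumptions, since the patent does not fix concrete thresholds or durations.

```python
import math

def center_distance(a, b):
    """Euclidean distance between two (x, y) center positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fatigue_by_proximity(head_track, hand_track, dist_thresh, min_frames):
    """Fatigue if head-hand distance stays under dist_thresh for
    at least min_frames consecutive frames (e.g. chin propping)."""
    run = 0
    for head, hand in zip(head_track, hand_track):
        run = run + 1 if center_distance(head, hand) < dist_thresh else 0
        if run >= min_frames:
            return True
    return False

def fatigue_by_stillness(track, range_thresh):
    """Fatigue if the movement range of a tracked center (head or hand)
    over the observed period stays under range_thresh in both axes."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (max(xs) - min(xs)) < range_thresh and (max(ys) - min(ys)) < range_thresh
```

Any one rule firing would suffice to trigger the warning output of step S5 under this reading.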
When it is determined in step S4 that the driver is in a fatigue state, step S5 is executed to output fatigue warning information.
Example 2
As shown in fig. 4, a driver fatigue monitoring and warning system is used for implementing the driver fatigue monitoring and warning method, and the system includes: an image acquisition device 21, a processing device 22 and an output device 23, the processing device 22 performing the steps of the method:
carrying out human body detection on the acquired image;
detecting a human skeleton model;
performing head and/or hand positioning;
judging the position of the head and/or the hand;
and judging fatigue according to the human skeleton model and/or the head and/or hand positions.
The image acquisition device 21 is a high-definition camera, is installed on the upper side of the driving position, faces the driving position, and is used for acquiring an image of the driver and sending acquired image information to the processing device 22.
When the processing device 22 executes the steps in the driver fatigue monitoring and early warning method, the human body detection of the acquired image is realized by adopting a lightweight convolutional neural network based on a deep learning technology.
When a human body is detected in the acquired image, human skeleton model detection is performed on the image. The human skeleton model detection comprises: processing the input picture with a deep convolutional neural network and outputting the positions of the parts of the upper body of the human body in the picture together with the corresponding confidences; predicting the association vector fields between the parts of the upper body to express the connection relations between the parts; and performing inference on the part positions and the inter-part association vector fields with a greedy algorithm to obtain the human skeleton model.
When a human body is detected in the acquired image, head and/or hand positioning is performed on the acquired image. The head and/or hand positioning detection is realized with a lightweight target detection network using a visual detection technology. Judging the position of the head and/or hand from the positioning detection result comprises: calculating the center position of the human head under normal conditions; detecting whether hand information appears in the image; and calculating the center position of any hand appearing in the image. Because the head center position shifts as the head turns to different angles, the center position of the head under normal conditions is obtained statistically by detecting the head position information in each frame of image.
Whether the driver is in a fatigue state is judged from the posture changes of the skeleton model. The driver is judged to be in a fatigue state when the distance between the center positions of the head and the hand remains below a set distance threshold for a continuous period of time, when the movement range of the hand center position over a continuous period is smaller than a set threshold, or when the movement range of the head center position over a continuous period is smaller than a set threshold.
When the processing device 22 judges that the driver is in a fatigue state, the output device 23 outputs the early warning information. The early warning information is at least one of voice prompt, flashing early warning lamp and touch prompt.
Example 3
A driver fatigue monitoring and early warning method is provided. The method performs driver behavior feature analysis and driver facial feature analysis on the collected images. The driver behavior feature analysis comprises human skeleton model detection of the driver, and head and/or hand positioning and position judgment of the driver. The driver facial feature analysis comprises at least one of eye feature analysis and mouth feature analysis of the driver, judging whether typical fatigue facial features such as a change in blink frequency, overlong eye-closing time or yawning appear. Whether the driver is in a fatigue state is then judged comprehensively from the driver behavior feature analysis and the driver facial feature analysis. In the system executing this driver fatigue monitoring and early warning method, the image acquisition module comprises at least two cameras: one is arranged at the side of the driving position, facing the driving position, and acquires images of the driver's behavior; the other is arranged directly in front of the driver, facing the driver's face, and acquires images of the driver's face. The images acquired by the two image acquisition devices are transmitted to the processing device for processing.
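The comprehensive judgment could, for example, be an OR-fusion of the two analyses, as in this hypothetical sketch (the patent specifies neither the fusion rule nor any numeric thresholds, so both are illustrative assumptions here):

```python
def combined_fatigue(behavior_fatigued, blink_rate_hz, eye_closed_s, yawning,
                     blink_thresh=0.5, closed_thresh=1.5):
    """Fuse behavior-feature and facial-feature analysis.
    blink_thresh / closed_thresh are illustrative placeholder values."""
    facial_fatigued = (blink_rate_hz > blink_thresh      # abnormal blink frequency
                       or eye_closed_s > closed_thresh   # eyes closed too long
                       or yawning)                       # yawn detected
    return behavior_fatigued or facial_fatigued
```

An OR-fusion favors sensitivity (either channel alone can trigger the warning); a system tuned against false alarms might instead require agreement between the two channels.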
It should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, not to limit it; although the foregoing embodiments describe the invention in detail, those skilled in the art will appreciate that the technical solutions described therein may be modified, or some or all of their technical features replaced, without departing from the scope of the technical solutions of the present invention.

Claims (9)

1. A driver fatigue monitoring and early warning method for performing fatigue monitoring and early warning on a driver in the closed space of deep-sea long-range navigation equipment, comprising collecting images and outputting fatigue early warning information, characterized by further comprising:
carrying out human body detection on the acquired image; and when a human body is detected in the acquired image, performing on the acquired image:
detecting a human skeleton model;
positioning the head and the hands;
judging the positions of the head and the hands, including calculating the center position of the head under normal conditions, detecting whether hand information appears in the image, and calculating the center position of any hand appearing in the image;
performing fatigue judgment according to the human skeleton model and the head and hand positions, wherein
the fatigue judgment according to the human skeleton model comprises: judging whether the driver is in a fatigue state through posture changes of the skeleton model; determining the distance between a hand and the head from the detected human skeleton model to judge whether a cheek-resting fatigue action occurs; and determining the positional relation between the head and the shoulder joints to judge whether a fatigue action occurs;
the fatigue judgment according to the head and hand positions comprises: if the distance between the center positions of the head and a hand remains below a set distance threshold for a continuous period of time, the driver is judged to be in a fatigue state; if the movement range of the hand center position remains below a set threshold for a continuous period of time, the driver is judged to be in a fatigue state; and if the movement range of the head center position remains below a set threshold for a continuous period of time, the driver is judged to be in a fatigue state.
2. The driver fatigue monitoring and warning method as claimed in claim 1, wherein: the human body detection of the collected image is based on deep learning technology and adopts a lightweight convolutional neural network.
3. The driver fatigue monitoring and warning method as claimed in claim 2, wherein: the lightweight convolutional neural network is trained end-to-end in a single stage.
4. The driver fatigue monitoring and warning method as claimed in claim 3, wherein: the lightweight convolutional neural network uses two different 3x3 convolution kernels to convolve the feature map.
5. The driver fatigue monitoring and warning method as claimed in claim 4, wherein: the two different 3x3 convolution kernels are used respectively to output the class probability of the human body and the position information of the human body bounding box.
6. The driver fatigue monitoring and warning method as claimed in claim 5, wherein: the results output by the 3x3 convolution kernels are integrated using a non-maximum suppression method to form the human body detection result.
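For illustration, the non-maximum suppression step named in this claim can be sketched as follows (a generic greedy NMS over `(x1, y1, x2, y2)` boxes, not the patent's specific implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring boxes, suppressing any box whose overlap
    with an already-kept box exceeds iou_thresh. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Here the scores would be the class probabilities from one 3x3 kernel and the boxes the position outputs of the other, so overlapping duplicate detections of the same person collapse to a single result.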
7. The driver fatigue monitoring and warning method as claimed in claim 1, wherein: the human skeleton model detection comprises the following steps:
processing the input picture with a deep convolutional neural network and outputting the positions of the parts of the upper body of the human body in the picture together with their confidence values;
predicting the association vector field between the upper-body parts to express the connection relation between the parts;
and inferring the human skeleton model from the part positions and the inter-part association vector fields using a greedy algorithm.
8. The driver fatigue monitoring and warning method as claimed in claim 7, wherein: the position of each part of the upper part of the human body in the picture is output by the following steps:
extracting the center position of at least one of the eyes, ears, nose, neck, shoulder joints, elbow joints, wrist joints and hip joints of the human body as key points;
and outputting the positions of the extracted key points.
9. A driver fatigue monitoring and early warning system for performing fatigue monitoring and early warning on a driver in the closed space of deep-sea long-range navigation equipment, comprising an image acquisition device, a processing device and an output device, characterized in that: the system is used to implement the driver fatigue monitoring and early warning method according to any one of claims 1-8, and the processing device executes the steps of the method:
carrying out human body detection on the acquired image; and when a human body is detected in the acquired image, performing on the acquired image:
detecting a human skeleton model;
positioning the head and the hands;
judging the positions of the head and the hands, including calculating the center position of the head under normal conditions, detecting whether hand information appears in the image, and calculating the center position of any hand appearing in the image;
performing fatigue judgment according to the human skeleton model and the head and hand positions, wherein the fatigue judgment according to the human skeleton model comprises: judging whether the driver is in a fatigue state through posture changes of the skeleton model; determining the distance between a hand and the head from the detected human skeleton model to judge whether a cheek-resting fatigue action occurs; and determining the positional relation between the head and the shoulder joints to judge whether a fatigue action occurs;
the fatigue judgment according to the head and hand positions comprises: if the distance between the center positions of the head and a hand remains below a set distance threshold for a continuous period of time, the driver is judged to be in a fatigue state; if the movement range of the hand center position remains below a set threshold for a continuous period of time, the driver is judged to be in a fatigue state; and if the movement range of the head center position remains below a set threshold for a continuous period of time, the driver is judged to be in a fatigue state.
CN201910352155.4A 2019-04-29 2019-04-29 Driver fatigue monitoring and early warning method and system Active CN110147738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910352155.4A CN110147738B (en) 2019-04-29 2019-04-29 Driver fatigue monitoring and early warning method and system

Publications (2)

Publication Number Publication Date
CN110147738A CN110147738A (en) 2019-08-20
CN110147738B (en) 2021-01-22

Family

ID=67593830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910352155.4A Active CN110147738B (en) 2019-04-29 2019-04-29 Driver fatigue monitoring and early warning method and system

Country Status (1)

Country Link
CN (1) CN110147738B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717461A (en) * 2019-10-12 2020-01-21 广东电网有限责任公司 Fatigue state identification method, device and equipment
CN111243236A (en) * 2020-01-17 2020-06-05 南京邮电大学 Fatigue driving early warning method and system based on deep learning
CN111325872B (en) * 2020-01-21 2021-03-16 和智信(山东)大数据科技有限公司 Driver driving abnormity detection method based on computer vision
CN111476114A (en) * 2020-03-20 2020-07-31 深圳追一科技有限公司 Fatigue detection method, device, terminal equipment and storage medium
CN113743279B (en) * 2021-08-30 2023-10-13 山东大学 Ship operator state monitoring method, system, storage medium and equipment
CN115035502A (en) * 2022-07-08 2022-09-09 北京百度网讯科技有限公司 Driver behavior monitoring method and device, electronic equipment and storage medium
CN115471826B (en) * 2022-08-23 2024-03-26 中国航空油料集团有限公司 Method and device for judging safe driving behavior of aviation fueller and safe operation and maintenance system
CN116311181B (en) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015156877A (en) * 2012-05-18 2015-09-03 日産自動車株式会社 Driver's physical state adaptation apparatus, and road map information construction method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104013414B (en) * 2014-04-30 2015-12-30 深圳佑驾创新科技有限公司 A kind of Study in Driver Fatigue State Surveillance System based on intelligent movable mobile phone
CN104574817A (en) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine vision-based fatigue driving pre-warning system suitable for smart phone
CN106218405A (en) * 2016-08-12 2016-12-14 深圳市元征科技股份有限公司 Fatigue driving monitoring method and cloud server
CN106845430A (en) * 2017-02-06 2017-06-13 东华大学 Pedestrian detection and tracking based on acceleration region convolutional neural networks
CN107886069A (en) * 2017-11-10 2018-04-06 东北大学 A kind of multiple target human body 2D gesture real-time detection systems and detection method
CN108038453A (en) * 2017-12-15 2018-05-15 罗派智能控制技术(上海)有限公司 A kind of driver's state-detection and identifying system based on RGBD
CN108038469B (en) * 2017-12-27 2019-10-25 百度在线网络技术(北京)有限公司 Method and apparatus for detecting human body
CN108229390A (en) * 2018-01-02 2018-06-29 济南中维世纪科技有限公司 Rapid pedestrian detection method based on deep learning
CN108460362B (en) * 2018-03-23 2021-11-30 成都品果科技有限公司 System and method for detecting human body part

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Recognizing driver inattention by convolutional neural networks; Chao Yan et al.; 2015 8th International Congress on Image and Signal Processing (CISP); 2016-01-18; pp. 680-685 *

Also Published As

Publication number Publication date
CN110147738A (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN110147738B (en) Driver fatigue monitoring and early warning method and system
CN110210323B (en) Drowning behavior online identification method based on machine vision
CN112906604B (en) Behavior recognition method, device and system based on skeleton and RGB frame fusion
Chang et al. A pose estimation-based fall detection methodology using artificial intelligence edge computing
CN114550027A (en) Vision-based motion video fine analysis method and device
CN108664887A (en) Prior-warning device and method are fallen down in a kind of virtual reality experience
CN111966217A (en) Unmanned aerial vehicle control method and system based on gestures and eye movements
CN113378649A (en) Identity, position and action recognition method, system, electronic equipment and storage medium
CN114140745A (en) Method, system, device and medium for detecting personnel attributes of construction site
CN115937830A (en) Special vehicle-oriented driver fatigue detection method
Bhandarkar et al. Neural Network Based Detection of Driver's Drowsiness
CN114639168B (en) Method and system for recognizing running gesture
CN113408435B (en) Security monitoring method, device, equipment and storage medium
WO2020016963A1 (en) Information processing device, control method, and program
Guo et al. Monitoring and detection of driver fatigue from monocular cameras based on Yolo v5
CN115171189A (en) Fatigue detection method, device, equipment and storage medium
CN107832698A (en) Learning interest testing method and device based on array lens
Li et al. Motion fatigue state detection based on neural networks
CN114255509A (en) Student supervises appurtenance based on OpenPose
Wang et al. Spatial-temporal feature representation learning for facial fatigue detection
CN113408434B (en) Intelligent monitoring expression recognition method, device, equipment and storage medium
CN117423138B (en) Human body falling detection method, device and system based on multi-branch structure
CN111274854A (en) Human body action recognition method and vision enhancement processing system
Pachouly et al. Driver Drowsiness Detection using Machine Learning
CN115565016B (en) Comprehensive operator safety detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant