CN112241645A - Fatigue driving detection method and system and electronic equipment - Google Patents

Fatigue driving detection method and system and electronic equipment

Info

Publication number
CN112241645A
Authority
CN
China
Prior art keywords
frame
eye
face image
mouth
opening degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910638379.1A
Other languages
Chinese (zh)
Inventor
尹苍穹
裴锋
肖友清
闫春香
王玉龙
邓胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN201910638379.1A
Publication of CN112241645A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a fatigue driving detection method, a corresponding system, and electronic equipment. The method comprises the following steps: acquiring multiple frames of face images in the current time period, the multiple frames being face images of the driver during that period; labeling the face key points in the multiple frames of face images; detecting the labeled face images with a target detection network to obtain the eye data and mouth data of each frame, the eye data comprising the left eye detection frame, the right eye detection frame and their lengths and widths, and the mouth data comprising the mouth detection frame and its length and width; calculating the eye opening degree and the mouth opening degree of each frame from its eye data and mouth data; and determining the driver state, either driving fatigue or non-driving fatigue, from the per-frame eye and mouth opening degrees. The invention improves the detection accuracy of driver fatigue and avoids the discomfort caused by wearing detection equipment.

Description

Fatigue driving detection method and system and electronic equipment
Technical Field
The invention relates to the technical field of intelligent human-computer interaction, in particular to a fatigue driving detection method and system and electronic equipment.
Background
In daily life, long-distance driving is unavoidable for many drivers. On expressways in particular, the driver stays in a monotonous driving environment for long stretches, which easily disturbs psychological and physiological functions and causes driving skill to decline over time. Driver monitoring equipment is already required at the driving position of buses and transport fleets. Night driving makes matters worse: it violates the normal work-and-rest rhythm, and the roadside scenery in the dark provides little sensory stimulation, so the driver tires easily. Current methods for detecting driver fatigue fall mainly into two categories: those based on the driver's physiological signals and those based on the driver's operating behavior. Methods based on physiological signals, such as measuring electroencephalogram (EEG) or electrocardiogram (ECG) signals, require the driver to wear signal-acquisition equipment, depend heavily on the individual, and therefore face many limitations in practical fatigue monitoring. Methods that infer the fatigue state from operating behavior, such as steering-wheel operation, are influenced by personal habits, driving speed, road environment, and operating skill; because the vehicle's running state is also tied to many environmental factors such as vehicle characteristics and road conditions, their estimation accuracy is low.
Disclosure of Invention
The invention aims to provide a fatigue driving detection method, a corresponding system, and electronic equipment that judge the driver's fatigue degree by recognizing facial features, so as to improve the detection accuracy of driver fatigue and avoid the discomfort caused by wearing detection equipment.
To achieve the object, according to a first aspect of the present invention, an embodiment of the present invention provides a fatigue driving detection method, including the steps of:
acquiring multiple frames of face images in the current time period, wherein the multiple frames of face images are face images of the driver in the current time period;
labeling the face key points in the multiple frames of face images;
detecting the labeled multiple frames of face images by using a target detection network to obtain the eye data and the mouth data of each frame of face image; wherein the eye data comprise the left eye detection frame, the right eye detection frame and their lengths and widths, and the mouth data comprise the mouth detection frame and its length and width;
calculating the eye opening degree and the mouth opening degree of each frame of face image according to the eye data and the mouth data of each frame of face image;
determining the state of a driver according to the eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
Preferably, the labeling of the face key points in the multiple frames of face images includes:
acquiring the coordinate information of the face key points in each frame of face image;
labeling the face key points in each frame of face image according to the coordinate information;
wherein the face key points comprise the left corner point, right corner point, upper eyelid point and lower eyelid point of the left eye; the left corner point, right corner point, upper eyelid point and lower eyelid point of the right eye; and the left mouth corner point, right mouth corner point, upper lip point and lower lip point.
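As a sketch only, the twelve key points above can be carried as a small constant that the later examples reuse; the A/B/C naming follows fig. 2 of this document and is an illustrative convention, not part of the patent.

```python
# The twelve face key points, named as in fig. 2 (A* = left eye, B* = right eye,
# C* = mouth). The list itself is an illustrative convention, not from the patent.
KEYPOINT_NAMES = [
    "A1", "A2", "A3", "A4",  # left eye: left corner, right corner, upper eyelid, lower eyelid
    "B1", "B2", "B3", "B4",  # right eye: same order
    "C1", "C2", "C3", "C4",  # mouth: left corner, right corner, upper lip, lower lip
]
```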
Preferably, the detecting of the labeled multiple frames of face images by using the target detection network to obtain the eye data and the mouth data of each frame of face image includes:
generating a left eye detection frame, a right eye detection frame and a mouth detection frame according to the face key points in each frame of face image, and determining the length and the width of each detection frame according to the coordinate information of the face key points in the face image.
Specifically, the length and the width of each detection frame are determined from the coordinate information of the face key points in the face image by calculating according to the following formulas:
L1 = X_A2 - X_A1
W1 = Y_A4 - Y_A3
wherein L1 is the length of the left eye detection frame, W1 is the width of the left eye detection frame, X_A1 is the abscissa of the left corner point of the left eye, X_A2 is the abscissa of the right corner point of the left eye, Y_A3 is the ordinate of the upper eyelid point of the left eye, and Y_A4 is the ordinate of the lower eyelid point of the left eye;
L2 = X_B2 - X_B1
W2 = Y_B4 - Y_B3
wherein L2 is the length of the right eye detection frame, W2 is the width of the right eye detection frame, X_B1 is the abscissa of the left corner point of the right eye, X_B2 is the abscissa of the right corner point of the right eye, Y_B3 is the ordinate of the upper eyelid point of the right eye, and Y_B4 is the ordinate of the lower eyelid point of the right eye;
L3 = X_C2 - X_C1
W3 = Y_C4 - Y_C3
wherein L3 is the length of the mouth detection frame, W3 is the width of the mouth detection frame, X_C1 is the abscissa of the left mouth corner point, X_C2 is the abscissa of the right mouth corner point, Y_C3 is the ordinate of the upper lip point, and Y_C4 is the ordinate of the lower lip point.
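A minimal sketch of the geometry above follows; the Box container and the function names are illustrative, not from the patent, and image coordinates are assumed to grow rightward (x) and downward (y).

```python
# A minimal sketch of the detection-frame geometry defined above; the Box
# container and function names are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass
class Box:
    length: float  # horizontal extent: abscissa difference of the corner points
    width: float   # vertical extent: ordinate difference of the upper/lower points

def frame_dims(left_x: float, right_x: float, upper_y: float, lower_y: float) -> Box:
    return Box(length=right_x - left_x, width=lower_y - upper_y)

# Example for a left eye with corner points A1, A2 and eyelid points A3, A4.
left_eye = frame_dims(left_x=120.0, right_x=150.0, upper_y=200.0, lower_y=212.0)
print(left_eye)  # Box(length=30.0, width=12.0)
```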
Preferably, the determining of the driver state according to the eye opening degree and the mouth opening degree of each frame of face image comprises:
calculating the eye opening degree of each frame of face image from the left eye opening degree and the right eye opening degree of that frame, the eye opening degree of a face image being the average of the left eye opening degree and the right eye opening degree;
calculating the average eye opening degree of the current time period from the eye opening degrees of all frames;
determining the number of face image frames in which the driver's eyes are closed, according to the per-frame eye opening degree and the average eye opening degree; if the eye opening degree of a frame of face image is smaller than the average eye opening degree multiplied by a preset proportionality coefficient, the driver's eyes are judged to be closed in that frame;
counting the number of face image frames in which the driver's eyes are closed, and calculating the eye-closing proportion from that count and the total number of face image frames in the current time period; wherein the eye-closing proportion equals the ratio of the number of closed-eye frames to the total number of face image frames in the current time period;
and determining the driver state according to the comparison of the eye-closing proportion with a preset first threshold value.
Preferably, the determining of the driver state according to the eye opening degree and the mouth opening degree of each frame of face image comprises:
determining the driver state according to the comparison of the mouth opening degree of each frame of face image with a preset second threshold value; if the mouth opening degrees of all frames in the current time period are less than or equal to the preset second threshold value, the driver state is determined to be non-driving fatigue.
Preferably, the method further comprises the steps of:
generating a fatigue driving signal in response to the driver state being driving fatigue;
and issuing a fatigue driving prompt according to the fatigue driving signal.
According to a second aspect of the present invention, an embodiment of the present invention provides a fatigue driving detection system, including:
the image acquisition unit is configured to acquire multiple frames of face images in the current time period, wherein the multiple frames of face images are face images of the driver in the current time period;
the image annotation unit is configured to annotate the key points of the human faces in the plurality of frames of human face images;
the image processing unit is configured to detect the plurality of frames of labeled human face images by using a target detection network to obtain left eye data, right eye data and mouth data of each frame of human face image; wherein the left eye data comprises a left eye detection frame and a length and a width thereof, the right eye data comprises a right eye detection frame and a length and a width thereof, and the mouth data comprises a mouth detection frame and a length and a width thereof;
the data processing unit is configured to calculate the left eye opening degree, the right eye opening degree and the mouth opening degree of each frame of face image according to the left eye data, the right eye data and the mouth data of each frame of face image; the left eye opening degree is equal to the ratio of the width to the length of the left eye detection frame, the right eye opening degree is equal to the ratio of the width to the length of the right eye detection frame, and the mouth opening degree is equal to the ratio of the width to the length of the mouth detection frame;
the fatigue judging unit is configured to determine the state of a driver according to the left eye opening degree, the right eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
Preferably, the system further comprises:
a signal generation unit configured to generate a fatigue driving signal in response to a driver state being driving fatigue;
a prompt unit configured to perform a fatigue driving prompt according to the fatigue driving signal.
According to a third aspect of the present invention, an electronic device is provided in an embodiment of the present invention, including a processor, a memory, and a communication bus, where the processor and the memory complete communication with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method steps of the embodiment of the first aspect when executing the computer program stored in the memory.
In the embodiments of the invention, the eye and mouth key points in the driver's face images are labeled, the eyes and the mouth are detected from the labeled key points, the opening degrees of the eyes and the mouth are calculated from the detection results, and whether the driver is in a fatigue driving state is finally determined from those opening degrees. Compared with detection methods based on the driver's physiological signals, the scheme requires no wearable equipment for acquiring signals such as electroencephalogram (EEG) or electrocardiogram (ECG), reduces the dependence on the individual, and avoids the discomfort of wearing detection equipment. Compared with methods that infer the fatigue state from the driver's operating behavior (such as steering-wheel operation), it is not affected by personal habits, driving speed, road environment, operating skill, vehicle characteristics, road conditions, or other environmental factors, so the detection accuracy is greatly improved and end-to-end detection of driver fatigue is realized. Moreover, the use of a lightweight network reduces memory consumption while preserving accuracy, automatically adapts to the eye opening degrees of people with different eye sizes, and lowers the false detection rate.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for detecting fatigue driving according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of key points in a face image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a left eye detection frame, a right eye detection frame, and a mouth detection frame in an embodiment of the invention.
Fig. 4 is a flowchart of another fatigue driving detection method according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a fatigue driving detection system according to an embodiment of the present invention.
FIG. 6 is a schematic diagram of another fatigue driving detection system according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present invention. It will be understood by those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, well known means have not been described in detail so as not to obscure the present invention.
As shown in fig. 1, a method for detecting fatigue driving in an embodiment of the present invention includes the following steps:
and step S1, obtaining a plurality of frames of face images in the current time period, wherein the plurality of frames of face images are the face images of the driver in the current time period.
Specifically, in this step, the multiple frames of face images in the current time period are captured by a camera, which may be arranged directly above the instrument panel. The face image data are collected in normal driving scenes of the driver, including face video streams with the head rotated ±60 degrees left/right and ±30 degrees up/down, with and without glasses, and in the closed-eye state when fatigued. The video stream data are parsed into individual frames and the face region is cropped from each frame to obtain each frame of face image. Each time period is preferably, but not limited to, set to 10 seconds.
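A minimal sketch of this acquisition step, assuming an OpenCV-readable dashboard camera at index 0 and a 10-second window; the face cropping itself is left out because the patent does not fix a particular face detector, and the function and variable names are illustrative.

```python
# A sketch of step S1, assuming a dashboard camera reachable via OpenCV at
# index 0 and a 10-second window; names are illustrative, not from the patent.
import cv2

def grab_window_frames(seconds: int = 10, cam_index: int = 0):
    cap = cv2.VideoCapture(cam_index)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the driver reports 0
    frames = []
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)  # each frame is later cropped to the face region
    cap.release()
    return frames
```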
And step S2, labeling the key points of the human face in the multi-frame human face image.
As shown in fig. 2, the face key points include left eye key points (the left corner point A1, right corner point A2, upper eyelid point A3 and lower eyelid point A4 of the left eye), right eye key points (the left corner point B1, right corner point B2, upper eyelid point B3 and lower eyelid point B4 of the right eye), and mouth key points (the left mouth corner point C1, right mouth corner point C2, upper lip point C3 and lower lip point C4). The labeling can be performed through the face key point detection API of the Face++ open artificial intelligence platform; the detection result returns the coordinates of the 12 key points A1-C4 in the face image, and the face image is labeled according to these coordinates, for example by marking the corresponding point in the image as a red dot.
Step S3, detecting the labeled multiple frames of face images by using the target detection network to obtain the eye data and the mouth data of each frame of face image; wherein the eye data include the left eye detection frame, the right eye detection frame and their lengths and widths, and the mouth data include the mouth detection frame and its length and width. The left eye data further include the center coordinates of the left eye detection frame, the right eye data further include the center coordinates of the right eye detection frame, and the mouth data further include the center coordinates of the mouth detection frame. Since the coordinate information of the face key points is known and the key points lie on the four sides of a detection frame, the difference of the ordinates of the upper and lower points gives the width and the difference of the abscissas of the left and right points gives the length. During target detection, anchor frames are generated centered on each pixel of the image, and the anchor size is varied dynamically until an anchor frame fits a target, i.e. until the key points lie on its four sides; that anchor frame is then the generated detection frame, so the center coordinates of each detection frame are determined as well.
Step S4, calculating the eye opening degree and the mouth opening degree of each frame of face image according to the eye data and the mouth data of each frame of face image;
specifically, the left eye opening degree is equal to the ratio of the width to the length of the left eye detection frame, the right eye opening degree is equal to the ratio of the width to the length of the right eye detection frame, and the mouth opening degree is equal to the ratio of the width to the length of the mouth detection frame.
Step S5, determining the state of the driver according to the eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
In some embodiments, as shown in fig. 3, the step S3 includes:
and generating a left eye detection frame, a right eye detection frame and a mouth detection frame according to the face key points in each frame of face image, and determining the length and the width of each detection frame according to the coordinate information of the face key points in the face image.
Specifically, the target detection network in this embodiment is a network model based on the open-source YOLOv3 algorithm. 640 × 480 images are input, the model weights are adjusted continuously by computing the model loss, and a model with relatively small loss and relatively high accuracy is finally obtained. The YOLOv3 network structure comprises 252 layers, including Add, BatchNormalization, Concatenate, Conv2D, InputLayer, LeakyReLU, UpSampling and ZeroPadding2D layers. In this embodiment, on the basis of the YOLOv3 structure, a lightweight model is obtained by reducing the channel-number (filters) parameter and appropriately reducing the number of convolution (Conv) layers. The training parameters are then set: a suitable batch size is chosen, the number of classes is 3, the channel number of the layer before each detection head is computed as filters = 3 × (number of classes + 5), and the anchor frame (anchors) values are adjusted to a suitable size after the network model is compressed.
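The quoted channel-number rule is the standard YOLOv3 head sizing: each detection head predicts 3 anchors, each carrying 4 box terms, 1 objectness term and one score per class. A sketch of the arithmetic, not the patent's training script:

```python
# Standard YOLOv3 head sizing: the convolution feeding each YOLO head needs
# anchors_per_scale * (num_classes + 5) filters. With the 3 classes here
# (left eye, right eye, mouth) this gives 24.
def yolo_head_filters(num_classes: int, anchors_per_scale: int = 3) -> int:
    return anchors_per_scale * (num_classes + 5)

assert yolo_head_filters(3) == 24
```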
Specifically, the left eye detection frame 10, the right eye detection frame 20 and the mouth detection frame 30 are all rectangles. When the target detection network detects the eyes and the mouth, an anchor frame is generated for each pixel of the face image, centered on that pixel; the anchor size varies dynamically, and anchor frames of several sizes can be preset. When an anchor frame fits a target, i.e. when the key points lie on its four sides, that anchor frame becomes the generated detection frame. For the left eye detection frame 10, for example, when the left corner point A1, right corner point A2, upper eyelid point A3 and lower eyelid point A4 lie on the four sides of an anchor frame, that anchor frame is the left eye detection frame. Accordingly, the points A1-A4 of the left eye lie on the four sides of the left eye detection frame 10, the points B1-B4 of the right eye lie on the four sides of the right eye detection frame 20, and the points C1-C4 of the mouth lie on the four sides of the mouth detection frame 30. On this basis, the left eye detection frame 10, the right eye detection frame 20 and the mouth detection frame 30 can be generated from the left eye, right eye and mouth key points.
The length and the width of each detection frame are determined from the coordinate information of the face key points by the following formulas:
L1 = X_A2 - X_A1
W1 = Y_A4 - Y_A3
wherein L1 is the length of the left eye detection frame 10, W1 is the width of the left eye detection frame 10, X_A1 is the abscissa of the left corner point A1, X_A2 is the abscissa of the right corner point A2, Y_A3 is the ordinate of the upper eyelid point A3, and Y_A4 is the ordinate of the lower eyelid point A4.
L2 = X_B2 - X_B1
W2 = Y_B4 - Y_B3
wherein L2 is the length of the right eye detection frame 20, W2 is the width of the right eye detection frame 20, X_B1 is the abscissa of the left corner point B1, X_B2 is the abscissa of the right corner point B2, Y_B3 is the ordinate of the upper eyelid point B3, and Y_B4 is the ordinate of the lower eyelid point B4.
L3 = X_C2 - X_C1
W3 = Y_C4 - Y_C3
wherein L3 is the length of the mouth detection frame 30, W3 is the width of the mouth detection frame 30, X_C1 is the abscissa of the left mouth corner point C1, X_C2 is the abscissa of the right mouth corner point C2, Y_C3 is the ordinate of the upper lip point C3, and Y_C4 is the ordinate of the lower lip point C4.
In some embodiments, the step S4 includes the following sub-steps:
Step S411, calculating the eye opening degree K of each frame of face image from the left eye opening degree K1 and the right eye opening degree K2 of that frame; the eye opening degree of a face image equals the average of the left and right eye opening degrees, i.e. K = (K1 + K2)/2;
Step S412, calculating the average eye opening degree K_avg of the current time period from the eye opening degree of each frame of face image; the average eye opening degree equals the sum of the eye opening degrees of all frames in the current time period divided by the total number of frames N, that is
K_avg = (1/N) · Σ_{i=1}^{N} K_i
wherein K_i is the eye opening degree of the i-th frame of face image;
Step S413, determining the number of face image frames in which the driver's eyes are closed, according to the per-frame eye opening degree and the average eye opening degree K_avg; if the eye opening degree K_i of the i-th frame is smaller than K_avg multiplied by a preset proportionality coefficient, the driver's eyes are closed in that frame; the preset proportionality coefficient in this embodiment is preferably, but not limited to, 20%;
Step S414, counting the number M of face image frames in which the driver's eyes are closed, and calculating the eye-closing proportion from M and the total number of frames N in the current time period; the eye-closing proportion equals the ratio of the number of closed-eye frames to the total number of frames, i.e. M/N;
Step S415, determining the driver state according to the comparison of the eye-closing proportion with a preset first threshold. Specifically, if M/N is greater than the preset first threshold, the driver state is determined to be driving fatigue; if M/N is less than or equal to the preset first threshold, the driver state is determined to be non-driving fatigue. The preset first threshold in this embodiment is preferably, but not limited to, 80%.
In some embodiments, the step S5 includes:
determining the state of the driver according to the comparison result of the opening degree of the mouth of each frame of face image and a preset second threshold value; and if the mouth opening degrees of all the frame face images in the current time period are less than or equal to the preset second threshold value, determining that the driver state is the non-driving fatigue.
Specifically, since the mouth differs greatly between its open and closed states, the driver is judged to be fatigue driving when the mouth opening degree exceeds a certain threshold while yawning. The preset second threshold may be set empirically.
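As a sketch of this yawn test (the 0.6 value below is a placeholder for the empirically set second threshold, not a value from the patent):

```python
# A sketch of the mouth-based test: any frame whose mouth opening K3 exceeds
# the second threshold marks driving fatigue. 0.6 is a placeholder value.
def yawn_detected(per_frame_k3, second_threshold: float = 0.6) -> bool:
    return any(k3 > second_threshold for k3 in per_frame_k3)
```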
In some embodiments, as shown in fig. 4, the method further comprises the steps of:
step S6, responding to the driver state being driving fatigue, generating a fatigue driving signal;
and step S7, performing fatigue driving prompt according to the fatigue driving signal.
Specifically, the fatigue driving prompt in this embodiment may combine one or more of voice, text and light-signal prompt modes, such as the voice prompt "You are driving while fatigued; please stop and rest", the same message shown as a text prompt, or a flashing light as the light-signal prompt.
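Steps S6-S7 then reduce to generating the signal and fanning it out to the configured prompt channels; the channel names and print stand-ins in this sketch are illustrative assumptions.

```python
# A sketch of steps S6-S7: emit a fatigue driving signal and fan it out to the
# configured prompt channels. Channel names and print stand-ins are illustrative.
def on_window_result(state: str, channels=("voice", "text", "light")) -> None:
    if state != "driving fatigue":
        return
    message = "You are driving while fatigued; please stop and rest."
    for ch in channels:
        print(f"[{ch}] {message}")  # stand-in for TTS, on-screen text or a flashing light
```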
As shown in fig. 5, a second embodiment of the present invention provides a fatigue driving detection system, including:
the image acquiring unit 1 is configured to acquire a plurality of frames of face images in a current time period, wherein the plurality of frames of face images are face images of a driver in the current time period;
the image annotation unit 2 is configured to annotate the face key points in the plurality of frames of face images;
the image processing unit 3 is configured to detect the plurality of frames of labeled face images by using a target detection network to obtain eye data and mouth data of each frame of face image; wherein the eye data comprises a left eye detection frame, a right eye detection frame, a length and a width of the left eye detection frame and the right eye detection frame, and the mouth data comprises a mouth detection frame, a length and a width of the mouth detection frame;
a data processing unit 4 configured to calculate an eye opening degree and a mouth opening degree of each frame of the face image according to the eye data and the mouth data of each frame of the face image;
a fatigue judging unit 5 configured to determine a driver state according to the eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
In some embodiments, as shown in fig. 6, the system further comprises:
a signal generation unit 6 configured to generate a fatigue driving signal in response to the driver's state being driving fatigue;
and the prompting unit 7 is configured to perform fatigue driving prompting according to the fatigue driving signal.
The human face key points comprise left eye key points, right eye key points and mouth key points, the left eye key points comprise left eye corner points, right eye corner points, upper eyelid points and lower eyelid points of left eyes, the right eye key points comprise left eye corner points, right eye corner points, upper eyelid points and lower eyelid points of right eyes, and the mouth key points comprise left mouth corner points, right mouth corner points, upper lip points and lower lip points.
Wherein the image processing unit 3 includes:
the first detection module is configured to generate a left eye detection frame according to a left eye key point in each frame of face image, and determine the length and the width of the left eye detection frame;
the second detection module is configured to generate a right eye detection frame according to the right eye key point in each frame of face image, and determine the length and the width of the right eye detection frame;
and the third detection module is configured to generate a mouth detection frame according to the mouth key points in each frame of face image, and determine the length and the width of the mouth detection frame.
Wherein the fatigue determination unit 5 includes:
the first processing module is configured to calculate the eye opening degree of each frame of face image according to the left eye opening degree and the right eye opening degree of each frame of face image; the eye opening degree of the face image is equal to the mean value of the left eye opening degree and the right eye opening degree;
the second processing module is configured to calculate the average eye opening degree of the current time period according to the eye opening degree of each frame of face image;
the third processing module is configured to determine the number of face image frames in which the driver's eyes are closed, according to the eye opening degree of each frame of face image and the average eye opening degree; if the eye opening degree of a frame of face image is smaller than the average eye opening degree multiplied by a preset proportionality coefficient, the driver's eyes are judged to be closed in that frame;
the fourth processing module is configured to count the number of the human face image frames of the closed eyes of the driver and calculate the closed eye proportion according to the number of the human face image frames of the closed eyes of the driver and the total number of the human face image frames in the current time period; wherein, the eye closing proportion is equal to the ratio of the number of the human face image frames of the closed eyes of the driver to the total number of the human face image frames in the current time period;
and the fifth processing module is configured to determine the state of the driver according to a comparison result of the closed-eye ratio and a preset first threshold, determine that the state of the driver is driving fatigue if the closed-eye ratio is greater than the preset first threshold, and determine that the state of the driver is non-driving fatigue if the closed-eye ratio is less than or equal to the preset first threshold.
The sixth processing module is configured to determine the state of the driver according to the comparison result of the opening degree of the mouth of each frame of facial image and a preset second threshold; and if the mouth opening degrees of all the frame face images in the current time period are less than or equal to the preset second threshold value, determining that the driver state is the non-driving fatigue.
It should be noted that the system according to the second embodiment is used for implementing the method according to the first embodiment, and therefore, relevant portions of the system according to the second embodiment that are not described in detail in the first embodiment can be obtained by referring to the method according to the first embodiment, and are not described herein again.
It should also be appreciated that the method of embodiment one and the system of embodiment two may be implemented in numerous ways, including as a process, an apparatus, or a system. The methods described herein may be implemented in part by program instructions for instructing a processor to perform such methods, as well as instructions recorded on non-transitory computer-readable storage media such as hard disk drives, floppy disks, optical disks such as Compact Disks (CDs) or Digital Versatile Disks (DVDs), flash memory, and the like. In some embodiments, the program instructions may be stored remotely and transmitted over a network via an optical or electronic communication link.
As shown in fig. 7, a third embodiment of the present invention provides an electronic device 100, which includes a processor 101, a memory 102, and a communication bus 103, where the processor 101 and the memory 102 complete communication with each other through the communication bus 103;
the memory 102 is used for storing a computer program 104;
the processor 101 is configured to implement the method steps of the first embodiment when executing the computer program 104 stored in the memory 102.
As can be seen from the above description, in the embodiments of the invention the eye and mouth key points in the driver's face images are labeled, the eyes and the mouth are detected from the labeled key points, the opening degrees of the eyes and the mouth are calculated from the detection results, and whether the driver is in a fatigue driving state is finally determined from those opening degrees. Compared with detection methods based on the driver's physiological signals, fatigue detection according to the embodiments of the invention requires no wearable equipment for acquiring signals such as electroencephalogram (EEG) or electrocardiogram (ECG), reduces the dependence on the individual, and avoids the discomfort of wearing detection equipment. Compared with methods that infer the fatigue state from the driver's operating behavior (such as steering-wheel operation), it is not affected by personal habits, driving speed, road environment, operating skill, vehicle characteristics, road conditions, or other environmental factors, so the detection accuracy is greatly improved and end-to-end detection of driver fatigue is realized. Moreover, the lightweight network reduces memory consumption while preserving accuracy, automatically adapts to the eye opening degrees of people with different eye sizes, and lowers the false detection rate.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A fatigue driving detection method is characterized by comprising the following steps:
acquiring a plurality of frames of face images in the current time period, wherein the plurality of frames of face images are face images of a driver in the current time period;
labeling the key points of the human face in the multi-frame human face image;
detecting the marked multiple frames of face images by using a target detection network to obtain eye data and mouth data of each frame of face image; wherein the eye data comprises a left eye detection frame, a right eye detection frame, a length and a width of the left eye detection frame and the right eye detection frame, and the mouth data comprises a mouth detection frame, a length and a width of the mouth detection frame;
calculating the eye opening degree and the mouth opening degree of each frame of face image according to the eye data and the mouth data of each frame of face image;
determining the state of a driver according to the eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
2. The fatigue driving detection method of claim 1, wherein the labeling of the face key points in the plurality of frames of face images comprises:
acquiring coordinate information of face key points in each frame of face image in the face image;
marking face key points in each frame of face image according to the coordinate information;
the human face key points comprise a left eye corner point, a right eye corner point, an upper eyelid point and a lower eyelid point of a left eye, a left eye corner point, a right eye corner point, an upper eyelid point and a lower eyelid point of a right eye, a left mouth corner point, a right mouth corner point, an upper lip point and a lower lip point.
3. The fatigue driving detection method according to claim 2, wherein the detecting the plurality of labeled face images by using the target detection network to obtain the eye data and the mouth data of each frame of face image comprises:
and generating a left eye detection frame, a right eye detection frame and a mouth detection frame according to the face key points in each frame of face image, and determining the length and the width of each detection frame according to the coordinate information of the face key points in the face image.
4. The fatigue driving detection method according to claim 3, wherein the length and the width of each detection frame are determined from the coordinate information of the face key points in the face image according to the following formulas:
L1 = X_A2 - X_A1
W1 = Y_A4 - Y_A3
wherein L1 is the length of the left eye detection frame, W1 is the width of the left eye detection frame, X_A1 is the abscissa of the left corner point of the left eye, X_A2 is the abscissa of the right corner point of the left eye, Y_A3 is the ordinate of the upper eyelid point of the left eye, and Y_A4 is the ordinate of the lower eyelid point of the left eye;
L2 = X_B2 - X_B1
W2 = Y_B4 - Y_B3
wherein L2 is the length of the right eye detection frame, W2 is the width of the right eye detection frame, X_B1 is the abscissa of the left corner point of the right eye, X_B2 is the abscissa of the right corner point of the right eye, Y_B3 is the ordinate of the upper eyelid point of the right eye, and Y_B4 is the ordinate of the lower eyelid point of the right eye;
L3 = X_C2 - X_C1
W3 = Y_C4 - Y_C3
wherein L3 is the length of the mouth detection frame, W3 is the width of the mouth detection frame, X_C1 is the abscissa of the left mouth corner point, X_C2 is the abscissa of the right mouth corner point, Y_C3 is the ordinate of the upper lip point, and Y_C4 is the ordinate of the lower lip point.
5. The fatigue driving detecting method according to claim 1, wherein the determining the driver's state based on the eye opening degree and the mouth opening degree of each frame of the face image comprises:
calculating the eye opening degree of each frame of face image according to the left eye opening degree and the right eye opening degree of each frame of face image, wherein the eye opening degree of the face image is equal to the average value of the left eye opening degree and the right eye opening degree;
calculating the average eye opening degree of the current time period according to the eye opening degree of each frame of face image;
determining the number of face image frames in which the driver's eyes are closed, according to the eye opening degree of each frame of face image and the average eye opening degree; if the eye opening degree of a frame of face image is smaller than the average eye opening degree multiplied by a preset proportionality coefficient, the driver's eyes are judged to be closed in that frame;
counting the number of human face image frames of the eyes closed by the driver, and calculating the eye closing proportion according to the number of the human face image frames of the eyes closed by the driver and the total number of the human face image frames in the current time period; wherein, the eye closing proportion is equal to the ratio of the number of the human face image frames of the closed eyes of the driver to the total number of the human face image frames in the current time period;
and determining the state of the driver according to the comparison result of the eye closing ratio and a preset first threshold value.
6. The fatigue driving detecting method according to claim 1, wherein the determining the driver's state based on the eye opening degree and the mouth opening degree of each frame of the face image comprises:
determining the state of the driver according to the comparison result of the opening degree of the mouth of each frame of face image and a preset second threshold value; and if the mouth opening degrees of all the frame face images in the current time period are less than or equal to the preset second threshold value, determining that the driver state is the non-driving fatigue.
7. The fatigue driving detecting method according to any one of claims 1 to 6, further comprising the steps of:
generating a fatigue driving signal in response to the driver state being driving fatigue;
and carrying out fatigue driving prompt according to the fatigue driving signal.
8. A fatigue driving detection system for implementing the method of any one of claims 1-6, the system comprising:
the image acquisition unit is configured to acquire a plurality of frames of face images in the current time period, wherein the plurality of frames of face images are face images of a driver in the current time period;
the image annotation unit is configured to annotate the key points of the human faces in the plurality of frames of human face images;
the image processing unit is configured to detect the marked multiple frames of face images by using a target detection network to obtain eye data and mouth data of each frame of face image; wherein the eye data comprises a left eye detection frame, a right eye detection frame, a length and a width of the left eye detection frame and the right eye detection frame, and the mouth data comprises a mouth detection frame, a length and a width of the mouth detection frame;
the data processing unit is configured to calculate the eye opening degree and the mouth opening degree of each frame of face image according to the eye data and the mouth data of each frame of face image;
the fatigue judging unit is configured to determine the state of the driver according to the eye opening degree and the mouth opening degree of each frame of face image; wherein the driver state includes driving fatigue and non-driving fatigue.
9. The fatigue driving detection system of claim 8, comprising:
a signal generation unit configured to generate a fatigue driving signal in response to a driver state being driving fatigue;
a prompt unit configured to perform a fatigue driving prompt according to the fatigue driving signal.
10. An electronic device, comprising a processor, a memory and a communication bus, wherein the processor and the memory communicate with each other via the communication bus;
the memory is used for storing a computer program;
the processor, when executing the computer program stored in the memory, is configured to perform the method steps of any of claims 1-7.
Application CN201910638379.1A, priority date 2019-07-16, filed 2019-07-16: Fatigue driving detection method and system and electronic equipment (published as CN112241645A, status pending)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910638379.1A | 2019-07-16 | 2019-07-16 | Fatigue driving detection method and system and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910638379.1A | 2019-07-16 | 2019-07-16 | Fatigue driving detection method and system and electronic equipment

Publications (1)

Publication Number | Publication Date
CN112241645A | 2021-01-19

Family

ID=74166683

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910638379.1A | Fatigue driving detection method and system and electronic equipment (CN112241645A, pending) | 2019-07-16 | 2019-07-16

Country Status (1)

Country Link
CN (1) CN112241645A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106491127A (en) * 2016-10-11 2017-03-15 广州汽车集团股份有限公司 Drive muscular strain early warning value method of testing and device and drive muscular strain prior-warning device
WO2019029195A1 (en) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Driving state monitoring method and device, driver monitoring system, and vehicle
CN107704805A (en) * 2017-09-01 2018-02-16 深圳市爱培科技术股份有限公司 method for detecting fatigue driving, drive recorder and storage device
CN108460345A (en) * 2018-02-08 2018-08-28 电子科技大学 A kind of facial fatigue detection method based on face key point location
CN108545080A (en) * 2018-03-20 2018-09-18 北京理工大学 Driver Fatigue Detection and system
CN108875642A (en) * 2018-06-21 2018-11-23 长安大学 A kind of method of the driver fatigue detection of multi-index amalgamation
CN109271875A (en) * 2018-08-24 2019-01-25 中国人民解放军火箭军工程大学 A kind of fatigue detection method based on supercilium and eye key point information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张智腾 (Zhang Zhiteng): "Driver fatigue detection based on convolutional neural networks" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology series, no. 1, page 1 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907897A (en) * 2021-02-26 2021-06-04 浙江南盾科技发展有限公司 Vehicle-mounted fatigue driving prevention reminding equipment

Similar Documents

Publication | Title
JP6784424B1 (en) Overwork detection warning system and method based on machine vision
JP4633043B2 (en) Image processing device
US9141761B2 (en) Apparatus and method for assisting user to maintain correct posture
WO2017047178A1 (en) Information processing device, information processing method, and program
CN109087485B (en) Driving reminding method and device, intelligent glasses and storage medium
CN108028957A (en) Information processor, information processing method and program
CN108076290B (en) Image processing method and mobile terminal
WO2005041579A3 (en) Method and system for processing captured image information in an interactive video display system
CN105956548A (en) Driver fatigue state detection method and device
CN108711407B (en) Display effect adjusting method, adjusting device, display equipment and storage medium
CN101390128A (en) Detecting method and detecting system for positions of face parts
CN115599219B (en) Eye protection control method, system and equipment for display screen and storage medium
CN111783687A (en) Teaching live broadcast method based on artificial intelligence
CN105976675A (en) Intelligent information exchange device and method for deaf-mute and average person
CN114155512A (en) Fatigue detection method and system based on multi-feature fusion of 3D convolutional network
CN106740581A (en) A kind of control method of mobile unit, AR devices and AR systems
KR102365162B1 (en) Video display apparatus and method for reducing sickness
Alam et al. Active vision-based attention monitoring system for non-distracted driving
US20230206093A1 (en) Music recommendation method and apparatus
CN115393830A (en) Fatigue driving detection method based on deep learning and facial features
CN112241645A (en) Fatigue driving detection method and system and electronic equipment
CN104035544B (en) The method and electronic equipment of a kind of control electronics
CN106250749A (en) A kind of virtual reality intersection control routine
CN109426342B (en) Document reading method and device based on augmented reality
CN116486383A (en) Smoking behavior recognition method, smoking detection model, device, vehicle, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination