CN111062292B - Fatigue driving detection device and method - Google Patents

Fatigue driving detection device and method

Info

Publication number
CN111062292B
Authority
CN
China
Prior art keywords
fatigue
driver
module
detection
cache
Prior art date
Legal status
Expired - Fee Related
Application number
CN201911258187.4A
Other languages
Chinese (zh)
Other versions
CN111062292A (en)
Inventor
闫保中
王晨宇
王帅帅
何伟
韩旭东
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN201911258187.4A
Publication of CN111062292A
Application granted
Publication of CN111062292B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a fatigue driving detection device and method, in which a main control module is electrically connected to an image acquisition module, a storage module and an alarm module, respectively. The alarm module comprises an LED lamp and an alarm; the image acquisition module comprises a camera and 4 infrared LED light sources. The main control module outputs a control signal to the camera, and the camera collects driver images and inputs them to the main control module. The storage module stores the hash fingerprint information, the face positioning frames corresponding to the hash fingerprints, and the fatigue detection standard parameters. The main control module processes the received data to obtain fatigue feature parameters in real time, reads the fatigue detection standard parameters from the storage module, obtains the fatigue degree from the comparison of the two, and outputs a control signal to the alarm module, making the LED lamp flash and the alarm buzz. The invention can detect all-weather, achieves early warning of fatigue through distraction detection, and judges the fatigue degree comprehensively from multiple features, with good real-time performance and accuracy.

Description

Fatigue driving detection device and method
Technical Field
The invention belongs to the technical field of safe driving, and particularly relates to a fatigue driving detection device and method.
Background
As living standards improve, automobiles are becoming ever more widespread. Public attention to traffic safety, however, has not grown accordingly; in particular, the public generally lacks awareness of hidden accident causes such as fatigue driving. The consequences of fatigue driving are sometimes extremely serious, so detecting fatigue driving and giving timely early warning by technical means can effectively reduce the harm of traffic accidents.
At present, much research addresses vision-based methods, which collect images of the driver's face and detect fatigue by analyzing facial features such as the eyes and mouth. The process is far from simple: facial features are easily affected by individual differences and lighting conditions, and the pose angle of the face during driving strongly interferes with detection accuracy. Moreover, some methods cannot balance accuracy and real-time performance well: some improve accuracy without considering real-time constraints, while others detect quickly but with poor precision. The technology therefore still needs further refinement and research for practical application.
Disclosure of Invention
In view of the above prior art, the technical problem to be solved by the present invention is to provide a fatigue driving detection device and method that are both real-time and accurate and are not easily affected by illumination changes or pose angles.
To solve this technical problem, the fatigue driving detection device comprises an image acquisition module, a main control module, a storage module and an alarm module, the main control module being electrically connected to the image acquisition module, the storage module and the alarm module, respectively. The alarm module comprises an LED lamp and an alarm; the image acquisition module comprises a camera and 4 infrared LED light sources. The main control module outputs a control signal to the camera, and the camera collects driver images and inputs them to the main control module. The storage module stores the hash fingerprint information, the face positioning frames corresponding to the hash fingerprints, and the fatigue detection standard parameters. The main control module processes the received data to obtain fatigue feature parameters in real time, reads the fatigue detection standard parameters from the storage module, obtains the fatigue degree from the comparison of the two, and outputs the corresponding control signal so that the alarm module emits the set alarm signal.
A detection method adopting the fatigue driving detection device comprises the following steps:
S1: acquiring driving images of the driver with the image acquisition module, including an image of the driver while awake, from which the initial value of the eye width-height ratio is obtained as a fatigue detection standard parameter;
S2: locating the face position with the improved fast face detection algorithm;
S3: when a face is detected, obtaining the positions of the 68 face feature points with the face feature point location algorithm;
S4: extracting eye information from the face feature point positions, calculating the eye width-height ratio, the PERCLOS value (the proportion of closed-eye frames to total frames per unit time) and the blink frequency, and judging the relation between the PERCLOS value T and the given thresholds: when T ≥ T_Z, where T_Z is the severe fatigue threshold, severe fatigue is determined and the alarm module emits the set alarm signal; when T_Q < T < T_Z, where T_Q is the light fatigue threshold, S5 is performed; when T ≤ T_Q, step S6 is performed;
S5: further judging whether the blink frequency is too fast: when the blink frequency exceeds the given blink frequency threshold, normal driving is determined and step S6 is performed; otherwise light fatigue driving is determined and the alarm module emits the set alarm signal;
S6: calculating the head pose angle: when the head pose angle in the vertical direction exceeds a given angle threshold, a lowered head is determined; when this condition lasts longer than a given time threshold, severe fatigue is determined and the alarm module emits the set alarm signal; otherwise step S7 is performed;
S7: judging whether the eyes are closed from the eye width-height ratio: when the eyes are closed, distraction detection is skipped and normal driving is determined; when they are open, step S8 is performed;
S8: performing distraction detection: the horizontal gaze deflection angle θ_l and the vertical gaze deflection angle θ_v are calculated, the area in front of the driver is divided into a normal gaze region and a gaze deviation region, and the region containing the driver's gaze point is obtained from θ_l and θ_v; when the gaze point stays in the gaze deviation region longer than a given time threshold, light fatigue is determined and the alarm module emits the set alarm signal; otherwise normal driving is determined.
The invention also includes:
1. The video stream sample images in step S1 are subsequently preprocessed, including adaptive median filtering and Laplacian-based image enhancement.
2. The locating of the face position with the improved fast face detection algorithm in S2 specifically comprises:
first calculating the mean hash fingerprint of the driver image to be detected; if the cache module contains fingerprint information differing from the mean hash fingerprint by no more than 2 bits, directly outputting the face positioning frame corresponding to that fingerprint in the cache and adding 1 to the fingerprint's call count;
if the cache module contains fingerprint information differing from the mean hash fingerprint by more than 2 bits but less than 5 bits, calling the AdaBoost face detection algorithm to obtain the positioning frame while judging whether the cache capacity is full: if not full, storing the mean hash fingerprint into the cache database; if full, deleting the fingerprint with the fewest calls in the cache and then storing the current mean hash fingerprint into the cache database;
if the fingerprint information in the cache module differs from the mean hash fingerprint by 5 bits or more, calling the AdaBoost face detection algorithm to obtain the positioning frame.
3. The obtaining of the 68 feature point coordinates with the face feature point location algorithm in S3 specifically comprises:
first judging the face orientation with the HOG-SVM algorithm: different face orientations are selected, HOG features of the image to be detected are extracted to obtain HOG feature vectors, and the HOG feature vectors are used as SVM input parameters to obtain the face orientation estimation classifier;
selecting different face feature point initialization models according to the face orientation result;
for each face feature point position, extracting local binary features (LBF) with the trained random forest, obtaining the regression result with the trained linear regressor, and updating the face feature point positions until the given maximum iteration count is reached, then outputting the face feature point positions.
4. When extracting the LBF features with the trained random forest, the gray levels of two pixels near the feature point are normalized and then used as the classification feature.
5. The eye width-height ratio calculated in S4 specifically satisfies

$$\mathrm{WHR} = \frac{W/H}{\mathrm{WHR}_{\mathrm{init}}}$$

where W is the eye width, H the eye height, and WHR_init the initial value of the eye width-height ratio of the driver in the awake state;
the calculating of the PERCLOS value T, the proportion of closed-eye frames to total frames per unit time, in S4 specifically comprises:
when the WHR exceeds the given threshold, the eye is judged closed; with Z the total number of frames per unit time and z the number of closed-eye frames, T satisfies

$$T = \frac{z}{Z}$$
the calculating of the blink frequency in S4 specifically comprises:
when the WHR of the preceding frame images is no more than 3 and the WHR of the current frame image exceeds 3, one blink is judged to occur; the number of blinks in the given unit time is the blink frequency.
6. The calculation of the head pose angle in step S6 specifically comprises:
after the face feature point positions are obtained, the N two-dimensional feature point positions on the image are mapped to the corresponding N feature point positions on a standard three-dimensional face model by solving a rotation matrix and a translation matrix, and the head pose angles α, β and γ are finally computed from the rotation matrix R:

$$\alpha = \arctan\frac{R_{21}}{R_{11}},\qquad \beta = \arctan\frac{-R_{31}}{\sqrt{R_{32}^{2}+R_{33}^{2}}},\qquad \gamma = \arctan\frac{R_{32}}{R_{33}}$$

where β is the head pose angle in the vertical direction, α the angle in the horizontal direction, and γ the angle in the front-back direction.
7. The distraction detection in S8, calculating the horizontal gaze deflection angle θ_l and the vertical gaze deflection angle θ_v, specifically comprises:
for the displacement vectors from a candidate point inside the pupil to the pupil boundary points and the gradient vectors at those boundary points, the candidate point maximizing their inner products is the pupil center; after the pupil center is located, the Purkinje spot positions are searched in its vicinity, the Purkinje spots being formed by the 4 infrared LED light sources distributed around the area in front of the driver; the relative position of the pupil center and the center of the rectangle enclosed by the 4 Purkinje spots yields the horizontal deflection angle φ and the vertical deflection angle η; the gaze direction is then corrected with the head pose angles α and β to obtain the final horizontal gaze deflection angle θ_l = φ + α and vertical gaze deflection angle θ_v = η + β.
The beneficial effects of the invention are:
1. The infrared device solves the problem of low accuracy in low-light conditions such as nighttime, enabling all-weather fatigue detection, and the improved face detection algorithm determines the face position quickly and accurately;
2. The improved initialization strategy of the LBF algorithm reduces the influence of face orientation on the location of the face feature points and strengthens robustness to problems such as glasses-wearing and illumination change;
3. Driver distraction detection is incorporated into the fatigue driving detection algorithm, so the system can warn the driver before deep fatigue sets in, further safeguarding the driver;
4. The fatigue degree is judged comprehensively from multiple features; the algorithm achieves good real-time performance and accuracy, successfully balancing the two.
Drawings
Fig. 1 is a flow chart of a driver fatigue driving detection method according to an embodiment of the invention.
FIG. 2 is a flow chart of a driver face detection algorithm according to an embodiment of the invention.
FIG. 3 is a diagram illustrating various facial feature point initialization models according to an embodiment of the invention.
Fig. 4 is a flowchart of a face feature point location algorithm according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the division of the driver's gaze area according to an embodiment of the invention.
Fig. 6 is a block diagram of a hardware device for fatigue driving detection according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the detailed description.
Fig. 1 is a flowchart of a method for detecting fatigue driving of a driver according to an embodiment of the present invention, which includes the following steps:
S1: acquiring the driver image through the camera and preprocessing it.
Specifically, the camera is an infrared camera, and 4 infrared LED light sources are arranged around the front windshield of the car to capture driver images. When the system first runs, the driver is awake, and the input images are acquired to obtain the initial value of the eye width-height ratio; the video stream is then sampled to obtain the driver input images.
In a preferred embodiment, the sampled video frames are preprocessed. The preprocessing techniques include adaptive median filtering and Laplacian-based image enhancement. Adaptive median filtering replaces the central pixel with the median of all pixel gray levels in the template, and the template size can be adjusted, suppressing impulse noise while limiting distortion of the rest of the image. Laplacian image enhancement traverses the whole image and applies the Laplacian operator to the gray values of each pixel's neighborhood to obtain the enhanced pixel value.
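A minimal sketch of this preprocessing stage is given below, assuming OpenCV and grayscale input; the per-pixel adaptive median loop is an unoptimized rendering of the classical stage A/B algorithm, and the sample file name is hypothetical.

```python
import cv2
import numpy as np

def adaptive_median_filter(img, max_size=7):
    """Classical adaptive median filter: grow the window until the median
    is not an impulse, then keep or replace the center pixel."""
    off = max_size // 2
    padded = cv2.copyMakeBorder(img, off, off, off, off, cv2.BORDER_REFLECT)
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for size in range(3, max_size + 1, 2):
                half = size // 2
                win = padded[y + off - half:y + off + half + 1,
                             x + off - half:x + off + half + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                  # median is not an impulse
                    if not (zmin < img[y, x] < zmax):   # center pixel is an impulse
                        out[y, x] = zmed
                    break
            else:
                out[y, x] = zmed                        # fall back at max window size
    return out

def laplacian_enhance(img):
    """Sharpen via g = f - laplacian(f), expressed as one convolution kernel."""
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(img, -1, kernel)

frame = cv2.imread("driver.png", cv2.IMREAD_GRAYSCALE)  # hypothetical sample frame
preprocessed = laplacian_enhance(adaptive_median_filter(frame))
```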
S2: locating the face position with the improved face detection algorithm.
Specifically, as shown in Fig. 2, the mean hash fingerprint of the driver image to be detected is first calculated. The mean hash fingerprint is a string of binary digits computed from an image; it can characterize the similarity of two images. Because the driver's movements are generally small during fatigue driving detection, the similarity of adjacent frames can be judged by comparing their mean hash fingerprints, and the face positioning frame can be fetched directly from the storage module through a cache mechanism, accelerating detection.
The improved fast face detection algorithm proceeds as follows: the input image is reduced to 8 × 8, and the mean gray level of the image is computed. Each pixel's gray value is compared with the mean: values above the mean are set to 1 and values below it to 0, producing a hash fingerprint string hash_finger of 0 and 1 characters. If the cache module contains fingerprint data differing from hash_finger by at most 2 bits, the face positioning frame corresponding to that cached fingerprint is output directly; otherwise the face positioning frame is produced by the AdaBoost algorithm, and whether to store it in the cache module is decided by the fingerprint difference: if the difference is 5 bits or more, it is not stored; if the difference is greater than 2 and less than 5 bits, hash_finger and the face positioning frame are stored in the cache module. To manage cache capacity, the number of times each hash fingerprint is called is counted, and when the cache is full the fingerprint with the fewest calls and its face positioning frame are deleted.
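The cache-accelerated detection loop can be sketched as follows, assuming OpenCV's Haar/AdaBoost frontal-face cascade as the fallback detector; the cache capacity, the linear search over cached fingerprints, and the seeding of an empty cache are assumptions made for a self-contained example.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cache = {}            # 64-bit fingerprint -> [face box, call count]
CACHE_CAPACITY = 64   # assumed capacity

def mean_hash(gray):
    """8x8 mean hash: 1 where a pixel is above the image mean, else 0."""
    small = cv2.resize(gray, (8, 8), interpolation=cv2.INTER_AREA)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a, b):
    return bin(a ^ b).count("1")

def detect_face(gray):
    fp = mean_hash(gray)
    best = min(cache, key=lambda k: hamming(k, fp), default=None)
    dist = hamming(best, fp) if best is not None else 64
    if dist <= 2:                          # cache hit: reuse the positioning frame
        cache[best][1] += 1
        return cache[best][0]
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    box = tuple(faces[0])
    # Store only moderately-novel fingerprints (3-4 bits away); an empty
    # cache is seeded unconditionally -- an assumption to bootstrap it.
    if best is None or 2 < dist < 5:
        if len(cache) >= CACHE_CAPACITY:   # evict the least-called fingerprint
            del cache[min(cache, key=lambda k: cache[k][1])]
        cache[fp] = [box, 1]
    return box
```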
S3: obtaining the positions of the 68 face feature points with the improved face feature point location algorithm.
The traditional Local Binary Feature (LBF) algorithm must be given face feature points at the start to initialize its model, which continuous regression then brings ever closer to the true positions. To improve detection precision and speed, the invention optimizes the LBF algorithm with a different initialization strategy: instead of simply initializing with a standard mean face, the face orientation is first determined with the HOG-SVM algorithm, and the model is initialized with feature points matching that orientation.
Specifically, the HOG feature divides the target image into several small blocks (cells), and the gradient histograms of the individual blocks are concatenated into a vector characterizing the image. HOG has no rotation or scale invariance and is therefore fast to compute; being sensitive to shape changes, it represents the face contour well and is particularly suitable for the face-orientation classification task. After the HOG features are extracted, the HOG feature vectors feed the subsequent SVM classification, finally yielding the face orientation classifier.
In the embodiment of the invention, 5 face orientations are used for training: a frontal face without deflection and faces deflected 22° left, 45° left, 22° right and 45° right. Accordingly, 5 folders named 0, -1, -2, 1 and 2 are created in the training data set directory, and the corresponding pictures are stored in them. During SVM training, the name of the folder containing a picture serves as its classification label, and the extracted HOG feature vector serves as the SVM input parameter, yielding the face orientation estimation classifier.
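A sketch of this training procedure is shown below, assuming scikit-image for HOG extraction and scikit-learn for the SVM; the directory layout mirrors the folder names above, while the image size and HOG parameters are assumptions.

```python
from pathlib import Path
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_vector(gray):
    """Fixed-size HOG descriptor for one face image."""
    gray = cv2.resize(gray, (64, 64))
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

X, y = [], []
for label_dir in Path("train_orientation").iterdir():   # folders named -2..2
    for img_path in label_dir.glob("*.jpg"):
        gray = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
        X.append(hog_vector(gray))
        y.append(int(label_dir.name))                    # folder name = label

clf = SVC(kernel="linear").fit(np.asarray(X), np.asarray(y))
# At run time: orientation = clf.predict([hog_vector(face_roi)])[0]
```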
Referring to Fig. 3, different face feature point initialization models are selected according to the face orientation result. Then, as shown in Fig. 4, LBF features are extracted with the trained random forest, and the face feature point positions are updated with the regression results of the trained linear regressor until the maximum iteration count is reached. Specifically, the random forest is trained as follows: the training set is divided into several small samples, each picture carrying 68 feature points. 500 pixel points are generated within a circle centered on each feature point, and the differences between pixel pairs are computed. Using these differences as features, a threshold is selected to split all pictures of the current small sample into a left subtree and a right subtree, and the variance reduction before and after the split is computed; the threshold with the largest reduction is the final threshold of the current split, and its feature is the final feature at that node. Splitting continues to the maximum tree depth. After the decision tree of one feature point is built, the next small sample set is processed in the same way; in this manner several subtrees are constructed per feature point, forming a random forest. With 68 points on the face, there are 68 random forests.
To improve the classification effect, the invention does not simply use the pixel difference as the feature but the normalized value

$$\mathrm{NOL}(x,y) = \frac{x-y}{x+y}$$

where x and y are two pixel gray values and NOL(x, y) serves as the classification feature. This normalization makes the feature insensitive to illumination while leaving the computational cost essentially unchanged, and the classification effect improves markedly.
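Under the reconstructed form above, the split feature can be sketched as follows; the example pixel values and the threshold are hypothetical.

```python
def nol(x, y, eps=1e-6):
    """Illumination-insensitive split feature over two pixel gray levels."""
    return (float(x) - float(y)) / (float(x) + float(y) + eps)

# Node split during training/inference: threshold the feature to choose a branch.
goes_left = nol(137, 92) <= 0.15   # threshold chosen during training
```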
Local binary features are then extracted with the trained random forest. For each feature point of each picture, the sample necessarily falls into one leaf node of each tree; that leaf is encoded as 1 and all others as 0. This yields the binary code of one tree, and combining the binary codes of all trees in the feature point's random forest yields its LBF feature. Finally all LBF features are combined into the feature mapping Φ_t.
During regression, the linear regressor W_t is trained with the position increments as the learning target. At each cascade level, the LBF algorithm multiplies the linear regression matrix W_t by the feature mapping function Φ_t and, from the face feature point initialization model and the current feature point positions, obtains the position increment ΔS that corrects the position estimate S_t of the level, i.e. S_t = S_{t-1} + W_t Φ_t(I, S_{t-1}). Regression and iteration then continue, learning by minimizing the objective

$$\min_{W_t}\ \sum_{i}\left\lVert \Delta\hat{S}_i^{\,t} - W_t\,\Phi_t\!\left(I_i,\,S_i^{\,t-1}\right)\right\rVert_2^2 + \lambda\left\lVert W_t\right\rVert_2^2$$

As the cascade deepens, the feature points produced by regression come ever closer to the true positions.
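One cascade stage of this update rule can be sketched as follows; extract_lbf is a hypothetical helper standing in for the random-forest binary encoding, and the stage count comes from training.

```python
import numpy as np

def regress_shape(image, shape, forests, regressors, n_stages):
    """shape: (68, 2) landmark array; forests/regressors: trained per stage."""
    for t in range(n_stages):
        phi = extract_lbf(image, shape, forests[t])    # sparse binary LBF vector
        delta = (regressors[t] @ phi).reshape(68, 2)   # W_t * Phi_t(I, S_{t-1})
        shape = shape + delta                          # S_t = S_{t-1} + delta
    return shape
```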
S4: extracting eye information from the face feature point positions and calculating the head pose, both serving as parameter features for fatigue driving detection.
specifically, the present invention first calculates an eye aspect ratio WHR from the shape and edge detection of the eye feature point using the eye width W and height H.
Figure BDA0002310887210000071
WHR init Is an initial value of the eye aspect ratio when the driver is awake. Preferably, the present embodiment considers the current eye to be in a closed state when the eye is more than 80% closed, i.e. the eye is closed when the WHR is greater than 3. And counting the proportion of the number of the eye-closing frames to the total number of frames in unit time to obtain a PERCLOS value. The unit time was 30 seconds, and the light fatigue threshold was set to 0.3 and the heavy fatigue threshold was set to 0.5. If the PERCLOS value is detected to be greater than 0.5, the driver is seriously tired; if the PERCLOS value is less than 0.3, the driver is normal; if in between, the blink frequency is calculated. If the WHR of the previous frames is less than or equal to 3, and the WHR of the current frame is more than 3, 1 blink occurs. Setting a blinking frequency threshold value as 5 times/5 seconds, if the blinking frequency threshold value is larger than the threshold value, setting that the blinking frequency is too fast, which indicates that the blinking frequency is not caused by fatigue, but the PERCLOS value is increased due to rapid blinking caused by stress under special conditions (such as strong wind and strong light), and judging that the blinking frequency is normal driving and the irregular blinking frequency is mild fatigue driving.
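The eye-feature decision logic with the thresholds just stated can be sketched as follows, assuming a fixed camera frame rate; the class name and fps default are assumptions.

```python
from collections import deque

class EyeStateJudge:
    def __init__(self, fps=30):
        self.closed = deque(maxlen=30 * fps)   # 30 s of open/closed flags
        self.blinks = deque(maxlen=5 * fps)    # 5 s of blink-onset events
        self.prev_closed = False

    def update(self, whr):
        is_closed = whr > 3                                      # WHR > 3: eye closed
        self.closed.append(is_closed)
        self.blinks.append(is_closed and not self.prev_closed)  # closure onset = blink
        self.prev_closed = is_closed
        perclos = sum(self.closed) / len(self.closed)
        if perclos >= 0.5:
            return "severe fatigue"
        if perclos > 0.3:
            # fast blinking (stress, wind, glare) also inflates PERCLOS
            return "normal" if sum(self.blinks) > 5 else "light fatigue"
        return "check head pose"   # no eye fatigue: proceed to S6
```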
If the above feature parameters do not indicate fatigue, the head pose angle is calculated. After the face feature point positions are obtained, the N two-dimensional feature point positions on the image are mapped to the corresponding N feature point positions on a standard three-dimensional face model by solving a rotation matrix and a translation matrix; this step involves regularization of the feature points. Finally the head pose angles α, β and γ are computed from the rotation matrix R:

$$\alpha = \arctan\frac{R_{21}}{R_{11}},\qquad \beta = \arctan\frac{-R_{31}}{\sqrt{R_{32}^{2}+R_{33}^{2}}},\qquad \gamma = \arctan\frac{R_{32}}{R_{33}}$$

where β is the head pose angle in the vertical direction; targeting the prolonged lowered head of a fatigued driver, the head pose angle threshold is set to 20°, and if β stays above the threshold continuously for 3 seconds, severe fatigue is determined. α is the head pose angle in the horizontal direction and γ the angle in the front-back direction.
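A sketch of this step, assuming OpenCV's solvePnP supplies the rotation between the 2D landmarks and a 3D face model; the Euler extraction follows the reconstruction above (a ZYX-convention assumption), and the model points and camera matrix are inputs the caller must provide.

```python
import cv2
import numpy as np

def head_pose_angles(pts2d, pts3d, camera_matrix):
    """Solve R, t from 2D-3D correspondences, then extract Euler angles (deg)."""
    ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, camera_matrix, None)
    R, _ = cv2.Rodrigues(rvec)
    beta = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))  # pitch (vertical)
    alpha = np.degrees(np.arctan2(R[1, 0], R[0, 0]))                     # yaw (horizontal)
    gamma = np.degrees(np.arctan2(R[2, 1], R[2, 2]))                     # roll (front-back)
    return alpha, beta, gamma

# Lowered head: |beta| > 20 degrees; severe fatigue if sustained over 3 s.
```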
S5: judging the gaze direction of the human eyes from the relative positions of the pupil and the Purkinje spots formed by the infrared light sources, and performing driver distraction detection.
Fatigue deepens progressively, and distraction is often the beginning of the process. The driver's eye features and head pose may be normal while attention is distracted, which also endangers driving safety. The invention therefore uses distraction as a light fatigue indicator and performs distraction detection when step S4 detects no fatigue. Specifically, for the displacement vectors from a candidate point inside the pupil to the pupil boundary points and the gradient vectors at those boundary points, the candidate point maximizing their inner products is the pupil center. After the pupil center is located, the Purkinje spot positions are searched in its vicinity. Because the Purkinje spots are formed by the 4 infrared LED light sources distributed around the area in front of the driver, they have the property of a projection space; the relative position of the pupil center and the center of the rectangle enclosed by the 4 Purkinje spots yields the horizontal deflection angle φ and the vertical deflection angle η. The gaze direction is then corrected with the head pose angles α and β to obtain the final horizontal gaze deflection angle θ_l = φ + α and the vertical gaze deflection angle θ_v = η + β.
Referring to Fig. 5, the embodiment divides the front area into 9 parts in a 3 × 3 grid, with the angular dividing lines of the regions marked in the figure; the driver's gaze region is obtained from the angles θ_l and θ_v. During normal driving the gaze direction is almost straight ahead, so a gaze point in any region other than regions 1, 4 and 5 counts as gaze deviation. Prolonged gaze deviation is a distraction phenomenon, so the distraction detection threshold is set as follows: if the gaze point stays outside regions 1, 4 and 5 for more than 3 seconds, the driver is distracted and judged lightly fatigued.
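The gaze-zone decision can be sketched as follows; the zone numbering (row-major 1 to 9) and the angular boundaries are assumptions, since the text marks the dividing lines only in Fig. 5.

```python
import time

H_SPLITS = (-10.0, 10.0)   # assumed horizontal zone boundaries (degrees)
V_SPLITS = (-5.0, 10.0)    # assumed vertical zone boundaries (degrees)
NORMAL_ZONES = {1, 4, 5}   # zones counted as normal gaze, per the text

def gaze_zone(theta_l, theta_v):
    """Map the two gaze angles onto the 3x3 grid, zones numbered 1..9."""
    col = sum(theta_l > s for s in H_SPLITS)   # 0, 1 or 2
    row = sum(theta_v > s for s in V_SPLITS)
    return row * 3 + col + 1

_deviated_since = None

def distracted(theta_l, theta_v, now=None):
    """Light fatigue once the gaze point leaves zones 1/4/5 for over 3 s."""
    global _deviated_since
    now = time.monotonic() if now is None else now
    if gaze_zone(theta_l, theta_v) in NORMAL_ZONES:
        _deviated_since = None
        return False
    if _deviated_since is None:
        _deviated_since = now
    return now - _deviated_since > 3.0
```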
The embodiment of the invention also provides a fatigue driving detection device, shown in Fig. 6. It comprises an image acquisition module, a main control module, a storage module and an alarm module. Preferably, the image acquisition module uses an infrared CCD camera with 4 infrared LED light sources; the main control module adopts a Raspberry Pi 3B board; the storage module uses 1 GB of LPDDR2 SDRAM; the alarm module comprises an LED lamp, an alarm and a loudspeaker.
The workflow comprises image acquisition, image preprocessing, face detection, face feature point location, eye feature extraction, head pose solution, distraction detection and fatigue state judgment, with the fatigue state divided into three grades (normal driving, light fatigue and severe fatigue) according to the driver's fatigue feature parameters. The driver's current fatigue grade is judged from the detected multi-feature parameters such as the PERCLOS value, blink frequency, head pose angle and gaze deviation, so that different warning grades are applied and traffic accidents caused by fatigue driving are reduced. The above is only a preferred embodiment of the present invention and does not specifically limit its scope; although the foregoing embodiments have been described in some detail, those skilled in the art will understand that various changes in detail or structure may be made without departing from the spirit and scope of the invention as defined by the appended claims.
The specific implementation mode of the invention also comprises:
In view of the problems that fatigue driving detection technology cannot balance accuracy and real-time performance well in practical application, and that illumination changes and pose angles affect the facial features and hence the accuracy, the invention provides a fatigue driving detection method and device. The technical scheme of the invention achieves all-weather detection and strengthens robustness to problems such as glasses-wearing, face orientation and illumination change. Early warning of fatigue is achieved through distraction detection. The fatigue degree is judged comprehensively from multiple features, with good real-time performance and accuracy, successfully balancing the two.
The technical scheme for solving the problems is as follows: a fatigue driving detection method based on multiple characteristics comprises the following steps:
S1: acquiring driving images of the driver with the infrared device;
S2: locating the face position with the improved fast face detection algorithm;
S3: obtaining the 68 face feature point coordinates with the face feature point location algorithm;
S4: extracting eye information from the face feature point positions and calculating the head pose as parameter features for fatigue driving detection;
S5: judging the gaze direction of the human eyes from the relative positions of the pupils and the Purkinje spots caused by the infrared LEDs, and performing driver distraction detection;
S6: comparing the fatigue features detected in real time in steps S4 and S5 with thresholds to detect the fatigue state, the fatigue features comprising the driver's PERCLOS value, blink frequency, head pose angle and gaze region landing point.
In step S1, the infrared device uses 4 infrared LED light sources in cooperation with the camera; on the one hand they serve to collect driver images while reducing interference caused by illumination changes, and on the other hand, in step S5, the 4 infrared light sources are used to determine the driver's gaze direction. The 4 infrared LED light sources are arranged around the front windshield of the automobile.
When face detection is performed in step S2, Haar features are extracted from positive and negative face samples and fed into the AdaBoost algorithm to obtain a face detection classifier. To address the poor real-time performance of the traditional AdaBoost algorithm, an improved algorithm is proposed: a mean hash algorithm converts the face feature information into hash fingerprint information, and a cache mechanism stores the hash fingerprints of similar pictures to cut detection time. The mean hash fingerprint of the current input image is computed first; if a similar hash fingerprint exists in the cache device, the cached face positioning frame is output directly. Otherwise the AdaBoost-based face detection algorithm is used, and the current fingerprint information and face positioning frame are stored in the cache to ease the judgment of a similar picture in the next frame.
In step S3, before the LBF algorithm locates the face feature points, a step is added to raise the detection rate: the HOG-SVM algorithm determines the face orientation, a face feature point initialization model is selected according to the orientation result, and the improved LBF algorithm then performs the location, yielding the positions of the 68 points. HOG features of the face image are extracted first, the SVM face orientation classifier is applied, and different initialization models are adopted for different orientations. The LBF optimization algorithm then drives the initialization model ever closer to the accurate positions through continuous regression and iteration. The optimization consists in using, when training the LBF random forest, not the raw difference of two pixels near the feature point but their normalized gray levels as the random-forest classification feature.
The parameter features of fatigue driving detection in step S4 include the degree of eye closure, the blink frequency, the PERCLOS parameter and the head pose angle. The eye image is obtained from the face feature point positions, and the parameter evaluating the eye closure degree and blink frequency is the eye width-height ratio: a width-height-ratio threshold decides whether the eyes are closed, from which the blink frequency is counted against its threshold. The PERCLOS parameter is obtained from the proportion of eye-closure time per unit time. The pupil center is located with a gradient-based algorithm. The head pose angle is derived from the mapping of the 68 two-dimensional face feature points onto the three-dimensional standard face model.
In step S5, the pupil position is obtained with an improved gradient-based pupil center location algorithm, and the area in front of the driver is divided into 9 parts using the property of the projection space. The relative positions of the pupil and the Purkinje spots formed by the 4 infrared light sources are calculated, the gaze landing region is obtained in combination with the head pose correction angles, the gaze deviation is tallied, and whether the driver's attention is distracted is judged.
Step S6 detects fatigue from the multiple features obtained above. First the eye closure condition is judged from the eye width-height ratio, the PERCLOS value and blink frequency are updated and compared with their thresholds, and the corresponding early warning is issued for each case. If no fatigue is detected, the head pose angle is calculated to judge whether the head has been lowered for a long time; if so, severe fatigue is determined. When the above features indicate normal driving and the eyes are open, the gaze deviation decides whether attention is distracted: if distracted, light fatigue is determined; if not, driving is normal. The fatigue criterion parameters include the initial value of the driver's eye width-height ratio, the first and second PERCLOS thresholds, the blink frequency threshold, the head pose threshold and the eye gaze region distribution statistics. The fatigue level is divided into three grades, the driver's grade being judged from the different behaviors of the multiple features, with the corresponding early warning issued.
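The overall multi-feature cascade described in this step can be sketched as a single decision function, assuming the per-feature routines sketched earlier supply its inputs; the threshold values follow the text.

```python
def fatigue_level(perclos, blinks_last_5s, eyes_closed,
                  head_down_over_3s, gaze_deviated_over_3s):
    """Three-grade decision over the multi-feature parameters."""
    if perclos >= 0.5:                       # severe fatigue threshold
        return "severe fatigue"
    if 0.3 < perclos < 0.5 and blinks_last_5s <= 5:
        return "light fatigue"               # slow blinking with high PERCLOS
    # fast blinking is treated as stress, not fatigue: fall through
    if head_down_over_3s:                    # beta > 20 deg held for 3 s
        return "severe fatigue"
    if not eyes_closed and gaze_deviated_over_3s:
        return "light fatigue"               # distraction as early warning
    return "normal driving"
```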
The invention also provides a fatigue driving detection device.
A fatigue driving detection device comprises an image acquisition module, a main control module, a storage module and an alarm module, the main control module being electrically connected to the image acquisition module, the storage module and the alarm module, respectively. The image acquisition module comprises a CCD camera arranged above the instrument panel housing and 4 infrared LED light sources arranged above the instrument panel housing and around the windshield. The main control module is a Raspberry Pi board with peripheral circuits comprising a power supply module and a communication module. The alarm module comprises an LED lamp and an alarm. The storage module is an LPDDR2 SDRAM memory storing the hash fingerprint information, the face positioning frames corresponding to the hash fingerprints and the fatigue detection standard parameters. The main control module outputs a control signal to the camera; the camera collects driver images, which enter the main control module through the USB data interface, where the fatigue driving detection program executes and obtains the fatigue feature parameters in real time. The fatigue detection standard parameters are read from the storage module, the fatigue degree is obtained from the comparison of the two, and control signals are output to the alarm module to make the LED lamp flash and the alarm buzz.
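A minimal sketch of the device's main loop on the Raspberry Pi host, assuming RPi.GPIO drives the LED and alarm; the pin numbers and the run_fatigue_pipeline helper (standing in for steps S2-S8 chained together) are hypothetical.

```python
import cv2
import RPi.GPIO as GPIO

LED_PIN, ALARM_PIN = 17, 27           # assumed BCM pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup([LED_PIN, ALARM_PIN], GPIO.OUT)

cap = cv2.VideoCapture(0)             # infrared CCD camera over USB
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    state = run_fatigue_pipeline(gray)   # hypothetical: S2-S8 chained together
    alarm = state in ("light fatigue", "severe fatigue")
    GPIO.output(LED_PIN, alarm)          # drive the LED while fatigued
    GPIO.output(ALARM_PIN, alarm)        # drive the buzzer while fatigued
```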

Claims (8)

1. A fatigue driving detection device, characterized in that: it comprises an image acquisition module, a main control module, a storage module and an alarm module, the main control module being electrically connected to the image acquisition module, the storage module and the alarm module, respectively; the alarm module comprises an LED lamp and an alarm; the image acquisition module comprises a camera and 4 infrared LED light sources; the main control module outputs a control signal to the camera, and the camera collects driver images and inputs them to the main control module; the storage module stores the hash fingerprint information, the face positioning frames corresponding to the hash fingerprints and the fatigue detection standard parameters; the main control module processes the received data to obtain fatigue feature parameters in real time, reads the fatigue detection standard parameters from the storage module, obtains the fatigue degree from the comparison of the two, and outputs the corresponding control signal so that the alarm module emits the set alarm signal; the detection steps with the device comprise:
S1: acquiring driving images of the driver with the image acquisition module, including an image of the driver while awake, from which the initial value of the eye width-height ratio is obtained as a fatigue detection standard parameter;
S2: locating the face position with the improved fast face detection algorithm, specifically: first calculating the mean hash fingerprint of the driver image to be detected; if the cache module contains fingerprint information differing from the mean hash fingerprint by no more than 2 bits, directly outputting the face positioning frame corresponding to that fingerprint in the cache and adding 1 to the fingerprint's call count;
if the cache module contains fingerprint information differing from the mean hash fingerprint by more than 2 bits but less than 5 bits, calling the AdaBoost face detection algorithm to obtain the positioning frame while judging whether the cache capacity is full: if not full, storing the mean hash fingerprint into the cache database; if full, deleting the fingerprint with the fewest calls in the cache and then storing the current mean hash fingerprint into the cache database;
if the fingerprint information in the cache module differs from the mean hash fingerprint by 5 bits or more, calling the AdaBoost face detection algorithm to obtain the positioning frame;
S3: when a face is detected, obtaining the positions of the 68 face feature points with the face feature point location algorithm;
S4: extracting eye information from the face feature point positions, calculating the eye width-height ratio, the PERCLOS value (the proportion of closed-eye frames to total frames per unit time) and the blink frequency, and judging the relation between the PERCLOS value T and the given thresholds: when T ≥ T_Z, where T_Z is the severe fatigue threshold, severe fatigue is determined and the alarm module emits the set alarm signal; when T_Q < T < T_Z, where T_Q is the light fatigue threshold, S5 is performed; when T ≤ T_Q, step S6 is performed;
S5: further judging whether the blink frequency is too fast: when the blink frequency exceeds the given blink frequency threshold, normal driving is determined and step S6 is performed; otherwise light fatigue driving is determined and the alarm module emits the set alarm signal;
S6: calculating the head pose angle: when the head pose angle in the vertical direction exceeds a given angle threshold, a lowered head is determined; when this condition lasts longer than a given time threshold, severe fatigue is determined and the alarm module emits the set alarm signal; otherwise step S7 is performed;
S7: judging whether the eyes are closed from the eye width-height ratio: when the eyes are closed, distraction detection is skipped and normal driving is determined; when they are open, step S8 is performed;
S8: performing distraction detection: the horizontal gaze deflection angle θ_l and the vertical gaze deflection angle θ_v are calculated, the area in front of the driver is divided into a normal gaze region and a gaze deviation region, and the region containing the driver's gaze point is obtained from θ_l and θ_v; when the gaze point stays in the gaze deviation region longer than a given time threshold, light fatigue is determined and the alarm module emits the set alarm signal; otherwise normal driving is determined.
2. A detection method using the fatigue driving detection apparatus according to claim 1, characterized by comprising the steps of:
S1: acquiring driving images of the driver with the image acquisition module, including an image of the driver while awake, from which the initial value of the eye width-height ratio is obtained as a fatigue detection standard parameter;
S2: locating the face position with the improved fast face detection algorithm, specifically: first calculating the mean hash fingerprint of the driver image to be detected; if the cache module contains fingerprint information differing from the mean hash fingerprint by no more than 2 bits, directly outputting the face positioning frame corresponding to that fingerprint in the cache and adding 1 to the fingerprint's call count;
if the cache module contains fingerprint information differing from the mean hash fingerprint by more than 2 bits but less than 5 bits, calling the AdaBoost face detection algorithm to obtain the positioning frame while judging whether the cache capacity is full: if not full, storing the mean hash fingerprint into the cache database; if full, deleting the fingerprint with the fewest calls in the cache and then storing the current mean hash fingerprint into the cache database;
if the fingerprint information in the cache module differs from the mean hash fingerprint by 5 bits or more, calling the AdaBoost face detection algorithm to obtain the positioning frame;
S3: when a face is detected, obtaining the positions of the 68 face feature points with the face feature point location algorithm;
S4: extracting eye information from the face feature point positions, calculating the eye width-height ratio, the PERCLOS value (the proportion of closed-eye frames to total frames per unit time) and the blink frequency, and judging the relation between the PERCLOS value T and the given thresholds: when T ≥ T_Z, where T_Z is the severe fatigue threshold, severe fatigue is determined and the alarm module emits the set alarm signal; when T_Q < T < T_Z, where T_Q is the light fatigue threshold, S5 is performed; when T ≤ T_Q, step S6 is performed;
S5: further judging whether the blink frequency is too fast: when the blink frequency exceeds the given blink frequency threshold, normal driving is determined and step S6 is performed; otherwise light fatigue driving is determined and the alarm module emits the set alarm signal;
S6: calculating the head pose angle: when the head pose angle in the vertical direction exceeds a given angle threshold, a lowered head is determined; when this condition lasts longer than a given time threshold, severe fatigue is determined and the alarm module emits the set alarm signal; otherwise step S7 is performed;
S7: judging whether the eyes are closed from the eye width-height ratio: when the eyes are closed, distraction detection is skipped and normal driving is determined; when they are open, step S8 is performed;
S8: performing distraction detection: the horizontal gaze deflection angle θ_l and the vertical gaze deflection angle θ_v are calculated, the area in front of the driver is divided into a normal gaze region and a gaze deviation region, and the region containing the driver's gaze point is obtained from θ_l and θ_v; when the gaze point stays in the gaze deviation region longer than a given time threshold, light fatigue is determined and the alarm module emits the set alarm signal; otherwise normal driving is determined.
3. A detection method according to claim 2, using the fatigue driving detection apparatus according to claim 1, characterized in that: the driver driving images in S1 are preprocessed, including adaptive median filtering and Laplacian-based image enhancement.
4. A detection method according to claim 2, using the fatigue driving detection apparatus according to claim 1, characterized in that: the obtaining of the 68 feature point coordinates with the face feature point location algorithm described in S3 specifically comprises:
first judging the face orientation with the HOG-SVM algorithm: different face orientations are selected, HOG features of the image to be detected are extracted to obtain HOG feature vectors, and the HOG feature vectors are used as SVM input parameters to obtain the face orientation estimation classifier;
selecting different face feature point initialization models according to the face orientation result;
for each face feature point position, extracting local binary features (LBF) with the trained random forest, obtaining the regression result with the trained linear regressor, and updating the face feature point positions until the given maximum iteration count is reached, then outputting the face feature point positions.
5. A detection method according to claim 4, using the fatigue driving detection apparatus according to claim 1, characterized in that: when extracting the LBF features with the trained random forest, the gray levels of two pixels near the feature point are normalized and then used as the classification feature.
6. A detection method according to claim 2, using the fatigue driving detection apparatus according to claim 1, characterized in that: the calculating of the eye width-height ratio in S4 specifically comprises:
the eye width-height ratio WHR satisfies

$$\mathrm{WHR} = \frac{W/H}{\mathrm{WHR}_{\mathrm{init}}}$$

where W is the eye width, H the eye height, and WHR_init the initial value of the eye width-height ratio of the driver in the awake state;
the calculating of the PERCLOS value T, the proportion of closed-eye frames to total frames per unit time, in S4 specifically comprises:
when the WHR exceeds the given threshold, the eye is judged closed; with Z the total number of frames per unit time and z the number of closed-eye frames, T satisfies

$$T = \frac{z}{Z}$$
the calculating of the blink frequency in S4 specifically comprises:
when the WHR of the preceding frame images is no more than 3 and the WHR of the current frame image exceeds 3, one blink is judged to occur; the number of blinks in the given unit time is the blink frequency.
7. A detection method according to claim 2, using the fatigue driving detection apparatus according to claim 1, characterized in that: the calculating of the head pose angle in step S6 specifically comprises:
after the face feature point positions are obtained, the N two-dimensional feature point positions on the image are mapped to the corresponding N feature point positions on the standard three-dimensional face model by solving a rotation matrix and a translation matrix, and the head pose angles α, β and γ are finally computed from the rotation matrix R:

$$\alpha = \arctan\frac{R_{21}}{R_{11}},\qquad \beta = \arctan\frac{-R_{31}}{\sqrt{R_{32}^{2}+R_{33}^{2}}},\qquad \gamma = \arctan\frac{R_{32}}{R_{33}}$$

where β is the head pose angle in the vertical direction, α the angle in the horizontal direction, and γ the angle in the front-back direction.
8. A detection method according to claim 2, using the fatigue driving detection apparatus according to claim 1, characterized in that: the distraction detection in S8, calculating the horizontal gaze deflection angle θ_l and the vertical gaze deflection angle θ_v, specifically comprises:
for the displacement vectors from a candidate point inside the pupil to the pupil boundary points and the gradient vectors at those boundary points, the candidate point maximizing their inner products is the pupil center; after the pupil center is located, the Purkinje spot positions are searched in its vicinity, the Purkinje spots being formed by the 4 infrared LED light sources distributed around the area in front of the driver; the relative position of the pupil center and the center of the rectangle enclosed by the 4 Purkinje spots yields the horizontal deflection angle φ and the vertical deflection angle η; the gaze direction is then corrected with the head pose angles α and β to obtain the final horizontal gaze deflection angle θ_l = φ + α and vertical gaze deflection angle θ_v = η + β.
CN201911258187.4A 2019-12-10 2019-12-10 Fatigue driving detection device and method Expired - Fee Related CN111062292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911258187.4A CN111062292B (en) 2019-12-10 2019-12-10 Fatigue driving detection device and method


Publications (2)

Publication Number Publication Date
CN111062292A CN111062292A (en) 2020-04-24
CN111062292B (en) 2022-07-29

Family

Family ID: 70300366

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911258187.4A Expired - Fee Related CN111062292B (en) 2019-12-10 2019-12-10 Fatigue driving detection device and method

Country Status (1)

Country Link
CN (1) CN111062292B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583585B (en) * 2020-05-26 2021-12-31 苏州智华汽车电子有限公司 Information fusion fatigue driving early warning method, system, device and medium
CN111845736A (en) * 2020-06-16 2020-10-30 江苏大学 Vehicle collision early warning system triggered by distraction monitoring and control method
CN112163470A (en) * 2020-09-11 2021-01-01 高新兴科技集团股份有限公司 Fatigue state identification method, system and storage medium based on deep learning
CN112733772B (en) * 2021-01-18 2024-01-09 浙江大学 Method and system for detecting real-time cognitive load and fatigue degree in warehouse picking task
CN113034851A (en) * 2021-03-11 2021-06-25 中铁工程装备集团有限公司 Tunnel boring machine driver fatigue driving monitoring device and method
CN113569785A (en) * 2021-08-04 2021-10-29 上海汽车集团股份有限公司 Driving state sensing method and device
CN115272645A (en) * 2022-09-29 2022-11-01 北京鹰瞳科技发展股份有限公司 Multi-mode data acquisition equipment and method for training central fatigue detection model
CN116012822B (en) * 2022-12-26 2024-01-30 无锡车联天下信息技术有限公司 Fatigue driving identification method and device and electronic equipment


Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US8655029B2 (en) * 2012-04-10 2014-02-18 Seiko Epson Corporation Hash-based face recognition system
CN108423006A (en) * 2018-02-02 2018-08-21 辽宁友邦网络科技有限公司 A kind of auxiliary driving warning method and system
CN109254654B (en) * 2018-08-20 2022-02-01 杭州电子科技大学 Driving fatigue feature extraction method combining PCA and PCANet

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN106530623A (en) * 2016-12-30 2017-03-22 南京理工大学 Fatigue driving detection device and method
CN109325964A (en) * 2018-08-17 2019-02-12 深圳市中电数通智慧安全科技股份有限公司 A kind of face tracking methods, device and terminal
CN109299709A (en) * 2018-12-04 2019-02-01 中山大学 Data recommendation method, device, server end and client based on recognition of face
CN110516734A (en) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 A kind of image matching method, device, equipment and storage medium

Non-Patent Citations (2)

Title
"A Bayesian Hashing approach and its application to face recognition";Qi Dai 等;《Neurocomputing》;20161112;第213卷;第5-13页 *
"基于ZYNQ的优化Adaboost人脸检测";高树静 等;《计算机工程与应用》;20190517;第201-206页 *

Also Published As

Publication number Publication date
CN111062292A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN111062292B (en) Fatigue driving detection device and method
CN103714660B (en) System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
WO2021016873A1 (en) Cascaded neural network-based attention detection method, computer device, and computer-readable storage medium
CN108614999B (en) Eye opening and closing state detection method based on deep learning
CN106682578B (en) Weak light face recognition method based on blink detection
CN111582086A (en) Fatigue driving identification method and system based on multiple characteristics
CN109359603A (en) A vehicle driver face detection method based on cascaded convolutional neural networks
CN104200192A (en) Driver gaze detection system
CN104008364B (en) Face identification method
CN109886086B (en) Pedestrian detection method based on HOG (histogram of oriented gradient) features and linear SVM (support vector machine) cascade classifier
CN111158457A (en) Vehicle-mounted HUD (head Up display) human-computer interaction system based on gesture recognition
Yuen et al. On looking at faces in an automobile: Issues, algorithms and evaluation on naturalistic driving dataset
CN111460950A (en) Cognitive distraction method based on head-eye evidence fusion in natural driving conversation behavior
CN103544478A (en) All-dimensional face detection method and system
CN114663985A (en) Face silence living body detection method and device, readable storage medium and equipment
CN108256378A (en) Driver Fatigue Detection based on eyeball action recognition
Panicker et al. Open-eye detection using iris–sclera pattern analysis for driver drowsiness detection
CN108596064A (en) Method for detecting a driver's head-lowering mobile phone operation behavior based on multi-information acquisition
CN106295458A (en) Eyeball detection method based on image procossing
Pandey et al. Dumodds: Dual modeling approach for drowsiness detection based on spatial and spatio-temporal features
CN113920591A (en) Medium-distance and long-distance identity authentication method and device based on multi-mode biological feature recognition
Long et al. Near infrared face image quality assessment system of video sequences
CN110232300A (en) A kind of lane vehicle lane-changing intention recognition method and system
CN110570469B (en) Intelligent identification method for angle position of automobile picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220729