CN113140093A - Fatigue driving detection method based on AdaBoost algorithm - Google Patents


Info

Publication number
CN113140093A
Authority
CN
China
Prior art keywords
fatigue
driver
eye
image
early warning
Legal status (an assumption, not a legal conclusion): Pending
Application number
CN202110453397.XA
Other languages
Chinese (zh)
Inventor
徐柱
李舒扬
凌莎
罗文丽
罗龙
雷静静
张清瑶
杨茜厦
唐柳
何浪
Current Assignee
Guiyang Vocational and Technical College
Original Assignee
Guiyang Vocational and Technical College
Application filed by Guiyang Vocational and Technical College
Priority claimed from application CN202110453397.XA
Publication of CN113140093A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 Alarms for ensuring the safety of persons
    • G08B 21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fatigue driving detection method based on the AdaBoost algorithm. A detection system installed at the vehicle instrument panel captures real-time video of the driver; dynamic vertical-direction stretching and a trained AdaBoost classifier locate the face and eyes. Edge detection and contour extraction are performed on the resulting eye image, and the pixels inside the contour are counted to obtain the eye-opening degree. The PERCLOS algorithm then computes the proportion of closed-eye frames and the blink frequency over a period of time, from which the driver's fatigue state is judged. The invention offers good resistance to environmental interference and good real-time performance, judges the fatigue state accurately, effectively helps avoid traffic accidents caused by fatigue driving, and improves driving safety.

Description

Fatigue driving detection method based on AdaBoost algorithm
Technical Field
The invention relates to a fatigue driving detection method based on an AdaBoost algorithm, and belongs to the technical field of automobile driving.
Background
The number of automobiles in China increases year by year, and the traffic safety situation is becoming ever more severe. Among road traffic accidents, fatigue driving, drunk driving, and speeding are the main causes; related data show that accidents caused by fatigue driving account for about 7% of all traffic accidents and about 40% of extraordinarily serious ones. Research on driver fatigue detection technology therefore has important practical significance for safeguarding the lives and property of drivers and passengers.
At present, fatigue driving detection methods fall mainly into three categories: detection based on electroencephalogram (EEG) features, detection based on eye-movement features, and detection based on driving-behavior features. Eye-movement-based detection is the research approach now widely adopted in China because it is direct, non-contact, consistent with the driver's physiological information, and readily accepted. Dong et al. proposed judging the degree of fatigue from the distance between the eyelids: the eye is considered closed when the eye opening falls below a set threshold. Seifoory analyzed the eye state via the iris, treating the eye as closed when no iris is detected. Wu et al. classified the eye state by template matching to obtain the driver's fatigue-state information.
These methods, however, train and test only on eyes in the normal state and do not consider the interference of lens reflections and spectacle frames with the detection algorithm when the driver wears glasses.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a fatigue driving detection method based on the AdaBoost algorithm that has good resistance to environmental interference and good real-time performance, judges the fatigue state accurately, effectively helps avoid traffic accidents caused by driver fatigue, improves driving safety, and overcomes the defects of the prior art.
The technical scheme of the invention is as follows: a fatigue driving detection method based on the AdaBoost algorithm uses a driver face image acquisition module, a driver image processing module, and a fatigue driving detection and fatigue early-warning module, and comprises the following steps:
(1) a CCD camera acquires the driver's head image in real time to obtain video frames;
(2) face positioning is performed on each acquired image, and eye detection is performed on the image frames in which a face is located; over-dark or over-bright eye images are processed by dynamic vertical-direction stretching to enhance the eye-image display effect; the eye contour is extracted with the Canny edge detection algorithm, and the eye-contour opening degree, closed-eye frame count, and blink frequency are calculated with the PERCLOS algorithm to obtain the driver's fatigue state;
(3) fatigue driving detection makes a comprehensive judgment from the eye-contour opening degree, the closed-eye frame count, and the blink frequency over a period of time;
(4) when fatigue is judged, the fatigue early-warning module issues a warning.
Furthermore, the driver face image acquisition module uses a CCD infrared camera mounted at the driver-side dashboard.
Furthermore, the image processing module mainly comprises a digital signal processor, a synchronous dynamic random access memory, and a Flash memory, and is used for preprocessing the acquired images, locating the face and eyes, and calculating the eye-opening degree.
Further, based on the AdaBoost algorithm, Haar-like features are used in the face and eye classifiers for detection and positioning.
Further, when the PERCLOS eye-opening detection algorithm detects that the driver's eye state has crossed the alarm threshold, the alarm buzzer immediately sounds.
Further, the fatigue early-warning module combines LED flashing with a buzzer alarm and issues graded warnings according to the driver's fatigue level: at the first level, the LED indicator is yellow and flashes intermittently while the buzzer sounds intermittently; at the second level, the indicator turns orange, the flashing frequency increases, and the buzzer's alarm duty cycle increases; at the third level, the indicator turns red and the buzzer sounds continuously until the fatigue warning is cleared.
Compared with the prior art, the fatigue driving detection method based on the AdaBoost algorithm has the following advantages:
1. the method has better environmental interference resistance and real-time performance, can accurately judge the fatigue state, can effectively avoid traffic accidents caused by fatigue driving of a driver, and improves the driving safety.
2. Non-contact acquisition of the driver's eye data causes the driver no discomfort.
3. Using a CCD camera with infrared light in a certain wavelength band effectively overcomes the influence of insufficient light at night on facial data acquisition.
4. The designed detection system performs fatigue detection well in an illumination-adaptive environment, with an eye-detection accuracy 3.96% higher than that of the AdaBoost detector in the OpenCV library.
5. The proposed three-level fatigue warning scheme is simple to implement and low in cost, can issue warnings of different levels according to the fatigue driving state, and is highly practicable.
6. Taking the detection and analysis of eye-opening information under different illumination intensities as the entry point, the invention proposes an overall design for an illumination-adaptive fatigue driving detection system based on the AdaBoost algorithm: a large number of driving images under different illumination and glasses-wearing conditions are added to classifier training, and dynamic vertical-direction stretching is introduced to further enhance the eye-image display effect under different illumination, improving the AdaBoost algorithm's resistance to illumination interference.
7. The image processing module mainly comprises a Digital Signal Processor (DSP), a Synchronous Dynamic Random Access Memory (SDRAM), and a Flash memory; the DSP preprocesses the acquired images, locates the face and eyes, and calculates the eye-opening degree. The DSP has multiple functional units that compute independently, high computing speed, and high programmability, meeting the overall requirements of the system; however, data in the DSP is lost when the system powers off, so the SDRAM and Flash memories store the digital signals, algorithms, and post-processed data.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
fig. 1 is a block diagram of the overall architecture of the system of the present invention.
FIG. 2 is the strong classifier screening process of the present invention.
Fig. 3 is a histogram stretched three-segment line graph of the present invention.
FIG. 4 is a profile processing process of the present invention.
Fig. 5 is a schematic diagram of the PERCLOS P80 criterion of the present invention.
FIG. 6 is a system workflow diagram of the present invention.
Fig. 7 is a schematic diagram of the detection result of human face and human eye according to the invention.
Fig. 8 is a diagram of the effect of eye edge calibration according to the present invention.
Fig. 9 is a diagram of the eye contour extraction effect of the present invention.
Fig. 10 is a schematic diagram of the low-light vertical-direction stretching test of the present invention.
Fig. 11 is a schematic diagram of human face and human eye detection and verification of the invention.
Fig. 12 is a real-time driving state of the present invention.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be understood that the preferred embodiments are illustrative of the invention only and are not limiting upon the scope of the invention.
Embodiment 1: As shown in Fig. 1, a fatigue driving detection method based on the AdaBoost algorithm uses a driver face image acquisition module, a driver image processing module, and a fatigue driving detection and fatigue early-warning module, and comprises the following steps:
(1) A CCD camera acquires the driver's head image in real time to obtain video frames. Specifically, the driver face image acquisition module uses a CCD infrared camera mounted at the driver-side dashboard.
(2) Face positioning is performed on each acquired image, and eye detection is performed on the image frames in which a face is located; over-dark or over-bright eye images are processed by dynamic vertical-direction stretching to enhance the eye-image display effect; the eye contour is extracted with the Canny edge detection algorithm, and the eye-contour opening degree, closed-eye frame count, and blink frequency are calculated with the PERCLOS algorithm to obtain the driver's fatigue state. Specifically, the image processing module mainly comprises a digital signal processor, a synchronous dynamic random access memory, and a Flash memory, and is used for preprocessing the acquired images, locating the face and eyes, and calculating the eye-opening degree; based on the AdaBoost algorithm, Haar-like features are used in the face and eye classifiers for detection and positioning.
(3) Fatigue driving detection makes a comprehensive judgment from the eye-contour opening degree, the closed-eye frame count, and the blink frequency over a period of time.
(4) When fatigue is judged, the fatigue early-warning module issues a warning. Specifically, the fatigue early-warning module combines LED flashing with a buzzer alarm and issues graded warnings according to the driver's fatigue level: at the first level, the LED indicator is yellow and flashes intermittently while the buzzer sounds intermittently; at the second level, the indicator turns orange, the flashing frequency increases, and the buzzer's alarm duty cycle increases; at the third level, the indicator turns red and the buzzer sounds continuously until the fatigue warning is cleared. When the PERCLOS eye-opening detection algorithm detects that the driver's eye state has crossed the alarm threshold, the alarm buzzer immediately sounds.
Principle
This fatigue driving detection method based on the AdaBoost algorithm develops a fatigue driving detection system study around the driver's eye-opening characteristics and proposes locating the face and eyes with the AdaBoost algorithm. The driver's video image undergoes dynamic vertical-direction stretching and a trained AdaBoost classifier to locate the face and eyes; edge detection and contour extraction are performed on the resulting eye image, and the pixels inside the contour are counted to obtain the eye-opening information; the PERCLOS algorithm then computes the closed-eye frame count and blink-frequency proportion over a period of time to obtain the driver's fatigue state.
The method selects an auto-focus color camera as the image acquisition module. It first uses Haar features to design an AdaBoost algorithm for face and eye detection and positioning, then extracts the eye contour with the Canny operator, and finally uses a discrete PERCLOS fatigue-judgment algorithm to detect and warn of the driver's fatigue state. Fatigue detection is implemented by programming with the open-source computer-vision library OpenCV, and the system's real-time performance and stability are tested. The overall structure of the system is shown in the block diagram of Fig. 1.
The fatigue early-warning system consists of an image acquisition module, an image processing module, and a fatigue early-warning module; the overall framework is shown in Fig. 1. The image acquisition module is an auto-focus color camera that captures the driver's facial image in real time. Since the distance between the driver's head and the camera varies within a limited range while the vehicle is moving, a camera with a 90° wide-angle lens, 5-megapixel resolution, and a frame rate of 15 frames/s was selected after in-vehicle testing. The camera fully meets the system's real-time requirement and transmits the acquired image information to the image processing module through video conversion and transmission. The image processing module mainly comprises a Digital Signal Processor (DSP), a Synchronous Dynamic Random Access Memory (SDRAM), and a Flash memory, and preprocesses the acquired images, locates the face and eyes, and calculates the eye-opening degree. The DSP has multiple functional units that compute independently, high computing speed, and high programmability, meeting the overall requirements of the system; however, data in the DSP is lost when the system powers off, so the SDRAM and Flash memories store the digital signals, algorithms, and post-processed data.
The fatigue early-warning module combines LED flashing with a buzzer alarm and issues graded warnings according to the driver's fatigue level: at the first level, the LED indicator is yellow and flashes intermittently while the buzzer sounds intermittently; at the second level, the indicator turns orange, the flashing frequency increases, and the buzzer's alarm duty cycle increases; at the third level, the indicator turns red and the buzzer sounds continuously until the fatigue warning is cleared. The system repeats these processes continuously, achieving real-time fatigue detection and early warning.
Human face and human eye positioning
The AdaBoost (Adaptive Boosting) algorithm has high precision and fast classification, greatly improves generalization, and is not prone to overfitting. Based on the AdaBoost algorithm, Haar-like features are used in the face and eye classifiers for detection and positioning. The AdaBoost algorithm mainly consists of two parts: selecting weak classifiers and combining the weak classifiers into a strong classifier.
Taking face detection as an example, the AdaBoost algorithm comprises the following steps:
Given a training set $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i$ is the sample description and $y_i \in \{0, 1\}$ is the sample label (1 denotes a positive example, 0 a negative example); in fatigue detection, 0 may be defined as non-face and 1 as face.
Initialization: the error weight of the $i$-th sample in cycle $t$ is initialized as $D_t(i) = 1/(2m)$ for non-face samples and $D_t(i) = 1/(2l)$ for face samples, where $m$ is the total number of non-face samples and $l$ the total number of face samples (face and non-face samples are initialized to different values). For $t = 1, 2, \ldots, T$ ($T$ is the number of cycles), the following steps are performed in a loop:
a. Weight normalization:

$$q_i = \frac{D_t(i)}{\sum_{j=1}^{n} D_t(j)} \qquad (1)$$

where $q_i$ is the normalized weight.
b. For each feature $f$, train a weak classifier $h(x, f, p, \theta)$ and compute the weighted error rate of the weak classifiers of all features:

$$\varepsilon_f = \sum_{i} q_i \left| h(x_i, f, p, \theta) - y_i \right| \qquad (2)$$

where $\theta$ and $p$ are the threshold and the polarity parameter of the weak classifier, respectively.
c. Select the best weak classifier $h_t(x)$ according to the minimum error rate $\varepsilon_t = \min_{f, p, \theta} \varepsilon_f$:

$$h_t(x) = h(x, f_t, p_t, \theta_t) \qquad (3)$$

where $f_t$, $p_t$, $\theta_t$ are the feature, polarity, and threshold of the best weak classifier, respectively.
d. Adjust the weights according to the best weak classifier:

$$D_{t+1}(i) = D_t(i)\, \beta_t^{\,1 - e_i} \qquad (4)$$

where $\beta_t = \varepsilon_t / (1 - \varepsilon_t)$ is the update factor and $e_i$ is the classification indicator: $e_i = 0$ when sample $x_i$ is correctly classified, and $e_i = 1$ when it is misclassified.
e. The final strong classifier is:

$$H(x) = \begin{cases} 1, & \displaystyle\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$

where $\alpha_t = \ln(1/\beta_t)$ are the weight coefficients.
The AdaBoost detector combines a series of strong classifiers: the strong classifiers are connected in series in order of strength to form a cascade classifier, as shown in Fig. 2, which quickly rejects a large number of non-face images and further improves detection speed.
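A minimal, self-contained sketch of the boosting loop in steps a–e, using decision stumps over scalar feature responses as stand-ins for Haar features (the stump form, the exhaustive threshold search, and the numeric guards are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def train_adaboost(X, y, T=10):
    """Boosting loop of steps a-e. X: (n, n_features) real-valued feature
    responses (stand-ins for Haar features); y: labels in {0, 1}.
    Returns a list of stumps (feature, polarity, threshold, alpha)."""
    n, n_feat = X.shape
    l, m = np.sum(y == 1), np.sum(y == 0)               # positive / negative counts
    D = np.where(y == 1, 1.0 / (2 * l), 1.0 / (2 * m))  # initial error weights
    stumps = []
    for _ in range(T):
        q = D / D.sum()                                  # a. weight normalization
        best = None
        for f in range(n_feat):                          # b. a stump per feature
            for theta in np.unique(X[:, f]):
                for p in (+1, -1):                       # polarity
                    h = (p * X[:, f] < p * theta).astype(int)
                    eps = np.sum(q * np.abs(h - y))
                    if best is None or eps < best[0]:
                        best = (eps, f, p, theta, h)
        eps, f, p, theta, h = best                       # c. best weak classifier
        eps = min(max(eps, 1e-10), 1 - 1e-10)            # guard against eps in {0, 1}
        beta = eps / (1 - eps)                           # update factor
        e = (h != y).astype(int)                         # e_i = 0 if correct, else 1
        D = q * beta ** (1 - e)                          # d. weight update
        stumps.append((f, p, theta, np.log(1 / beta)))   # alpha_t = ln(1 / beta_t)
    return stumps

def predict(stumps, X):
    """e. Strong classifier: weighted vote against half the total alpha."""
    score = sum(a * (p * X[:, f] < p * theta).astype(int)
                for f, p, theta, a in stumps)
    return (score >= 0.5 * sum(a for *_, a in stumps)).astype(int)
```

On a toy separable set, a few rounds of this loop already reproduce the labels, which is the behavior the cascade relies on per stage.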
Positive and negative samples were collected under different illumination intensities; the face classifier and the eye classifier obtained through training each have 8 stages, with a minimum per-stage detection rate of 0.95.
Dynamic stretching of human eyes
Merely adding face images captured under different illumination intensities to the AdaBoost training set does not make the detection performance meet the requirements. To better improve the stability of the detection system under different illumination, dynamic vertical-direction stretching is applied on top of the AdaBoost algorithm to process over-dark or over-bright eye images, enhancing the eye-image display effect and making eye detection and contour extraction easier for the AdaBoost algorithm.
A gray histogram is a function of gray level that describes the number of pixels at each gray level in the image (or the frequency with which they appear). The one-dimensional histogram is defined as:

$$H(r_k) = n_k, \quad k \in [0, L] \qquad (6)$$

$$P(r_k) = \frac{n_k}{N} \qquad (7)$$

where $H(r_k)$ is the discrete histogram function, $r_k$ is the $k$-th gray level of the image, $n_k$ is the number of pixels with gray value $r_k$, $N$ is the total number of pixels, $L$ is the number of gray levels, and $P(r_k)$ is the frequency with which gray level $r_k$ appears.
In image enhancement, piecewise linear transformation can be used to highlight a target or gray-scale region of interest: the full gray range is divided into several intervals, the interval corresponding to the target to be enhanced is stretched, and uninteresting gray levels are suppressed. Fig. 3 shows the principle of the three-segment vertical-direction stretching transformation, expressed mathematically as:

$$g(x, y) = \begin{cases} \dfrac{c}{a}\, f(x, y), & 0 \le f(x, y) < a \\[4pt] \dfrac{d - c}{b - a} \bigl( f(x, y) - a \bigr) + c, & a \le f(x, y) < b \\[4pt] \dfrac{M - d}{M - b} \bigl( f(x, y) - b \bigr) + d, & b \le f(x, y) \le M \end{cases} \qquad (8)$$

where $f(x, y)$ is the gray value of pixel $(x, y)$ before the stretching transformation, $g(x, y)$ is the gray value after it, $M$ is the maximum gray value of the image (i.e., $M = 255$), $a$ and $b$ are the segment-point gray values before stretching, and $c$ and $d$ are the corresponding values after stretching.
The segment-point values $a$ and $b$ of histogram stretching are usually fixed. When the image is dark overall, the histogram of the feature pixels shifts left (and vice versa), so the contrast between the feature pixels and the background pixels is stretched less and the features become harder to distinguish. To solve this problem, dynamic values of $a$ and $b$ are used. Before stretching, the average gray level $G$ of the image is computed, and the segment points $a$ and $b$ are set dynamically from its variation. Based on actual test analysis, to highlight the eye contour, $a$ and $b$ are set as:

$$a = 20 + (G - 100) \times 1.2, \qquad b = 80 + (G - 100) \times 1.45 \qquad (9)$$
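The three-segment stretch with dynamic segment points described above can be sketched as follows; the output segment points c and d and the clamping safeguards on a and b are illustrative assumptions not stated in the text:

```python
import numpy as np

def dynamic_stretch(gray, c=10, d=200, M=255):
    """Three-segment piecewise-linear stretch with segment points a, b set
    dynamically from the mean gray level G. c and d (output segment points)
    are assumed values; the clip() calls keep a < b inside [0, M]."""
    G = float(gray.mean())
    a = 20 + (G - 100) * 1.2
    b = 80 + (G - 100) * 1.45
    a = float(np.clip(a, 1, M - 2))
    b = float(np.clip(b, a + 1, M - 1))
    f = gray.astype(np.float64)
    out = np.empty_like(f)
    lo = f < a
    hi = f >= b
    mid = ~lo & ~hi
    out[lo] = (c / a) * f[lo]                              # suppress dark range
    out[mid] = (d - c) / (b - a) * (f[mid] - a) + c        # stretch the middle band
    out[hi] = (M - d) / (M - b) * (f[hi] - b) + d          # compress bright range
    return np.clip(out, 0, M).astype(np.uint8)
```

For a uniform mid-gray image (G = 100), the segment points fall back to the fixed values a = 20, b = 80, so the transform reduces to the static stretch.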
eye contour extraction
Based on the edge detection algorithm proposed by John Canny, contour extraction is performed on the preprocessed binary image using a Canny-operator contour extraction algorithm; the processing flow is shown in Fig. 4.
The Canny algorithm seeks an optimal edge detector with three goals: identify as many actual edges in the image as possible; keep the identified edges as close as possible to the actual edges; and identify each edge only once, without marking image noise as edges. Eye contour detection is implemented by calling the Canny edge detection function in OpenCV (Open Source Computer Vision).
Edge detection yields a black-and-white binary image with clear edge gradients, on which the contour is extracted and drawn, giving the outer contour features of the eye image in preparation for judging the eye state and computing its area. If a pixel is black and all 8 of its neighboring pixels are black, it is an interior point; setting all interior points to background points completes the contour extraction.
Fatigue state determination
The percentage of eyelid closure over time (PERCLOS) is a fatigue-state evaluation method proposed by Walt Wierwille; it is the proportion of a prescribed time during which the eye is fully closed. PERCLOS is currently the most effective and widely accepted evaluation criterion for fatigue state. The P80 criterion is used as the test standard; its principle is shown in Fig. 5, which depicts the full process of the eye going from fully open to fully closed and back to fully open.
Each frame of the driver is captured in real time by the camera. Let $m$ be the number of frames within a period in which the eye is more than 80% closed and $N$ the total number of frames; replacing the time percentage with the image-frame percentage, the PERCLOS value $f_p$ is:

$$f_p = \frac{m}{N} \times 100\% \qquad (10)$$
First, the number of pixels $X$ inside the extracted eye contour is obtained with the contourArea() function provided by OpenCV, and the eye-opening degree then follows from Eq. (11), taken here as the ratio of $X$ to the contour pixel count $X_{\max}$ of the fully open eye:

$$P = \frac{X}{X_{\max}} \times 100\% \qquad (11)$$
In addition, the blink frequency is the number of complete blinks per unit time; a person normally blinks 10–15 times per minute. If the blink frequency deviates far from normal, the main possible causes are: the driver is fatigued and closes the eyes for a long time; or the driver's eyes are open but dull, with attention lost. The former is detected by counting closed-eye frames per unit time with the PERCLOS method: when $f_p > 40\%$ and the continuous eye-closure time exceeds 3 s, the driver is judged to be driving while fatigued. In the latter case $f_p$ is small and the PERCLOS method cannot tell whether the driver's state is abnormal; the blink frequency is then used instead, and if it is too low, the driver's eye state is abnormal and a warning must be issued.
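The fatigue decision described above can be sketched as follows; the fp > 40% criterion and the 3 s continuous-closure criterion come from the text, while the low-blink-frequency threshold is an illustrative assumption:

```python
def perclos_decision(closed_flags, fps, blink_count, window_s=60.0):
    """Fatigue decision sketch. closed_flags: per-frame booleans (True = eye
    more than 80% closed, per the P80 criterion); fps: frame rate;
    blink_count: complete blinks observed in the window. The < 5 blinks/min
    threshold for the dull-stare case is an assumed value."""
    n = len(closed_flags)
    fp = 100.0 * sum(closed_flags) / n          # Eq. (10): closed-frame ratio
    # Longest continuous run of closed frames, converted to seconds.
    longest = run = 0
    for c in closed_flags:
        run = run + 1 if c else 0
        longest = max(longest, run)
    if fp > 40.0 and longest / fps > 3.0:
        return "fatigued"
    blinks_per_min = blink_count * 60.0 / window_s
    if blinks_per_min < 5:                      # dull stare / lost attention
        return "abnormal"
    return "normal"
```

The two branches correspond to the two abnormal cases in the text: prolonged eye closure caught by PERCLOS, and an open but dull stare caught by the blink frequency.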
Specific application process
The present invention will be described in detail below in order to make those skilled in the art better understand the technical solution of the present invention.
Fig. 1 is a general structural block diagram provided by the present invention, and fig. 6 is a working flow chart of a human eye opening fatigue detection system.
First, the CCD camera acquires head images of the driver in real time and video frames are obtained through image processing. Face positioning is performed on each acquired frame, eye detection is performed on the frames in which a face is located, over-dark or over-bright eye images are processed by dynamic vertical-direction stretching to enhance the eye-image display effect, the eye contour is extracted with the Canny edge detection algorithm, the eye-contour opening degree is calculated, and a comprehensive fatigue-driving judgment is made from the closed-eye frame count and blink frequency over a period of time. Finally, the fatigue early-warning system starts: the buzzer sounds and the warning lamp flashes.
When the driver drives the vehicle, the CCD camera acquires the driver's head image in real time and video frames are obtained through image processing; the face positioning judgment for each acquired frame is shown in fig. 7.
After eye detection is performed on the frames determined by face positioning, overly dark or bright eye images are processed by dynamic vertical-direction stretching to enhance the eye image display effect, and the eye contour is extracted by the Canny edge detection algorithm. Figs. 8 and 9 show the vertical-direction stretching tests and the edge-detection eye contour extraction of the improved AdaBoost algorithm under strong light and weak light.
A real-vehicle test was carried out to further verify the effectiveness and anti-interference capability of the designed algorithm and fatigue detection system. A driver with the habit of napping at noon was selected and tested after lunch in clear weather, during the period 13:00-15:00, with an SUV as the test vehicle on an urban road with little traffic. The camera was a Logitech C930e with a CMOS color sensor; the captured image resolution was 640 × 480 pixels at a frame rate of 15 frames/s.
The face and eye cascade classifiers based on Haar-like features were verified: during normal driving, faces in different postures were well detected under different illumination conditions, with a detection accuracy of 94.1%. The strong-light and weak-light images in fig. 10 were extracted for eye histogram stretching analysis; the comparison test results are shown in fig. 11.
As shown in fig. 11, the dynamic histogram stretching algorithm adjusts the segment point values in real time according to the brightness information of the picture, so that the gray-scale interval of the eye image is optimized, noise is effectively suppressed, and image contrast is improved. Compared with static histogram stretching, the display effect is better, making the subsequent eye contour extraction more accurate; the AdaBoost algorithm with dynamic histogram stretching therefore has stronger anti-interference capability under different illumination.
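A minimal sketch of this idea, assuming percentile-based segment points (the patent does not specify how the segment points are derived from the brightness information): gray levels between the low and high points are mapped linearly onto the full 0-255 range, and levels outside them are clipped.

```python
def dynamic_stretch(gray, low_pct=0.05, high_pct=0.95):
    """Stretch the gray levels of an eye image to the full 0-255 range.

    The segment points (low, high) are taken from the image's own
    brightness distribution rather than being fixed, which is the
    'dynamic' adjustment described above; the percentile choice is an
    illustrative assumption.

    gray -- flat list of gray-level values (0-255)
    """
    levels = sorted(gray)
    n = len(levels)
    low = levels[int(low_pct * (n - 1))]
    high = levels[int(high_pct * (n - 1))]
    span = max(high - low, 1)
    # Clip to [low, high], then map linearly to [0, 255]
    return [round(255 * (min(max(g, low), high) - low) / span) for g in gray]
```

On a dim image the low and high points both move down, so the mapping still uses the whole output range, which is why the dynamic variant keeps contrast under both strong and weak light.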
To verify the detection accuracy of the improved algorithm, a 20 s video segment was extracted from the dynamic driving process and compared with the AdaBoost algorithm supplied with the OpenCV library for statistical analysis of eye positioning; the results are shown in Table 1.
TABLE 1 illumination factor vs. Algorithm human eye location contrast analysis
[Table 1 is provided as an image in the original publication.]
As can be seen from Table 1, under working conditions where the light-source intensity changes with the driving direction and with intermittent tree shade, the improved AdaBoost overcomes the influence of illumination well and the detection accuracy is improved.
After a period of driving, the driver exhibited short-term mild fatigue, causing fp to reach the threshold value and trigger the alarm buzzer. Test results for the driver in different states are shown in fig. 12; the eye-opening information and driving state of the driver were displayed in real time during the test.
The eye contour opening degree is then calculated, and comprehensive fatigue driving judgment is made from the number of closed-eye frames and the blink frequency over a period of time. Finally, the fatigue early warning system is started: the buzzer sounds an alarm and the warning lamp flashes.
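The graded early warning described in claim 7 (three levels of LED colour, flash pattern, and buzzer behaviour) reduces to a simple lookup; the dictionary keys and action strings below are illustrative.

```python
# Graded fatigue early warning: maps a fatigue level (1-3) to the LED
# colour, flash pattern and buzzer behaviour described in claim 7.
WARNING_LEVELS = {
    1: {"led": "yellow", "flash": "intermittent", "buzzer": "intermittent"},
    2: {"led": "orange", "flash": "fast",         "buzzer": "high duty cycle"},
    3: {"led": "red",    "flash": "steady",       "buzzer": "continuous"},
}


def trigger_warning(fatigue_level):
    """Return the warning actions for a given fatigue level (1, 2 or 3)."""
    if fatigue_level not in WARNING_LEVELS:
        raise ValueError("fatigue level must be 1, 2 or 3")
    return WARNING_LEVELS[fatigue_level]
```

In an embedded implementation the returned actions would drive GPIO pins for the LED and buzzer; the lookup structure keeps the grading policy in one place.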
Finally, the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications should be covered by the claims of the present invention.

Claims (7)

1. A fatigue driving detection method based on the AdaBoost algorithm, characterized by comprising a driver face image acquisition module, a driver image processing module, and a fatigue driving detection and fatigue early warning module, the method comprising the following steps:
(1) acquiring the driver's head image in real time with the CCD camera to obtain video frames;
(2) performing face positioning judgment on each acquired frame, performing eye detection on the frames determined by face positioning, processing overly dark or bright eye images by dynamic vertical-direction stretching to enhance the eye image display effect, extracting the eye contour by the Canny edge detection algorithm, and calculating the eye contour opening degree, the number of closed-eye frames and the blink frequency by the PERCLOS algorithm to obtain the fatigue state of the driver;
(3) performing comprehensive fatigue driving judgment, via fatigue driving detection, on the eye contour opening degree, the number of closed-eye frames and the blink frequency over a period of time;
(4) when fatigue driving detection judges that the driver is in a fatigue state, sending out an early warning through the fatigue early warning module.
2. The fatigue driving detection method based on the AdaBoost algorithm according to claim 1, characterized in that: the driver face image acquisition module adopts a CCD infrared camera for image acquisition.
3. The fatigue driving detection method based on the AdaBoost algorithm according to claim 2, characterized in that: the CCD infrared camera is arranged at the position of a dashboard of a driver.
4. The fatigue driving detection method based on the AdaBoost algorithm according to claim 1, characterized in that: the image processing module mainly comprises a digital signal processor, a synchronous dynamic random access memory and a Flash memory, and is used for preprocessing the acquired image, positioning the human face and the human eyes and calculating the opening degree of the human eyes.
5. The fatigue driving detection method based on the AdaBoost algorithm according to claim 1, characterized in that: Haar-like features are used with the AdaBoost algorithm as the face and eye classifier for detection and positioning.
6. The fatigue driving detection method based on the AdaBoost algorithm according to claim 2, characterized in that: when the PERCLOS eye-opening detection algorithm detects that the driver's eye closure reaches the threshold value, the alarm buzzer immediately sounds.
7. The fatigue driving detection method based on the AdaBoost algorithm according to claim 1, characterized in that: the fatigue early warning module uses LED lamp flashing and a buzzer audio alarm simultaneously, and realizes graded early warning according to the driver's fatigue degree: at the first-level early warning, the LED indicator is yellow, flashes intermittently, and the buzzer sounds intermittently; at the second-level early warning, the indicator turns orange, the flashing frequency increases, and the buzzer alarm duty cycle increases; at the third-level early warning, the indicator turns red and the buzzer sounds continuously until the fatigue warning is cleared.
CN202110453397.XA 2021-04-26 2021-04-26 Fatigue driving detection method based on AdaBoost algorithm Pending CN113140093A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110453397.XA CN113140093A (en) 2021-04-26 2021-04-26 Fatigue driving detection method based on AdaBoost algorithm

Publications (1)

Publication Number Publication Date
CN113140093A true CN113140093A (en) 2021-07-20

Family

ID=76812074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110453397.XA Pending CN113140093A (en) 2021-04-26 2021-04-26 Fatigue driving detection method based on AdaBoost algorithm

Country Status (1)

Country Link
CN (1) CN113140093A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080126281A1 (en) * 2006-09-27 2008-05-29 Branislav Kisacanin Real-time method of determining eye closure state using off-line adaboost-over-genetic programming
CN101436314A (en) * 2007-11-14 2009-05-20 广州广电运通金融电子股份有限公司 Brake machine channel system, method and system for recognizing passing object in brake machine channel
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 A kind of fatigue driving monitoring method and system based on machine vision
CN105286802A (en) * 2015-11-30 2016-02-03 华南理工大学 Driver fatigue detection method based on video information
CN106384339A (en) * 2016-09-30 2017-02-08 防城港市港口区高创信息技术有限公司 Infrared night vision image enhancement method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113990033A (en) * 2021-09-10 2022-01-28 南京融才交通科技研究院有限公司 Vehicle traffic accident remote take-over rescue method and system based on 5G internet of vehicles
CN114821713A (en) * 2022-04-08 2022-07-29 湖南大学 Fatigue driving detection method based on Video Transformer
CN116168374A (en) * 2023-04-21 2023-05-26 南京淼瀛科技有限公司 Active safety auxiliary driving method and system
CN116168374B (en) * 2023-04-21 2023-12-12 南京淼瀛科技有限公司 Active safety auxiliary driving method and system

Similar Documents

Publication Publication Date Title
CN113140093A (en) Fatigue driving detection method based on AdaBoost algorithm
CN101593425B (en) Machine vision based fatigue driving monitoring method and system
Alshaqaqi et al. Driver drowsiness detection system
US7940962B2 (en) System and method of awareness detection
CN107292251B (en) Driver fatigue detection method and system based on human eye state
CN106682578B (en) Weak light face recognition method based on blink detection
CN104200192A (en) Driver gaze detection system
CN106250801A (en) Fatigue detection method based on face detection and human eye state recognition
CN111062292B (en) Fatigue driving detection device and method
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN112241658A (en) Fatigue driving early warning system and method based on depth camera
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN106295474B (en) Fatigue detection method, system and the server of deck officer
CN110717389A (en) Driver fatigue detection method based on generation of countermeasure and long-short term memory network
CN109977930A (en) Method for detecting fatigue driving and device
CN107563346A (en) Driver fatigue discrimination method based on eye image processing
CN109886086A (en) Pedestrian detection method based on HOG feature and Linear SVM cascade classifier
CN104992160B (en) A kind of heavy truck night front vehicles detection method
CN114220158A (en) Fatigue driving detection method based on deep learning
Rajevenceltha et al. A novel approach for drowsiness detection using local binary patterns and histogram of gradients
Panicker et al. Open-eye detection using iris–sclera pattern analysis for driver drowsiness detection
DE102014100364A1 (en) Method for determining eye-off-the-road condition for determining whether view of driver deviates from road, involves determining whether eye-offside-the-road condition exists by using road classifier based on location of driver face
Mu et al. Research on a driver fatigue detection model based on image processing
CN113343926A (en) Driver fatigue detection method based on convolutional neural network
CN114565531A (en) Image restoration method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720