CN111274963A - Fatigue driving early warning system based on image processing - Google Patents

Fatigue driving early warning system based on image processing

Info

Publication number
CN111274963A
Authority
CN
China
Prior art keywords
eye
driver
image
average gray
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010066364.5A
Other languages
Chinese (zh)
Inventor
王琦源
黄震
孙元
杨超
印茂伟
任珍文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202010066364.5A
Publication of CN111274963A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/197 - Matching; Classification
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/06 - Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of fatigue driving detection, and aims to provide a fatigue driving early warning system based on image processing.

Description

Fatigue driving early warning system based on image processing
Technical Field
The invention relates to the field of image detection, in particular to a fatigue driving early warning system based on image processing.
Background
Research on driving fatigue detection started late in China, and for a long time related research could not break through its bottlenecks, so domestic technology has lagged behind in this field. With continuous scientific and technological breakthroughs and innovation, growing economic strength and developing research capability, China is investing more and more manpower and material resources in the development of fatigue driving detection systems, and the domestic research process is approaching that of developed countries. However, domestic research in this area remains largely theoretical, and practical application is still relatively deficient.
Systems developed by researchers in the United States can detect whether a driver is in a fatigued driving state. Such a system captures the driver's eye information with an infrared camera, compares the captured information with fatigue-state data recorded by the system, judges whether the driver is driving while fatigued and, if so, reminds the driver to rest in time. In practice, however, such systems have certain shortcomings that keep them from being widely applied in daily life.
Therefore, an early warning device is needed that coarsely locates the eyes through correlation matching, so that the system can identify the eye positions, further recognize the eye state, and finally assess whether the driver is in a fatigued state.
Disclosure of Invention
The invention aims to provide a fatigue driving early warning system based on image processing. To detect the human eyes accurately, face detection first simplifies the image to a local image containing only the face; this reduces the difficulty of eye detection and effectively improves the efficiency and precision of recognition, so that reliable eye information is finally obtained.
in order to achieve the purpose, the technical scheme adopted by the invention is as follows: the fatigue driving early warning system based on image processing comprises an acquisition module, a processing module and a display module, wherein the acquisition module records an image of a driver and transmits the image to the processing module, the image contains facial features of the driver, the processing module is used for carrying out human eye positioning on the facial features in the image and acquiring eye feature data of the driver, the eye feature data are subjected to closing frequency detection to obtain a closed eye percentage, and the fatigue degree of the driver is obtained according to the closed eye percentage of the driver and is displayed through the display module.
By these technical means, the face is coarsely located, the eye region is then finely located, and whether the driver is in a fatigued driving state is judged from the eye state.
Preferably, a human eye average gray template is pre-stored in the processing module, and eye localization is realized by matching the driver's eye feature data against the template.
Preferably, the calculation of the human eye average gray level template comprises the following steps:
step 1: carrying out binarization on the face area, establishing coordinates on the face area, and setting the width of the face area as L for calculation;
step 2: searching downward from the upper edge of the face region to the 1/2 L position, and calculating the coordinate values of the face region;
step 3: determining a region D through the coarse positioning of the human eyes, the whole region of the driver's eye feature data being obtained from the region D.
Preferably, the human eye average gray template includes a left-eye average gray template and a right-eye average gray template.
Preferably, the left-eye average gray template includes an average gray template in a state where the left eye is open and an average gray template in a state where the left eye is closed, and the right-eye average gray template includes an average gray template in a state where the right eye is open and an average gray template in a state where the right eye is closed.
Preferably, the processing module further performs pupil localization, and whether the human eye is in the closed state or the open state is determined through a Hough transform algorithm during the pupil localization.
Preferably, the closing frequency detection formula is

$$P_{close} = \frac{TC_{total}}{T_{total}} \times 100\%$$

wherein $T_{total}$ is the number of all image frames in a unit time, and $TC_{total}$ is the number of image frames in the eye-closed state.
Preferably, the acquisition module is a camera.
Compared with the prior art, the invention has the beneficial effects that:
1. During detection, the image to be detected is segmented sensibly to obtain small candidate eye regions, within each of which the eyes can then be located and recognized accurately. This precise division narrows the recognition range, raises execution efficiency, and finally yields a more accurate recognition result;
2. Analyzing the eyes with the Hough transform algorithm determines not only the exact position of the eyes but also their state, from which the driver's fatigue state can be analyzed;
3. A large amount of data must be stored during real-time video tracking; taking advantage of the FPGA processor, a CF card is introduced to store the video data conveniently, and if the driver fails to react to an alarm in time and a traffic accident occurs, the stored data can serve as important evidence for analyzing the cause of the accident.
Drawings
Fig. 1 is a structural diagram of a fatigue driving warning system based on image processing;
FIG. 2 is a schematic diagram of Hough transform of a circle in an embodiment of the present invention;
FIG. 3 is a graph of percent eye closure at various fatigue states in an embodiment of the present invention;
FIG. 4 illustrates a Hough transform-based iris localization procedure for a human eye in accordance with an embodiment of the present invention;
FIG. 5 is an eye model of an embodiment of the invention;
FIG. 6 is a flowchart of an algorithm of a human eye fatigue detection system according to an embodiment of the present invention;
FIG. 7 illustrates adaptive threshold segmentation in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to figs. 1 to 7. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "counterclockwise", "clockwise", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used for convenience of description only, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be considered as limiting.
Example 1:
The fatigue driving early warning system based on image processing comprises an acquisition module, a processing module and a display module. The acquisition module records an image of the driver, which contains the driver's facial features, and transmits it to the processing module. The processing module locates the human eyes within the facial features of the image and acquires the driver's eye feature data; closing frequency detection is performed on the eye feature data to obtain an eye closing percentage, and the driver's fatigue degree is derived from this percentage and displayed through the display module.
It should be noted that a human eye average gray template is pre-stored in the processing module, and eye localization is realized by matching the driver's eye feature data against the template. The calculation of the human eye average gray template comprises the following steps:
step 1: carrying out binarization on the face area, establishing coordinates on the face area, and setting the width of the face area as L for calculation;
step 2: searching downward from the upper edge of the face region to the 1/2 L position, and calculating the coordinate values of the face region;
step 3: determining a region D through the coarse positioning of the human eyes, the whole region of the driver's eye feature data being obtained from the region D.
It is worth mentioning that the human eye average gray template includes a left-eye average gray template and a right-eye average gray template. The left-eye average gray template includes an average gray template for the left-eye open state and one for the left-eye closed state, and the right-eye average gray template likewise includes templates for the right-eye open and closed states. The processing module also performs pupil localization, and whether the human eye is in the closed state or the open state is determined through a Hough transform algorithm during the pupil localization. The closing frequency detection formula is
$$P_{close} = \frac{TC_{total}}{T_{total}} \times 100\%$$

where $T_{total}$ is the number of all image frames in a unit time, and $TC_{total}$ is the number of image frames in the eye-closed state. The acquisition module is a camera.
Example 2:
To detect the human eyes accurately, this embodiment first simplifies the image to a local image containing only the face, which reduces the difficulty of detection; referring to fig. 6, this also effectively improves the efficiency and accuracy of recognition, so the eye information finally obtained is reliable. During detection, the image to be detected is segmented sensibly to obtain small candidate eye regions, within each of which the eyes can then be located and recognized accurately. This precise division narrows the recognition range, raises execution efficiency, and finally yields a more accurate recognition result.
The specific calculation process is as follows (an illustrative sketch is given below): 1) first, binarize the face area; 2) then calculate the width L of the whole area from the coordinates of the upper and lower edges of the face area; 3) search downward from the upper edge of the face area to the 1/2 L position and calculate the coordinate values of the band; 4) acquire the whole region D in the range of Xl to Xm, that is, the whole region in which the human eyes are detected.
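For illustration, a minimal Python/OpenCV sketch of steps 1)-4) follows; Otsu's method is assumed for the binarization, which the text leaves unspecified, and the face region is assumed to arrive as a grayscale crop:

```python
import cv2


def coarse_eye_region(face_gray):
    """Coarse eye-band localization following steps 1)-4) above.

    face_gray: grayscale crop of the detected face region (uint8).
    Returns region D, the band from the upper edge of the face down
    to the 1/2 L row, in which both eyes are searched for.
    """
    # Step 1: binarize the face area (Otsu's threshold is an assumption)
    _, face_binary = cv2.threshold(face_gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 2: width L of the whole area
    L = face_gray.shape[1]
    # Step 3: search downward from the upper edge to the 1/2 L position
    lower_row = min(L // 2, face_gray.shape[0])
    # Step 4: region D, the whole region in which the eyes are detected
    region_d = face_gray[:lower_row, :]
    return region_d, face_binary
```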
First, consider the training of the left-eye open-state template, that is, the computation of gray values over the images to be detected: the gray value at each point A(i, j) of the 10 segmented sample eye images is computed, giving the average gray value at each corresponding position, and this yields the average gray template for the left-eye open state. The right-eye open-state template is trained in essentially the same way, so the average gray template for the right-eye open state is obtained by the same procedure.
The average gray level templates of the left eye and the right eye in the open state and the closed state can be obtained through the processes. Then, the accurate positioning of the human eye can be realized through the template matching principle.
The process of locating the human eyes with the template matching method is as follows (a sketch is given below). First, within the region D, detection is performed with the left-eye open-state and closed-state templates, that is, the open and closed states of the left eye are tested; if one of the two states can be determined directly, the left-eye position L is marked. Second, the region D is likewise searched with the right-eye open-state and closed-state templates; if one of the two states can be determined directly, the right-eye position R is marked.
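A hedged sketch of the template training and matching just described (NumPy/OpenCV assumed; normalized cross-correlation is an assumed choice of similarity score, since the text only speaks of correlation matching):

```python
import cv2
import numpy as np


def train_average_gray_template(samples):
    """Average the gray value at each point A(i, j) over the segmented
    sample eye images (10 per state in the text) to obtain one
    average gray template."""
    stack = np.stack([s.astype(np.float32) for s in samples])
    return stack.mean(axis=0).astype(np.uint8)


def match_eye(region_d, template):
    """Slide one average gray template over region D; return the best
    match position and score (normalized cross-correlation, assumed)."""
    scores = cv2.matchTemplate(region_d, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_pos = cv2.minMaxLoc(scores)
    return best_pos, best_score


# Usage sketch: four templates (left/right eye x open/closed); for each
# eye, the state whose template scores highest marks the position L or R.
```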
It is worth noting that in this embodiment, after the face is coarsely located, the driver's eye data are further determined by a precise pupil localization algorithm. Referring to fig. 2, precise localization of the pupil position is based on the Hough transform. The Hough transform is a commonly used image detection algorithm, widely applied in the machinery field, that can effectively identify shapes such as straight lines and circular holes in mechanical devices and accurately inspect the quality and shape of mechanical products. When the Hough transform algorithm is used to detect the pupil position, a circular region must first be identified: since a fully open pupil is essentially circular, the eye position in the image to be detected can be found fairly accurately, and the specific coordinates of the left and right pupils can then be calculated; if no circle is detected, the eye is in a closed state. Analyzing the eyes with the Hough transform algorithm thus determines not only the exact position of the eyes but also their state, from which the driver's fatigue state can be analyzed.
The Hough transform has several properties that make it effective for eye detection: it is generally insensitive to noise and remains robust when edges are discontinuous.
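In practice this circle search corresponds to a standard Hough circle transform; a sketch using OpenCV's built-in implementation is shown below (all parameter values are illustrative assumptions, not figures from the patent):

```python
import cv2


def detect_pupil(eye_gray):
    """Return the pupil circle (x, y, r) when the eye is open, or None
    when no circle is found, i.e. the eye is treated as closed."""
    blurred = cv2.medianBlur(eye_gray, 5)            # suppress noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT,
                               dp=1, minDist=eye_gray.shape[1],
                               param1=100, param2=15,
                               minRadius=3, maxRadius=15)
    if circles is None:
        return None                                   # closed eye
    x, y, r = circles[0][0]                           # strongest circle
    return int(x), int(y), int(r)
```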
Turning to the detection of the driver's eye closing frequency: a human eye is typically in one of two states, open or closed, and either state is reflected in the video frames. The proportion of closed-eye frames among all frames in unit time is calculated as follows.
$$P_{close} = \frac{TC_{total}}{T_{total}} \times 100\%$$

where $T_{total}$ is the number of all image frames in a unit time, and $TC_{total}$ is the number of image frames in the eye-closed state.
It should be noted that, referring to fig. 3, the eye closing percentage correlates closely, and usually positively, with the degree of human fatigue. Although the eye closing percentage of an alert person varies from interval to interval, it stays within a certain range overall, so a threshold value can be determined. In this way the driver's fatigue degree can be judged to a certain extent and a fatigued driver can be given an early warning.
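A minimal sketch of this frame-counting measure over a unit-time window (the 40% warning threshold is an illustrative assumption; the patent only states that a threshold can be determined):

```python
def eye_closure_percentage(frame_states):
    """frame_states holds one boolean per image frame in the unit-time
    window, True when the eyes are closed in that frame.
    Returns 100 * TCtotal / Ttotal."""
    t_total = len(frame_states)       # Ttotal: all frames
    tc_total = sum(frame_states)      # TCtotal: closed-eye frames
    return 100.0 * tc_total / t_total if t_total else 0.0


# Usage sketch with an assumed warning threshold
states = [False, False, True, True, True, False, True, False]
if eye_closure_percentage(states) > 40.0:   # assumed threshold
    print("fatigue warning")
```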
It is worth noting that in this embodiment the system design is completed on an FPGA. A camera collects the driver's head image in real time, and image processing performs fast localization and eye-state recognition and analysis; during the first minute after the system is switched on, the driver's pupil shape and blink frequency in the awake state are collected as a template, after which real-time fatigue detection and early warning are performed. The system can adapt to a certain degree of head tilt, lighting changes inside the vehicle and changes of the driver's expression, provided that the head moves no more than 15 cm left or right and no more than 5 cm forward or backward, shakes only slightly and, if occasionally turned, quickly returns to its original position with the eyes looking ahead. Under these conditions the following technical indicators are achieved: 1) 40 image frames are processed per second; 2) the face detection accuracy reaches 95%; 3) the eye recognition accuracy reaches 90%; 4) the fatigue detection accuracy reaches 85%.
It is worth mentioning that, in order to realize accurate detection of human eyes, the image is simplified into a local image only containing human faces through the detection of human faces, so that the difficulty of human eye detection is reduced, the efficiency and the accuracy of identification can be effectively improved, and reliable human eye information is finally obtained. The algorithm for accurately locating the coordinates of the human eye is shown in fig. 1.
Regarding coarse eye localization with cross-correlation template matching: for coarse localization, the template matching algorithm is suitably improved and optimized, and the eye regions are then coarsely detected with the improved algorithm. Choosing an appropriate template matching algorithm divides the image effectively and coarsely locates the eyes; this reduces the difficulty of eye detection and lays the foundation for more precise detection.
It should be noted that, referring to fig. 4, analyzing the eyes with the Hough transform algorithm determines not only the exact position of the eyes but also their state, from which the driver's fatigue state can be analyzed; it is therefore a sound way to recognize and judge the human eyes accurately.
Regarding the fatigue detection algorithm based on PERCLOS and blink frequency: after the eye window and eyeball position are located precisely, an eye model must be constructed to extract the eye fatigue parameters. The eye generally includes the inner and outer eye corners, the upper and lower eyelids, the dark iris region and the white of the eye; the standard model is shown in fig. 5, in which the dark iris region of the eye is circular. The height H between the upper and lower eyelids indicates the opening of the eye, so when detecting the degree of eye opening and closing, the eye height H is extracted first.
The eye region is obtained by region-growing binarization; the distance between the upper and lower eyelids is then extracted by vertical integral projection over the eye region (sketched below), and the eye state is judged from the height between the eyelids. The PERCLOS detection method collects adjacent face image frames with the camera, obtains segmented eye images through image processing methods such as frame differencing, binarization, integral projection and region growing, determines whether the eyes are closed or open by comparison against the calculated threshold, judges the driver's fatigue degree, and gives an early warning to a fatigued driver.
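An illustrative sketch of the projection step (the eye region is assumed to arrive already binarized with eye pixels equal to 1; the per-row noise floor is an assumed parameter):

```python
import numpy as np


def eyelid_opening(eye_binary, min_pixels=2):
    """Estimate the eye opening height H from a binarized eye region.

    The region is projected onto the vertical axis by summing each
    row; H is the span between the first and last rows that contain
    at least min_pixels eye pixels (upper and lower eyelid)."""
    profile = eye_binary.sum(axis=1)          # integral projection
    rows = np.where(profile >= min_pixels)[0]
    if rows.size == 0:
        return 0                              # eye fully closed
    return int(rows[-1] - rows[0])            # height H
```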
It is worth explaining that the hardware part is an FPGA configuration program written in the Verilog language, mainly implementing initialization, image data processing and related functions; the software part is a soft-core design built with Nios II software, mainly implementing face image preprocessing, feature extraction, decision output and related functions.
It is worth noting that fatigued driving is a phenomenon that extends over a period of time, so it is difficult to judge from a single picture whether the subject is in a fatigued driving state; only detection over a real-time sequence of video frames yields a scientific and realistic result. First, face detection algorithms are studied (a statistical detection method, a feature-based face detection algorithm and a template-matching-based face detection algorithm), simulation analysis is completed, and the best face detection algorithm is obtained through performance evaluation.
When detecting a video image, the face information must be identified first; only after the face information is determined can the eye information be identified further. During detection, the coordinates of each pixel in the image are calculated first, then threshold segmentation is applied to the eye region based on the person's facial feature information, and more accurate localization is realized on that basis.
It should be noted that, referring to fig. 7, the threshold segmentation method is an image processing technique based on region segmentation; its basic principle is to classify pixels into several classes by setting different feature thresholds. The features commonly used are the gray value or color features of the original image, or features transformed from the original gray or color values. The image is binarized against a threshold value T, determining whether each pixel is a background point or a foreground point. Within the coarsely located eye window, the gray histogram of the window is calculated and its peak value Tmax is found, and the threshold is taken as T = Tmax; finally each pixel value is compared with the threshold and the eyebrow-eye window is binarized.
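A sketch of this histogram-peak thresholding (taking the threshold directly as the peak gray level Tmax, as stated above; treating pixels darker than T as foreground is an assumption, consistent with the dark eyebrow and eye regions):

```python
import numpy as np


def binarize_eye_window(eye_gray):
    """Compute the gray histogram of the eye window, take its peak
    Tmax, set the threshold T = Tmax, and binarize: pixels darker
    than T become foreground (eyebrow/eye), the rest background."""
    hist, _ = np.histogram(eye_gray, bins=256, range=(0, 256))
    t_max = int(np.argmax(hist))   # peak gray level Tmax
    t = t_max                      # threshold T = Tmax (as stated)
    return (eye_gray < t).astype(np.uint8)
```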
It should be noted that, to obtain the eye position more accurately, a more precise localization algorithm is adopted; the specific process is as follows (a sketch of these steps is given below): 1) first, edge detection is performed within the previously obtained left-eye rectangle L and right-eye rectangle R, using the Canny operator over that range; 2) after edge detection, the image is binarized, which allows the radius of the pupil circle to be determined; 3) the Hough matrix is then initialized: an accumulator array M is created and all pupil-circle parameter entries are set to 0, that is, M(i, j, r) = 0; 4) next, circles of radius r are drawn within the left-eye and right-eye rectangular areas, the boundary positions are detected, the coordinate values of each boundary point are obtained, and the values of M(i, j, r) are accumulated according to the coordinate equation of the circle; 5) finally, the maximum value of the accumulator M is calculated, which gives the center coordinates of the pupil circle and can be represented by M(0, R).
It is worth noting that the above process realizes precise eye localization; it is the complete procedure for locating the eyes through the Hough transform algorithm and can effectively identify the position of the pupil center. The circle radius r is kept constant, which markedly reduces the complexity of pupil identification and simplifies the subsequent calculation. For the Hough transform algorithm, detecting the pupil with a fixed radius r effectively reduces the dimensionality, improves the computational efficiency of the algorithm and completes the calculation faster. If the driver's eyes remain open, a nearly circular pupil is obtained from each image frame by this algorithm; if no such near-circle is detected over a sustained period, the driver's eyes are closed.
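An illustrative sketch of the fixed-radius accumulator in steps 1)-5) (the vote loop is written out directly rather than via a library call; the Canny thresholds and the angular step are assumed values):

```python
import cv2
import numpy as np


def pupil_center_fixed_radius(eye_gray, r):
    """Locate the pupil center with a Hough transform at fixed radius r.

    Every Canny edge point votes for the candidate centers lying at
    distance r from it; the accumulator maximum is taken as the pupil
    center, and its vote count indicates how circular the pupil is
    (persistently low counts suggest a closed eye)."""
    edges = cv2.Canny(eye_gray, 50, 150)        # step 1: Canny edges
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)      # step 3: M(i, j, r) = 0
    ys, xs = np.nonzero(edges)                  # boundary point coordinates
    thetas = np.deg2rad(np.arange(0, 360, 4))
    for x, y in zip(xs, ys):                    # step 4: vote along the circle
        ci = np.round(x - r * np.cos(thetas)).astype(int)
        cj = np.round(y - r * np.sin(thetas)).astype(int)
        ok = (ci >= 0) & (ci < w) & (cj >= 0) & (cj < h)
        np.add.at(acc, (cj[ok], ci[ok]), 1)
    j, i = np.unravel_index(np.argmax(acc), acc.shape)
    return (int(i), int(j)), int(acc[j, i])     # step 5: accumulator maximum
```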
In summary, the implementation principle of the invention is as follows: the camera dynamically acquires images of the driver and transmits the image data to the system memory; the face is then coarsely located by a face detection algorithm and the eyes are coarsely located by correlation matching, so that the system can identify the eye positions, further recognize the eye state, and finally assess whether the driver is in a fatigued state.

Claims (8)

1. The fatigue driving early warning system based on image processing is characterized by comprising an acquisition module, a processing module and a display module, wherein the acquisition module is used for recording an image of a driver and transmitting the image to the processing module, the image contains facial features of the driver, the processing module is used for carrying out human eye positioning on the facial features in the image and acquiring eye feature data of the driver, the eye feature data are subjected to closing frequency detection to obtain eye closing percentage, and the fatigue degree of the driver is obtained according to the eye closing percentage of the driver and is displayed by the display module.
2. The image processing-based fatigue driving warning system according to claim 1, wherein a human eye average gray scale template is prestored in the processing module, and human eye positioning is realized by matching the eye feature data of the driver with the template.
3. The image processing-based fatigue driving warning system of claim 2, wherein the calculation of the human eye average gray scale template comprises the following steps:
step 1: carrying out binarization on the face area, establishing coordinates on the face area, and setting the width of the face area as L for calculation;
step 2: searching downward from the upper edge of the face region to the 1/2 L position, and calculating the coordinate values of the face region;
step 3: determining the region D through the coarse positioning of the human eyes, the whole region of the driver's eye feature data being obtained from the region D.
4. The image processing-based fatigue driving warning system of claim 3, wherein the human eye average gray template comprises a left-eye average gray template and a right-eye average gray template.
5. The image processing-based fatigue driving warning system of claim 4, wherein the left-eye average gray template comprises an average gray template in a left-eye open state and an average gray template in a left-eye closed state, and the right-eye average gray template comprises an average gray template in a right-eye open state and an average gray template in a right-eye closed state.
6. The image processing-based fatigue driving warning system according to claim 5, wherein the processing module further performs pupil localization, and a Hough transform algorithm in the pupil localization is used for determining whether the human eyes are in the closed state or the open state.
7. The image processing-based fatigue driving warning system of claim 1, wherein the closing frequency detection formula is

$$P_{close} = \frac{TC_{total}}{T_{total}} \times 100\%$$

where $T_{total}$ is the number of all driver image frames in a unit time, and $TC_{total}$ is the number of driver image frames in the eye-closed state.
8. The image processing-based fatigue driving warning system of claim 1, wherein the acquisition module is a camera.
CN202010066364.5A (priority date 2020-01-20, filing date 2020-01-20) Fatigue driving early warning system based on image processing, published as CN111274963A, status Pending

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010066364.5A CN111274963A (en) 2020-01-20 2020-01-20 Fatigue driving early warning system based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010066364.5A CN111274963A (en) 2020-01-20 2020-01-20 Fatigue driving early warning system based on image processing

Publications (1)

Publication Number Publication Date
CN111274963A 2020-06-12

Family

ID=71003304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010066364.5A Pending CN111274963A (en) 2020-01-20 2020-01-20 Fatigue driving early warning system based on image processing

Country Status (1)

Country Link
CN (1) CN111274963A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798019A (en) * 2023-01-06 2023-03-14 山东星科智能科技股份有限公司 Intelligent early warning method for practical training driving platform based on computer vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9198575B1 (en) * 2011-02-15 2015-12-01 Guardvant, Inc. System and method for determining a level of operator fatigue
CN106530623A (en) * 2016-12-30 2017-03-22 南京理工大学 Fatigue driving detection device and method
CN107016381A (en) * 2017-05-11 2017-08-04 南宁市正祥科技有限公司 A kind of driven fast person's fatigue detection method
CN107229922A (en) * 2017-06-12 2017-10-03 西南科技大学 A kind of fatigue driving monitoring method and device
CN109934199A (en) * 2019-03-22 2019-06-25 扬州大学 A kind of Driver Fatigue Detection based on computer vision and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9198575B1 (en) * 2011-02-15 2015-12-01 Guardvant, Inc. System and method for determining a level of operator fatigue
CN106530623A (en) * 2016-12-30 2017-03-22 南京理工大学 Fatigue driving detection device and method
CN107016381A (en) * 2017-05-11 2017-08-04 南宁市正祥科技有限公司 A kind of driven fast person's fatigue detection method
CN107229922A (en) * 2017-06-12 2017-10-03 西南科技大学 A kind of fatigue driving monitoring method and device
CN109934199A (en) * 2019-03-22 2019-06-25 扬州大学 A kind of Driver Fatigue Detection based on computer vision and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU PENG et al.: "Human eye state detection in driving fatigue research", Progress in Biomedical Engineering in China: Proceedings of the 2007 China Biomedical Engineering Joint Academic Annual Conference (Volume I), Conference Proceedings of the Chinese Society of Biomedical Engineering *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115798019A (en) * 2023-01-06 2023-03-14 山东星科智能科技股份有限公司 Intelligent early warning method for practical training driving platform based on computer vision
CN115798019B (en) * 2023-01-06 2023-04-28 山东星科智能科技股份有限公司 Computer vision-based intelligent early warning method for practical training driving platform

Similar Documents

Publication Publication Date Title
CN101593425B (en) Machine vision based fatigue driving monitoring method and system
Alioua et al. Driver’s fatigue detection based on yawning extraction
CN107292251B (en) Driver fatigue detection method and system based on human eye state
CN103310194B (en) Pedestrian based on crown pixel gradient direction in a video shoulder detection method
CN102436715B (en) Detection method for fatigue driving
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
CN101339603A (en) Method for selecting qualified iris image from video frequency stream
CN105243386B (en) Face living body judgment method and system
CN100373397C (en) Pre-processing method for iris image
CN101201893A (en) Iris recognizing preprocessing method based on grey level information
Batista A drowsiness and point of attention monitoring system for driver vigilance
CN103942539B (en) A kind of oval accurate high efficiency extraction of head part and masking method for detecting human face
EP1868138A2 (en) Method of tracking a human eye in a video image
CN101359365A (en) Iris positioning method based on Maximum between-Cluster Variance and gray scale information
CN105389554A (en) Face-identification-based living body determination method and equipment
CN106156688A (en) A kind of dynamic human face recognition methods and system
CN101739546A (en) Image cross reconstruction-based single-sample registered image face recognition method
CN101246544A (en) Iris locating method based on boundary point search and SUSAN edge detection
CN102902986A (en) Automatic gender identification system and method
CN101539991A (en) Effective image-region detection and segmentation method for iris recognition
CN103034852A (en) Specific color pedestrian detecting method in static video camera scene
CN115841651B (en) Constructor intelligent monitoring system based on computer vision and deep learning
CN103729646A (en) Eye image validity detection method
CN105631410B (en) A kind of classroom detection method based on intelligent video processing technique
CN111274963A (en) Fatigue driving early warning system based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200612