CN108053615B - Method for detecting fatigue driving state of driver based on micro-expression - Google Patents

Method for detecting fatigue driving state of driver based on micro-expression

Info

Publication number
CN108053615B
CN108053615B (application CN201810022165.7A)
Authority
CN
China
Prior art keywords
driver
micro
expression
image
texture
Prior art date
Legal status
Active
Application number
CN201810022165.7A
Other languages
Chinese (zh)
Other versions
CN108053615A (en)
Inventor
杨立才 (Yang Licai)
王悦 (Wang Yue)
边军 (Bian Jun)
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201810022165.7A
Publication of CN108053615A
Application granted
Publication of CN108053615B
Legal status: Active

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 - Alarms for ensuring the safety of persons
    • G08B 21/06 - Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition

Abstract

The invention discloses a method for detecting the fatigue driving state of a driver based on micro-expressions, comprising the following steps: acquire a video of the driver's driving state with a high-speed in-vehicle infrared camera and obtain facial image information from the video; preprocess the image information and extract features to detect the driver's micro-expressions during driving; recognize the collected micro-expressions, monitor the driver's fatigue state on that basis, and give early warning when the driver is fatigued or shows a tendency toward driving fatigue. The method realizes early-warning recognition of driving fatigue, helping to avoid road traffic accidents caused by slowed reactions, lengthened reaction times, and even momentary sleep during which the driver loses control of the vehicle.

Description

Method for detecting fatigue driving state of driver based on micro-expression
Technical Field
The invention belongs to the field of driving safety protection, and particularly relates to a method for detecting fatigue driving state of a driver based on micro-expression.
Background
With continuing social and economic progress, living standards and the demand for transport have risen steadily, and the number of motor vehicles on the road increases year by year. Road traffic safety problems in China have accordingly become prominent: traffic accidents occur from time to time and pose hidden dangers to public safety. In recent years the number of serious road traffic accidents in China has been rising, gravely threatening people's lives and property. Among the many causes, accidents due to driver fatigue account for about 20% of the total, and among extraordinarily serious accidents the proportion attributable to fatigue driving is as high as 40%; the danger of fatigue driving is evident. In a fatigued state a driver responds slowly to sudden road conditions, reaction time lengthens, and over-reactions and erroneous reactions occur easily; when fatigue is severe, the driver may fall into momentary sleep and lose control of the vehicle. Driving fatigue arises readily during long-duration driving, night driving, and driving without sufficient rest.
Existing fatigue-driving detection methods fall into three main categories: methods based on steering behavior and vehicle running parameters, methods based on physiological signals, and methods based on facial images. The first category typically measures steering-wheel angle, vehicle trajectory, grip pressure on the steering wheel, and so on; it interferes little with the driver's operation and the measurements are direct, but it lacks a concrete threshold standard for judging fatigue. The second category mainly measures the driver's electrocardiogram, pulse, and electroencephalogram signals; recognition is more accurate, but the sensors interfere with driving to some extent and the signals are difficult to acquire. The third category acquires the driver's facial information with a high-speed camera and detects the fatigue state without contacting the driver's body; it has little effect on driving operation, is more readily accepted by drivers, and achieves high detection accuracy.
By analyzing the basic structure and muscular characteristics of the face, the information a face conveys and an individual's micro-expressions can be determined. Micro-expressions are rapid facial expressions lasting from 1/25 s to 1/3 s. They are often embedded in a normal expression sequence, flash by, and are hard to perceive. Besides such transient expressions, micro-expressions also include expressions that are deliberately suppressed. Because of this self-restraint, micro-expressions generally appear in an inconspicuous or momentary way; fig. 2 shows an example of a micro-expression appearing within a normal expression sequence. Micro-expressions have been applied successfully in psychological research fields such as lie detection and depression, but no literature or patent at home or abroad has reported their use for fatigue-driving state detection.
Fatigue develops gradually, deepening from shallow to deep. A driver readily becomes fatigued when driving for a long time, but the fatigue state is suppressed to some extent by the combined action of reason and the subconscious. In the shallow-fatigue state, facial micro-expressions already indicate the onset of fatigue; their features include reduced eye opening, drooping eyelids, dull dilated pupils, drooping outer eyebrow corners, and mouth corners that droop and contract slightly inward. These shallow-fatigue micro-expressions vanish quickly and the face returns to its normal state. Only when fatigue accumulates to a certain degree and the driver enters the deep-fatigue state does the face show obvious fatigue characteristics. In deep fatigue the driver's control of the vehicle weakens, responses to emergencies slow, reaction time lengthens, and momentary sleep may occur with loss of control of the vehicle. Existing fatigue-driving detection methods mainly detect these obvious characteristics or the momentary-sleep state and then alert the driver; such an alert is a passive, after-the-fact warning.
In fact, before the human body enters deep fatigue, its micro-expressions already indicate the onset of the fatigue state. Using modern techniques such as micro-expression analysis and image processing to detect the fatigue driving state before deep driving fatigue occurs, that is, while the degree of fatigue is still shallow, and developing a driver fatigue monitoring and early-warning device on that basis, realizes early-warning recognition of driving fatigue, can effectively reduce traffic accidents caused by fatigue driving, and has important social significance and application value.
Disclosure of Invention
The invention aims to provide a method for detecting the fatigue driving state of a driver based on micro-expressions. Using micro-expression analysis, image processing, and related techniques, the method gives early warning of the driver's fatigue state, reduces the occurrence of fatigue driving, and helps avoid traffic accidents.
In order to achieve the purpose, the invention adopts the following technical scheme:
the method for detecting the fatigue driving state of the driver based on the micro expression comprises the following steps:
Step (1): collect facial expression images of the driver: capture a video of the driver's facial expressions during driving with a high-speed infrared camera mounted on the automobile rear-view mirror, and obtain facial expression images from the video;
Step (2): image preprocessing: convert each facial expression image into a gray image and apply histogram equalization to the gray image;
Step (3): locate the face region: locate the eye, mouth, and eyebrow regions, segment and extract the face-region image, and normalize the size of the extracted image;
Step (4): extract the texture features of the driver's eye, mouth, and eyebrow regions, and fuse the three sets of texture features into a facial texture feature;
Step (5): using the facial texture features of fatigue-state micro-expressions in the facial-texture micro-expression library, classify the driver's current facial texture feature with a minimum-distance classifier so as to recognize the driver's micro-expression and judge whether the driver has entered the shallow-fatigue state. If so, further judge whether the number of times the driver's micro-expression is detected as shallow fatigue within a set time range exceeds a set threshold; if it does, the driver tends toward the deep-fatigue state, and an early warning of this tendency is issued.
Further, in step (2) the facial expression image is converted into a gray image by a linear transformation.
Further, in step (3) the Adaboost-Haar algorithm is used to locate the eye regions and determine the eye centroids. Let O be the midpoint of the line joining the two eye centroids and d the distance between them; taking O as reference, a rectangular region extending d to the left and right horizontally, 1.5d downward, and 0.55d upward is cut out. Because the cut images differ in size, they must be normalized; the invention uses bilinear interpolation to resize each image to 128 × 128.
Further, in step (4) the texture features are extracted with the method based on the gray-level co-occurrence matrix.
Further, step (5) also includes establishing the facial-texture micro-expression library: collect face images in the normal and fatigue states through a fatigue experiment, segment the face-region images, locate the eye, eyebrow, and mouth regions, extract the texture features of each of the three regions, fuse them into facial texture features, and record the facial texture features of the normal-state and fatigue-state micro-expressions to construct the library;
further, in the step (5): if the distance between the current facial texture feature and any fatigue state micro-surface texture feature in the facial texture micro-expression library is larger than a set threshold value, indicating that the driver does not enter a shallow fatigue state; otherwise, indicating that the driver enters a shallow fatigue state and giving an early warning prompt.
Positioning human eyes by using an Adaboost-Haar algorithm, wherein the Adaboost algorithm is as follows:
Let the input data set be $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x_i$ is the sample data and $y_i$ the sample label. Initialize the sample weights $\omega_i$; with $m$ negative samples and $l$ positive samples, a negative sample ($y_i = 0$) receives $\omega_i = 1/m$ and a positive sample ($y_i = 1$) receives $\omega_i = 1/l$.
Set the number of learning cycles to T; for t = 1, 2, ..., T, perform the following steps:
Step (31): weight normalization:

$$\omega_{t,i} \leftarrow \frac{\omega_{t,i}}{\sum_{j=1}^{n} \omega_{t,j}}$$
Step (32): for each feature j, train a weak classifier $h_j$ and compute its weighted error rate over all samples:

$$\varepsilon_j = \sum_i \omega_i \left| h_j(x_i) - y_i \right|$$
Step (33): among the weak classifiers determined in step (32), select the weak classifier $h_t$ with the minimum error $\varepsilon_t$, and update the weight corresponding to each sample:

$$\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1-\eta_i}$$

where $\eta_i = 0$ if sample $x_i$ is correctly classified and $\eta_i = 1$ otherwise, and

$$\beta_t = \frac{\varepsilon_t}{1-\varepsilon_t}$$
The final strong classifier is formed as follows:

$$H(x) = \begin{cases} 1, & \displaystyle\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\[2mm] 0, & \text{otherwise} \end{cases}$$

where

$$\alpha_t = \log \frac{1}{\beta_t}$$
A feature set is constructed from the positive and negative samples. If a weak classifier classifies a sample correctly, the sample's weight is decreased; if it misclassifies the sample, the weight is increased. The algorithm thus strengthens training on the misclassified samples, and finally all the weak classifiers together form the strong classifier.
Calculating a Haar-like feature value:
The feature value is the sum of all pixel values inside the black rectangles of the rectangular template minus the sum of all pixel values inside the white rectangles. Haar-like features extract the texture features of an image effectively; by translation and scaling, each template yields feature values at different positions and scales. The number of Haar-like features is enormous: for a given W × H image, the number of one upright rectangular feature is:

$$XY\left(W + 1 - w\,\frac{X+1}{2}\right)\left(H + 1 - h\,\frac{Y+1}{2}\right)$$
where w and h are the dimensions of the feature template, and X and Y are the maximum factors by which the template can be scaled in the horizontal and vertical directions:

$$X = \left\lfloor \frac{W}{w} \right\rfloor, \qquad Y = \left\lfloor \frac{H}{h} \right\rfloor$$
For the 45° (tilted) features, with $z = w + h$, $X = \lfloor W/z \rfloor$, and $Y = \lfloor H/z \rfloor$, the number of features is:

$$XY\left(W + 1 - z\,\frac{X+1}{2}\right)\left(H + 1 - z\,\frac{Y+1}{2}\right)$$
further, the step (4) comprises the following steps:
Take an arbitrary point (x, y) in the image and another point (x + a, y + b) offset from it; the two points form a point pair whose gray values are $(g_1, g_2)$. If the image has k gray levels, there are $k^2$ possible combinations of $(g_1, g_2)$.
Count the number of occurrences of each combination $(g_1, g_2)$ to form a matrix, then normalize by the total number of point pairs to obtain the occurrence probabilities $\rho(g_1, g_2)$; the resulting matrix is the gray-level co-occurrence matrix.
Different values of the offset (a, b) yield joint probability matrices for different directions:
when a = 1, b = 0, the pixel pair is horizontal, i.e. a 0° scan;
when a = 0, b = 1, the pixel pair is vertical, i.e. a 90° scan;
when a = 1, b = 1, the pixel pair lies on the right diagonal, i.e. a 45° scan;
when a = -1, b = 1, the pixel pair lies on the left diagonal, i.e. a 135° scan.
In this way the spatial gray-level distribution over (x, y) is converted into the co-occurrence probabilities $\rho(g_1, g_2)$, which form the gray-level co-occurrence matrix.
Normalizing the gray level co-occurrence matrix:
Figure GDA0002777810880000051
Texture feature extraction computes statistical feature values from the gray-level co-occurrence matrix of the image:

Texture energy:

$$Q_1 = \sum_{g_1} \sum_{g_2} \rho(g_1, g_2)^2$$

Texture inertia:

$$Q_2 = \sum_{g_1} \sum_{g_2} (g_1 - g_2)^2 \, \rho(g_1, g_2)$$

Texture correlation:

$$Q_3 = \frac{\sum_{g_1} \sum_{g_2} g_1 g_2 \, \rho(g_1, g_2) - \mu_1 \mu_2}{\sigma_1 \sigma_2}$$

Texture entropy:

$$Q_4 = -\sum_{g_1} \sum_{g_2} \rho(g_1, g_2) \log \rho(g_1, g_2)$$

where

$$\mu_1 = \sum_{g_1} g_1 \sum_{g_2} \rho(g_1, g_2), \qquad \mu_2 = \sum_{g_2} g_2 \sum_{g_1} \rho(g_1, g_2)$$

$$\sigma_1^2 = \sum_{g_1} (g_1 - \mu_1)^2 \sum_{g_2} \rho(g_1, g_2), \qquad \sigma_2^2 = \sum_{g_2} (g_2 - \mu_2)^2 \sum_{g_1} \rho(g_1, g_2)$$
Gray-level co-occurrence matrices are established in the four directions, and $Q_1$, $Q_2$, $Q_3$, $Q_4$ are extracted from the matrix in each direction, so each texture yields a 16-dimensional feature vector.
Further, the step (5) comprises the following steps:
Minimum-distance classification assigns a point to the class at the smallest distance, the distance between the point to be classified and each class being defined in advance. The minimum-distance classifier is expressed as follows: let the data have M bands and let the N classes be represented by standard samples $W_1, W_2, \ldots, W_N$. According to the minimum-distance principle, the distance from a point P to be classified to class i is defined as:

$$d(P, W_i) = \sqrt{\sum_{k=1}^{M} (p_k - w_{ik})^2}$$

Let the class-i training sample set be $\{x_{jk}\}$ with $N_i$ samples; the standard sample is taken as the center of the class's training samples:

$$w_{ik} = \frac{1}{N_i} \sum_{j=1}^{N_i} x_{jk}$$

The classification criterion is:

$$P \in \text{class } i \iff d(P, W_i) = \min_{j=1,\ldots,N} d(P, W_j)$$
Compared with the prior art, the invention has the following beneficial effects: before the driver enters the deep-fatigue state, the method recognizes the driver's shallow-fatigue micro-expressions by means of modern techniques such as micro-expression analysis and image processing, gives early warning of the driver's fatigue state or of a tendency toward driving fatigue, and supports a driver fatigue monitoring and early-warning device developed on this basis. Early-warning recognition of driving fatigue is thereby realized, helping to avoid traffic accidents caused by slowed reactions, lengthened reaction times, and even momentary sleep with loss of control of the vehicle under deep fatigue.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is the implementation flow of the present invention;
FIG. 2 shows a micro-expression occurring in a normal expression sequence;
FIGS. 3(a)-3(d) are edge features of the Haar-like feature templates;
FIGS. 4(a)-4(d) are linear features of the Haar-like feature templates;
FIGS. 5(a)-5(d) are further linear features of the Haar-like feature templates;
FIGS. 6(a)-6(b) are center features of the Haar-like feature templates.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
Fig. 1 shows an implementation process of the present invention.
The method for detecting the fatigue driving state of the driver based on the micro expression comprises the following steps:
step (1): in order to obtain the micro expression of the driver, firstly, the facial expression image of the driver is collected. The driving state video of the driver is collected by the high-speed infrared camera arranged at the rearview mirror, and the facial image information of the driver is obtained from the driving state video.
Step (2): and converting the acquired image into a gray image, and performing histogram equalization on the image.
Step (21): image gray-scale transformation may be linear or nonlinear; the invention uses a linear transformation, defined as:

$$g(x, y) = \frac{N_2 - N_1}{M_2 - M_1} \left[ f(x, y) - M_1 \right] + N_1$$

Its function is to change the range of the gray values f(x, y) from $[M_1, M_2]$ to $[N_1, N_2]$.
The expression for the logarithmic transformation is:
g(x,y)=clog[f(x,y)+1]
where c is the transform coefficient.
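The two gray-scale transforms above can be sketched in a few lines of numpy. This is an illustrative sketch: the function names and example range values are not part of the patent.

```python
import numpy as np

def linear_transform(f, m1, m2, n1, n2):
    """Map gray values linearly from the range [m1, m2] to [n1, n2]."""
    f = f.astype(np.float64)
    return (n2 - n1) / (m2 - m1) * (f - m1) + n1

def log_transform(f, c=1.0):
    """Logarithmic transform g(x, y) = c * log(f(x, y) + 1)."""
    return c * np.log(f.astype(np.float64) + 1.0)
```

Stretching an 8-bit range to [0, 1], for example, is `linear_transform(img, 0, 255, 0, 1)`.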
Step (22): histogram equalization is implemented as follows. Let L be the number of image gray levels and let $r_k$, k = 0, 1, 2, ..., L-1, be the gray levels of the original image:
1. count the number of pixels $n_k$ at each gray level of the original image; the total number of pixels is N;
2. compute the frequency of each gray level in the image, $P_r(r_k) = n_k / N$;
3. compute the cumulative histogram of the original image, $s_k = \sum_{j=0}^{k} P_r(r_j)$;
4. compute the new quantization level $t_k$;
5. determine the mapping $s_k \to t_k$ between the histograms before and after the change;
6. count the number of pixels $n'_k$ at each gray level after mapping and compute the mapped gray distribution $P_t(t_k) = n'_k / N$.
Modifying the gray levels of the original image with the computed mapping yields an approximately uniform histogram.
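The equalization steps above can be sketched in numpy, assuming an 8-bit image; the function name is illustrative.

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Histogram equalization: map each gray level r_k to t_k via the
    cumulative histogram s_k, as in steps 1-6 above."""
    n_k = np.bincount(img.ravel(), minlength=levels)      # pixels per level
    p_r = n_k / img.size                                  # P_r(r_k) = n_k / N
    s_k = np.cumsum(p_r)                                  # cumulative histogram
    t_k = np.round(s_k * (levels - 1)).astype(img.dtype)  # new quantization level
    return t_k[img]                                       # apply mapping r_k -> t_k
```

The lookup `t_k[img]` applies the mapping to every pixel at once.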
Step (3): locate the face region and retain only it, reducing the computation of the subsequent micro-expression feature extraction. Locate the eye, mouth, and eyebrow regions; segment and extract the face-region image and normalize the size of the extracted image.
Locate the eye regions with the Adaboost-Haar algorithm and determine the eye centroids. Let O be the midpoint of the line joining the two eye centroids and d the distance between them; taking O as reference, cut out a rectangular region extending d to the left and right horizontally, 1.5d downward, and 0.55d upward.
Since the cut images differ in size, they must be normalized; the invention uses bilinear interpolation to resize each image to 128 × 128.
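Bilinear interpolation as used above can be sketched with numpy; a minimal sketch, not the patent's implementation, with illustrative function and parameter names.

```python
import numpy as np

def bilinear_resize(img, out_h=128, out_w=128):
    """Resize a 2-D gray image to out_h x out_w by bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)      # sample positions in the source image
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)         # clamp neighbors at the border
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                # fractional parts act as weights
    wx = (xs - x0)[None, :]
    img = img.astype(np.float64)
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Each output pixel is a weighted average of its four nearest source pixels.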
The human eyes are located using an Adaboost-Haar classifier; the Adaboost algorithm is as follows:
Let the input data set be $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $x_i$ is the sample data and $y_i$ the sample label. Initialize the sample weights $\omega_i$; with $m$ negative samples and $l$ positive samples, a negative sample ($y_i = 0$) receives $\omega_i = 1/m$ and a positive sample ($y_i = 1$) receives $\omega_i = 1/l$.
Set the number of learning cycles to T; for t = 1, 2, ..., T, perform the following steps:
Step (31): weight normalization:

$$\omega_{t,i} \leftarrow \frac{\omega_{t,i}}{\sum_{j=1}^{n} \omega_{t,j}}$$
Step (32): for each feature j, train a weak classifier $h_j$ and compute its weighted error rate over all samples:

$$\varepsilon_j = \sum_i \omega_i \left| h_j(x_i) - y_i \right|$$
Step (33): among the weak classifiers determined in step (32), select the weak classifier $h_t$ with the minimum error $\varepsilon_t$, and update the weight corresponding to each sample:

$$\omega_{t+1,i} = \omega_{t,i} \, \beta_t^{\,1-\eta_i}$$

where $\eta_i = 0$ if sample $x_i$ is correctly classified and $\eta_i = 1$ otherwise, and

$$\beta_t = \frac{\varepsilon_t}{1-\varepsilon_t}$$
The final strong classifier is formed as follows:

$$H(x) = \begin{cases} 1, & \displaystyle\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t \\[2mm] 0, & \text{otherwise} \end{cases}$$

where

$$\alpha_t = \log \frac{1}{\beta_t}$$
A feature set is constructed from the characteristics of the positive and negative samples. If a weak classifier classifies a sample correctly, the sample's weight is decreased; if it misclassifies the sample, the weight is increased. Training of the misclassified samples is thereby strengthened; finally all the weak classifiers form the strong classifier, and images are detected by the weighted vote of the weak classifiers.
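The boosting loop described above can be sketched with one-dimensional threshold stumps standing in for the Haar-feature weak classifiers. A toy illustration only: the real detector trains on Haar feature values, and the data, thresholds, and function names here are invented for the example.

```python
import numpy as np

def train_adaboost(x, y, thresholds, T=10):
    """Minimal AdaBoost with 1-D threshold stumps, following the patent's
    scheme: beta_t = eps_t/(1-eps_t), alpha_t = log(1/beta_t)."""
    m = np.sum(y == 0)
    l = np.sum(y == 1)
    w = np.where(y == 0, 1.0 / m, 1.0 / l)        # initial weights: 1/m and 1/l
    classifiers = []
    for _ in range(T):
        w = w / w.sum()                            # step (31): normalize weights
        best = None
        for thr in thresholds:                     # step (32): try each stump
            for polarity in (1, -1):
                h = ((polarity * x) < (polarity * thr)).astype(int)
                eps = np.sum(w * np.abs(h - y))    # weighted error rate
                if best is None or eps < best[0]:
                    best = (eps, thr, polarity, h)
        eps, thr, polarity, h = best               # step (33): keep the best stump
        eps = max(eps, 1e-10)                      # avoid division by zero
        beta = eps / (1 - eps)
        alpha = np.log(1 / beta)
        eta = (h != y).astype(float)               # 0 if correct, 1 if wrong
        w = w * beta ** (1 - eta)                  # down-weight correct samples
        classifiers.append((alpha, thr, polarity))
    return classifiers

def predict(classifiers, x):
    """Strong classifier: weighted vote >= half the total alpha."""
    score = sum(a * ((p * x) < (p * t)).astype(int) for a, t, p in classifiers)
    return (score >= 0.5 * sum(a for a, _, _ in classifiers)).astype(int)
```

On separable toy data the first round already finds a zero-error stump and the vote reproduces the labels.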
The Haar-like feature value is the sum of all pixel values inside the black rectangles minus the sum of all pixel values inside the white rectangles of the rectangular templates in figs. 3(a)-3(d), 4(a)-4(d), 5(a)-5(d), and 6(a)-6(b). Haar-like features extract the texture features of an image effectively; by translation and scaling, each template yields feature values at different positions and scales. The number of Haar-like features is enormous: for a given W × H image, the number of one upright rectangular feature is:

$$XY\left(W + 1 - w\,\frac{X+1}{2}\right)\left(H + 1 - h\,\frac{Y+1}{2}\right)$$
where w and h are the dimensions of the feature template, and X and Y are the maximum factors by which the template can be enlarged in the horizontal and vertical directions:

$$X = \left\lfloor \frac{W}{w} \right\rfloor, \qquad Y = \left\lfloor \frac{H}{h} \right\rfloor$$
For the 45° (tilted) features, with $z = w + h$, $X = \lfloor W/z \rfloor$, and $Y = \lfloor H/z \rfloor$, the number of features is:

$$XY\left(W + 1 - z\,\frac{X+1}{2}\right)\left(H + 1 - z\,\frac{Y+1}{2}\right)$$
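The upright-feature count can be cross-checked by brute-force enumeration of every integer scaling and placement of the template, under the usual assumption that width and height scale independently. A sketch with illustrative function names:

```python
def upright_haar_count(W, H, w, h):
    """Closed-form count XY(W+1 - w(X+1)/2)(H+1 - h(Y+1)/2),
    with X = W // w and Y = H // h."""
    X, Y = W // w, H // h
    return round(X * Y * (W + 1 - w * (X + 1) / 2) * (H + 1 - h * (Y + 1) / 2))

def brute_force_count(W, H, w, h):
    """Enumerate every integer scale (sx, sy) and every placement of the
    scaled w*sx by h*sy template inside the W x H window."""
    total = 0
    for sx in range(1, W // w + 1):
        for sy in range(1, H // h + 1):
            total += (W - w * sx + 1) * (H - h * sy + 1)
    return total
```

For the common 24 × 24 detection window, a 2 × 1 template yields 43,200 feature positions by either count.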
the step (4) comprises the following steps:
Take an arbitrary point (x, y) in the image and another point (x + a, y + b) offset from it; the two points form a point pair whose gray values are $(g_1, g_2)$. If the image has k gray levels, there are $k^2$ possible combinations of $(g_1, g_2)$.
Count the number of occurrences of each combination $(g_1, g_2)$ to form a matrix, then normalize by the total number of point pairs to obtain the occurrence probabilities $\rho(g_1, g_2)$; the resulting matrix is the gray-level co-occurrence matrix. Different values of the offset (a, b) yield joint probability matrices for different directions:
when a = 1, b = 0, the pixel pair is horizontal, i.e. a 0° scan; when a = 0, b = 1, the pixel pair is vertical, i.e. a 90° scan; when a = 1, b = 1, the pixel pair lies on the right diagonal, i.e. a 45° scan; when a = -1, b = 1, the pixel pair lies on the left diagonal, i.e. a 135° scan.
In this way the spatial gray-level distribution over (x, y) is converted into the co-occurrence probabilities $\rho(g_1, g_2)$, which form the gray-level co-occurrence matrix.
Normalizing the gray level co-occurrence matrix:
Figure GDA0002777810880000093
Texture feature extraction obtains statistical feature values from the gray-level co-occurrence matrix of the image:

Texture energy:

$$Q_1 = \sum_{g_1} \sum_{g_2} \rho(g_1, g_2)^2$$

Texture inertia:

$$Q_2 = \sum_{g_1} \sum_{g_2} (g_1 - g_2)^2 \, \rho(g_1, g_2)$$

Texture correlation:

$$Q_3 = \frac{\sum_{g_1} \sum_{g_2} g_1 g_2 \, \rho(g_1, g_2) - \mu_1 \mu_2}{\sigma_1 \sigma_2}$$

Texture entropy:

$$Q_4 = -\sum_{g_1} \sum_{g_2} \rho(g_1, g_2) \log \rho(g_1, g_2)$$

where

$$\mu_1 = \sum_{g_1} g_1 \sum_{g_2} \rho(g_1, g_2), \qquad \mu_2 = \sum_{g_2} g_2 \sum_{g_1} \rho(g_1, g_2)$$

$$\sigma_1^2 = \sum_{g_1} (g_1 - \mu_1)^2 \sum_{g_2} \rho(g_1, g_2), \qquad \sigma_2^2 = \sum_{g_2} (g_2 - \mu_2)^2 \sum_{g_1} \rho(g_1, g_2)$$
To make the image classification more accurate, gray-level co-occurrence matrices are established in the four directions, and $Q_1$, $Q_2$, $Q_3$, $Q_4$ are extracted from the matrix in each direction, so each texture yields a 16-dimensional feature vector.
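The co-occurrence statistics above can be sketched compactly in numpy, assuming the image is already quantized to k integer gray levels; the offsets follow the (a, b) convention of the text, and the function names are illustrative.

```python
import numpy as np

def glcm(img, a, b, k=16):
    """Gray-level co-occurrence matrix for offset (a, b),
    normalized to occurrence probabilities rho(g1, g2)."""
    h, w = img.shape
    P = np.zeros((k, k))
    for y in range(h):
        for x in range(w):
            y2, x2 = y + b, x + a
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def texture_features(p):
    """Energy Q1, inertia Q2, correlation Q3, entropy Q4 of a normalized GLCM."""
    k = p.shape[0]
    g1, g2 = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    q1 = np.sum(p ** 2)
    q2 = np.sum((g1 - g2) ** 2 * p)
    mu1 = np.sum(g1 * p)
    mu2 = np.sum(g2 * p)
    s1 = np.sqrt(np.sum((g1 - mu1) ** 2 * p))
    s2 = np.sqrt(np.sum((g2 - mu2) ** 2 * p))
    q3 = (np.sum(g1 * g2 * p) - mu1 * mu2) / (s1 * s2) if s1 * s2 > 0 else 0.0
    nz = p[p > 0]                      # skip zero entries: 0*log(0) is taken as 0
    q4 = -np.sum(nz * np.log(nz))
    return q1, q2, q3, q4
```

Running `glcm` with the four offsets (1, 0), (0, 1), (1, 1), (-1, 1) and concatenating the four feature tuples gives the 16-dimensional vector described above.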
The step (5) comprises the following steps:
Minimum-distance classification assigns a point to the class at the smallest distance, the distance from the point to be classified to each class being defined in advance;
The minimum-distance classifier is expressed as follows: let the data have M bands and let the N classes be represented by standard samples $W_1, W_2, \ldots, W_N$. According to the minimum-distance principle, the distance from a point P to be classified to class i is defined as:

$$d(P, W_i) = \sqrt{\sum_{k=1}^{M} (p_k - w_{ik})^2}$$

Let the class-i training sample set be $\{x_{jk}\}$ with $N_i$ samples; the standard sample is taken as the center of the class's training samples:

$$w_{ik} = \frac{1}{N_i} \sum_{j=1}^{N_i} x_{jk}$$

The classification criterion is:

$$P \in \text{class } i \iff d(P, W_i) = \min_{j=1,\ldots,N} d(P, W_j)$$
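The classifier above reduces to nearest-centroid classification; a minimal sketch, with illustrative feature vectors and function names.

```python
import numpy as np

def class_centers(samples, labels, n_classes):
    """Standard sample W_i = mean (center) of the class-i training vectors."""
    return np.array([samples[labels == i].mean(axis=0) for i in range(n_classes)])

def min_distance_classify(p, centers):
    """Assign p to the class whose standard sample is nearest in Euclidean distance."""
    d = np.sqrt(np.sum((centers - p) ** 2, axis=1))
    return int(np.argmin(d))
```

In the patent's setting the vectors would be the 16-dimensional fused texture features and the classes the normal-state and fatigue-state micro-expressions.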
when a driver is in a shallow driving fatigue state, the micro-expression of the driver has the characteristics of reduced eye opening and closing degree, eyelid droop, pupil dilation, two outer side eyebrow droop, mouth angle droop, slight inward contraction and the like, and the obtained face image also has corresponding texture characteristics; acquiring face images in a normal state and a fatigue state by designing a fatigue excitation experiment, identifying face textures in a corresponding micro expression, and establishing a corresponding face texture feature micro expression library; and calculating to obtain the vector distance between the current face texture feature and the micro-expression face texture feature in the micro-expression library through a minimum distance discrimination function so as to judge the micro-expression of the driver. And if the number of times that the micro-expression of the driver is detected as shallow fatigue within a set time range exceeds a set threshold value, early warning is carried out on the tendency of the driver to enter a deep fatigue state.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (1)

1. The method for detecting the fatigue driving state of the driver based on the micro expression is characterized by comprising the following steps of:
before the driver enters a deep fatigue state, warning of the driver's fatigue state, or of a tendency towards driving fatigue, by identifying the driver's micro-expression in the shallow fatigue state by means of micro-expression analysis and image processing;
step (1): firstly, acquiring facial expression images of a driver: acquiring a facial expression video of a driver in a driving process by using a high-speed infrared camera arranged on an automobile rearview mirror so as to obtain a facial expression image of the driver;
step (2): image preprocessing, namely converting the facial expression image into a gray image by adopting linear transformation, and carrying out histogram equalization on the gray image;
a linear transformation, defined as:

g(x, y) = (N − n)/(M − m) · [f(x, y) − m] + n

whose function is to change the range of the gray values f(x, y) from [m, M] to [n, N];
the expression for the logarithmic transformation is:
g(x,y)=clog[f(x,y)+1]
wherein c is a transform coefficient;
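A minimal sketch of the two gray-level transforms above, assuming an 8-bit image and illustrative parameter values:

```python
import numpy as np

def linear_stretch(f, m, M, n, N):
    """Map gray values linearly from [m, M] to [n, N]."""
    return (N - n) / (M - m) * (f.astype(np.float64) - m) + n

def log_transform(f, c=1.0):
    """g(x, y) = c * log(f(x, y) + 1); compresses the high gray values."""
    return c * np.log(f.astype(np.float64) + 1.0)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
print(linear_stretch(img, 50, 200, 0, 255))  # 50 maps to 0, 200 maps to 255
```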
the histogram equalization is implemented as follows:
let L be the number of image gray levels; take the gray levels of the original image r_k, k = 0, 1, 2, ..., L−1; count the number of pixels n_k at each gray level, with N the total number of pixels in the image; compute the frequency of each gray level P_r(r_k) = n_k/N; compute the cumulative histogram of the original image s_k = Σ_{j=0}^{k} P_r(r_j); compute the new quantization levels t_k; determine the mapping relation s_k → t_k between the histograms before and after the change; count the number of pixels n_k at each gray level after mapping; compute the gray distribution after mapping P_t(t_k) = n_k/N; modifying the gray levels of the original image with the computed mapping yields an approximately uniform histogram;
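The equalization steps above can be sketched compactly with NumPy (the test image is a hypothetical example):

```python
import numpy as np

def equalize_hist(img, L=256):
    """Histogram equalization: remap each gray level through the cumulative histogram."""
    n_k = np.bincount(img.ravel(), minlength=L)     # pixels per gray level
    p_r = n_k / img.size                            # frequency P_r(r_k) = n_k / N
    s_k = np.cumsum(p_r)                            # cumulative histogram s_k
    t_k = np.round((L - 1) * s_k).astype(np.uint8)  # new quantization levels t_k
    return t_k[img]                                 # apply the mapping s_k -> t_k

img = np.array([[0, 0, 1], [1, 2, 255]], dtype=np.uint8)
print(equalize_hist(img))
```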
and (3): positioning a face region, positioning an eye region, a mouth region and an eyebrow region of a person, segmenting and extracting an image of the face region, and performing size homogenization on the segmented and extracted image;
positioning the human eye region by using the Adaboost-Haar algorithm, determining the centroid position of each eye, recording the midpoint of the line connecting the two eye centroids as O and the distance between the two eye centroids as d; taking d in the horizontal direction, 1.5d vertically downwards and 0.55d upwards from O, and cutting out the resulting rectangular region;
positioning the human eye by using an Adaboost-Haar algorithm, wherein the Adaboost-Haar algorithm is as follows:
let the input dataset be D = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i)}, where x_i is the sample data and y_i is the sample label; initialize the sample weights ω_i, with m the number of negative samples and l the number of positive samples: when y_i = 0 the sample is negative and ω_i = 1/m; when y_i = 1 the sample is positive and ω_i = 1/l;
The number of learning cycles is set to T; for t = 1, 2, ..., T, the following learning steps are carried out:
step (31): weight normalization:
ω_{t,i} ← ω_{t,i} / Σ_{j=1}^{n} ω_{t,j}
step (32): for each feature j, training a weak classifier h_j and calculating its weighted error rate ε_j over all samples:

ε_j = Σ_i ω_i · |h_j(x_i) − y_i|
step (33): selecting, from the weak classifiers determined in step (32), the weak classifier h_t with the minimum error ε_t, and updating the weight corresponding to each sample:

ω_{t+1,i} = ω_{t,i} · β_t^(1−e_i)

where e_i = 0 if sample x_i is correctly classified, e_i = 1 otherwise, and

β_t = ε_t / (1 − ε_t)
the final strong classifier is formed as follows:

h(x) = 1, if Σ_{t=1}^{T} α_t · h_t(x) ≥ (1/2) · Σ_{t=1}^{T} α_t; h(x) = 0, otherwise

where α_t = log(1/β_t);
constructing a feature set according to the positive and negative samples, and if the weak classifier correctly classifies the samples, reducing the weight of the samples; if the weak classifier wrongly classifies the samples, the weight of the samples is increased; training the wrongly-divided samples by the classifier is strengthened, and finally, all weak classifiers form a strong classifier;
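The training loop of steps (31)–(33) can be sketched with simple threshold stumps standing in for the Haar-feature weak classifiers (a toy 1-D dataset is used; real Viola-Jones training operates on Haar features over image windows):

```python
import numpy as np

def train_adaboost(X, y, T=10):
    """Viola-Jones-style AdaBoost; y holds 0 (negative) / 1 (positive) labels."""
    n, d = X.shape
    m, l = np.sum(y == 0), np.sum(y == 1)
    w = np.where(y == 0, 1.0 / m, 1.0 / l)       # initial weights 1/m and 1/l
    classifiers = []
    for _ in range(T):
        w = w / w.sum()                           # step (31): normalize weights
        best = None
        for j in range(d):                        # step (32): weak classifier per feature
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    h = (pol * X[:, j] >= pol * thr).astype(int)
                    eps = np.sum(w * np.abs(h - y))
                    if best is None or eps < best[0]:
                        best = (eps, j, thr, pol, h)
        eps, j, thr, pol, h = best                # step (33): minimum-error stump h_t
        beta = max(eps, 1e-10) / (1 - eps)        # beta_t = eps_t / (1 - eps_t)
        e = (h != y).astype(int)                  # e_i = 0 if correctly classified
        w = w * beta ** (1 - e)                   # down-weight correct samples
        classifiers.append((np.log(1 / beta), j, thr, pol))
    return classifiers

def predict(classifiers, X):
    """Strong classifier: weighted vote thresholded at half the total alpha."""
    score = sum(a * (p * X[:, j] >= p * t) for a, j, t, p in classifiers)
    return (score >= 0.5 * sum(a for a, *_ in classifiers)).astype(int)

X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])
clf = train_adaboost(X, y, T=3)
print(predict(clf, X))  # should recover the labels
```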
and (4): extracting the texture features of the eye region, the mouth region and the eyebrow region of the driver; carrying out feature fusion on the texture features of the eye region, the texture features of the mouth region and the texture features of the eyebrow region to obtain face texture features; extracting texture features by using a texture feature extraction method based on a gray level co-occurrence matrix;
taking an arbitrary point (x, y) in the image and another point (x + a, y + b) offset from it, the two points form a point pair whose gray values are (g1, g2); if the maximum number of gray levels of the image is k, there are k² possible combinations of (g1, g2);
counting the number of occurrences of each combination (g1, g2) forms a matrix, which is normalized by the total number of occurrences into the occurrence probabilities ρ(g1, g2); the resulting new matrix is the gray level co-occurrence matrix;
the distance difference values (a, b) take different numerical value combinations to obtain joint probability matrixes under different conditions:
when a is 1 and b is 0, the pixel pair is horizontal, i.e. a 0 ° scan;
when a is 0 and b is 1, the pixel pair is vertical, i.e. 90 ° scan;
when a is 1 and b is 1, the pixel pair is right diagonal, i.e. 45 ° scan;
when a is-1, b is-1, the pixel pair is the left diagonal, i.e. 135 ° scan;
in this way the spatial gray distribution over (x, y) is converted, via the probabilities of the point pairs at the different offsets, into the values ρ(g1, g2) that make up the gray level co-occurrence matrix;
normalizing the gray level co-occurrence matrix:

ρ(g1, g2) = N(g1, g2) / Σ_{g1=0}^{k−1} Σ_{g2=0}^{k−1} N(g1, g2)

where N(g1, g2) is the number of occurrences of the gray-value pair (g1, g2);
extracting texture features, namely calculating a statistical characteristic value by utilizing a gray level co-occurrence matrix of an image:
texture energy:

Q1 = Σ_{g1} Σ_{g2} ρ(g1, g2)²

texture inertia:

Q2 = Σ_{g1} Σ_{g2} (g1 − g2)² · ρ(g1, g2)

texture correlation:

Q3 = [Σ_{g1} Σ_{g2} g1 · g2 · ρ(g1, g2) − μ1·μ2] / (σ1·σ2)

texture entropy:

Q4 = −Σ_{g1} Σ_{g2} ρ(g1, g2) · log ρ(g1, g2)

where

μ1 = Σ_{g1} g1 · Σ_{g2} ρ(g1, g2),  μ2 = Σ_{g2} g2 · Σ_{g1} ρ(g1, g2)

σ1² = Σ_{g1} (g1 − μ1)² · Σ_{g2} ρ(g1, g2),  σ2² = Σ_{g2} (g2 − μ2)² · Σ_{g1} ρ(g1, g2)
establishing gray level co-occurrence matrices in the four directions and extracting Q1, Q2, Q3 and Q4 from the co-occurrence matrix in each direction, so that each texture is described by a 16-dimensional feature vector;
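A sketch of the co-occurrence-matrix construction and the four statistics Q1–Q4 over the four scan directions (the tiny test image and gray-level count k = 4 are illustrative; the row/column interpretation of the offset (a, b) is an assumption of this sketch):

```python
import numpy as np

def glcm(img, a, b, k):
    """Normalized gray-level co-occurrence matrix for offset (a, b)."""
    P = np.zeros((k, k))
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            if 0 <= x + b < rows and 0 <= y + a < cols:  # a: column offset, b: row offset
                P[img[x, y], img[x + b, y + a]] += 1
    return P / P.sum()

def glcm_features(p):
    """Q1 energy, Q2 inertia, Q3 correlation, Q4 entropy of a normalized GLCM."""
    k = p.shape[0]
    g1, g2 = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    mu1, mu2 = (g1 * p).sum(), (g2 * p).sum()
    s1 = np.sqrt(((g1 - mu1) ** 2 * p).sum())
    s2 = np.sqrt(((g2 - mu2) ** 2 * p).sum())
    Q1 = (p ** 2).sum()
    Q2 = ((g1 - g2) ** 2 * p).sum()
    Q3 = ((g1 * g2 * p).sum() - mu1 * mu2) / (s1 * s2) if s1 * s2 > 0 else 0.0
    Q4 = -(p[p > 0] * np.log(p[p > 0])).sum()
    return Q1, Q2, Q3, Q4

img = np.array([[0, 0, 1], [1, 2, 2], [2, 2, 3]], dtype=int)
# Four scan directions (a, b): 0 deg, 90 deg, 45 deg, 135 deg -> 4 x 4 = 16 features.
feats = [q for (a, b) in [(1, 0), (0, 1), (1, 1), (-1, -1)]
         for q in glcm_features(glcm(img, a, b, 4))]
print(len(feats))  # 16
```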
and (5): classifying the current facial texture features of the driver by a minimum distance classification method against the facial texture features of the fatigue-state micro-expressions in the facial-texture micro-expression library, thereby identifying the micro-expression of the driver and judging whether the driver has entered a shallow fatigue state; if the driver has entered the shallow fatigue state, further judging whether the number of times the driver's micro-expression is detected as the shallow fatigue state within a set time range exceeds a set threshold; if it exceeds the set threshold, the driver has a tendency to enter a deep fatigue state, and an early warning of this tendency is given;
acquiring face images in a normal state and a fatigue state by designing a fatigue excitation experiment, identifying face textures in a corresponding micro expression, and establishing a corresponding face texture feature micro expression library; calculating and obtaining a vector distance between the current face texture feature and the micro-expression face texture feature in the micro-expression library through a minimum distance discrimination function so as to judge the micro-expression of the driver;
if the distance between the current facial texture feature and every fatigue-state micro-expression texture feature in the facial-texture micro-expression library is larger than a set threshold, the driver has not entered a shallow fatigue state; otherwise, the driver has entered a shallow fatigue state and an early-warning prompt is given;
the step (5) further comprises establishing the facial-texture-feature micro-expression library, namely: acquiring face images in the normal and fatigue states through a fatigue experiment, segmenting the face-region images, identifying the eye, eyebrow and mouth regions, extracting the texture features of the eye, eyebrow and mouth regions of the face images respectively, fusing these texture features to obtain the facial texture features, recording the facial texture features of the micro-expressions in the normal and fatigue states, and constructing the facial-texture-feature micro-expression library;
when a driver is in a shallow driving-fatigue state, his or her micro-expression is characterized by a reduced degree of eye opening and closing, drooping eyelids, dilated pupils, drooping outer ends of both eyebrows, and drooping, slightly inward-contracted mouth corners, and the obtained face image carries the corresponding texture features; face images in the normal and fatigue states are acquired through a designed fatigue-excitation experiment, the facial textures of the corresponding micro-expressions are identified, and the corresponding facial-texture-feature micro-expression library is established; the vector distance between the current facial texture feature and each micro-expression facial texture feature in the library is calculated with the minimum distance discriminant function to judge the micro-expression of the driver; and if the number of times the driver's micro-expression is detected as shallow fatigue within the set time range exceeds the set threshold, an early warning is given that the driver tends to enter a deep fatigue state.
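The temporal decision rule of the claim — counting shallow-fatigue detections within a set time range and warning once a set threshold is exceeded — can be sketched with a sliding window (the window length and threshold here are illustrative values, not specified by the patent):

```python
from collections import deque

class FatigueMonitor:
    """Warn when shallow-fatigue detections within window_s seconds exceed threshold."""
    def __init__(self, window_s=60.0, threshold=5):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()              # timestamps of shallow-fatigue detections

    def update(self, t, shallow_fatigue):
        if shallow_fatigue:
            self.events.append(t)
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()          # drop detections outside the window
        return len(self.events) > self.threshold  # True -> deep-fatigue warning

mon = FatigueMonitor(window_s=60.0, threshold=5)
alarms = [mon.update(t, shallow_fatigue=True) for t in range(10)]
print(alarms[-1])  # True: more than 5 detections within 60 s
```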
CN201810022165.7A 2018-01-10 2018-01-10 Method for detecting fatigue driving state of driver based on micro-expression Active CN108053615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810022165.7A CN108053615B (en) 2018-01-10 2018-01-10 Method for detecting fatigue driving state of driver based on micro-expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810022165.7A CN108053615B (en) 2018-01-10 2018-01-10 Method for detecting fatigue driving state of driver based on micro-expression

Publications (2)

Publication Number Publication Date
CN108053615A CN108053615A (en) 2018-05-18
CN108053615B true CN108053615B (en) 2020-12-25

Family

ID=62126829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810022165.7A Active CN108053615B (en) 2018-01-10 2018-01-10 Method for detecting fatigue driving state of driver based on micro-expression

Country Status (1)

Country Link
CN (1) CN108053615B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664947A (en) * 2018-05-21 2018-10-16 五邑大学 A kind of fatigue driving method for early warning based on Expression Recognition
WO2020019286A1 (en) * 2018-07-27 2020-01-30 高雄医学大学 Blepharoptosis detection method and system
CN109121078A (en) * 2018-08-27 2019-01-01 惠州Tcl移动通信有限公司 A kind of base station connection method, mobile terminal and the storage medium of mobile terminal
CN109614892A (en) * 2018-11-26 2019-04-12 青岛小鸟看看科技有限公司 A kind of method for detecting fatigue driving, device and electronic equipment
CN109559481A (en) * 2018-12-13 2019-04-02 平安科技(深圳)有限公司 Drive risk intelligent identification Method, device, computer equipment and storage medium
CN109784175A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Abnormal behaviour people recognition methods, equipment and storage medium based on micro- Expression Recognition
CN109815817A (en) * 2018-12-24 2019-05-28 北京新能源汽车股份有限公司 A kind of the Emotion identification method and music method for pushing of driver
CN109993093B (en) * 2019-03-25 2022-10-25 山东大学 Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics
CN109903565A (en) * 2019-04-11 2019-06-18 深圳成有科技有限公司 A kind of the fatigue driving determination method and system of bus or train route collaboration
CN110781828A (en) * 2019-10-28 2020-02-11 北方工业大学 Fatigue state detection method based on micro-expression
CN110796838B (en) * 2019-12-03 2023-06-09 吉林大学 Automatic positioning and recognition system for facial expression of driver
CN111968338A (en) * 2020-07-23 2020-11-20 南京邮电大学 Driving behavior analysis, recognition and warning system based on deep learning and recognition method thereof
CN112699802A (en) * 2020-12-31 2021-04-23 青岛海山慧谷科技有限公司 Driver micro-expression detection device and method
CN112818754A (en) * 2021-01-11 2021-05-18 广州番禺职业技术学院 Learning concentration degree judgment method and device based on micro-expressions
CN115439836B (en) * 2022-11-09 2023-02-07 成都工业职业技术学院 Healthy driving assistance method and system based on computer
CN116805405B (en) * 2023-08-25 2023-10-27 南通大学 Intelligent protection method and system for milling machine equipment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4497305B2 (en) * 2004-12-08 2010-07-07 株式会社デンソー Driver status determination device
CN101593425B (en) * 2009-05-06 2011-01-12 深圳市汉华安道科技有限责任公司 Machine vision based fatigue driving monitoring method and system
CN102920467B (en) * 2011-08-08 2015-04-01 长天科技股份有限公司 Fatigue detecting method and device
CN102431452A (en) * 2011-12-07 2012-05-02 刘晓运 Sensor based control method for driving safety
CN102542257B (en) * 2011-12-20 2013-09-11 东南大学 Driver fatigue level detection method based on video sensor
CN104434066A (en) * 2014-12-05 2015-03-25 上海电机学院 Physiologic signal monitoring system and method of driver
CN106250801A (en) * 2015-11-20 2016-12-21 北汽银翔汽车有限公司 Based on Face datection and the fatigue detection method of human eye state identification
CN105956548A (en) * 2016-04-29 2016-09-21 奇瑞汽车股份有限公司 Driver fatigue state detection method and device
CN106407922A (en) * 2016-09-08 2017-02-15 哈尔滨工程大学 Online dictionary learning deformation model-based fatigue state recognition method
CN106408877A (en) * 2016-11-17 2017-02-15 西南交通大学 Rail traffic driver fatigue state monitoring method
CN107491769A (en) * 2017-09-11 2017-12-19 中国地质大学(武汉) Method for detecting fatigue driving and system based on AdaBoost algorithms

Also Published As

Publication number Publication date
CN108053615A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN108053615B (en) Method for detecting fatigue driving state of driver based on micro-expression
Mbouna et al. Visual analysis of eye state and head pose for driver alertness monitoring
Ji et al. Fatigue state detection based on multi-index fusion and state recognition network
Yan et al. Real-time driver drowsiness detection system based on PERCLOS and grayscale image processing
US9064145B2 (en) Identity recognition based on multiple feature fusion for an eye image
CN100592322C (en) An automatic computer authentication method for photographic faces and living faces
CN112241658B (en) Fatigue driving early warning method based on depth camera
CN104778453B (en) A kind of night pedestrian detection method based on infrared pedestrian's brightness statistics feature
Junaedi et al. Driver drowsiness detection based on face feature and PERCLOS
CN110811649A (en) Fatigue driving detection method based on bioelectricity and behavior characteristic fusion
Jie et al. Analysis of yawning behaviour in spontaneous expressions of drowsy drivers
Yin et al. Multiscale dynamic features based driver fatigue detection
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN111753674A (en) Fatigue driving detection and identification method based on deep learning
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN106529504A (en) Dual-mode video emotion recognition method with composite spatial-temporal characteristic
CN111460950A (en) Cognitive distraction method based on head-eye evidence fusion in natural driving conversation behavior
CN110598574A (en) Intelligent face monitoring and identifying method and system
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
CN106203338A (en) Based on net region segmentation and the human eye state method for quickly identifying of threshold adaptive
Rajevenceltha et al. A novel approach for drowsiness detection using local binary patterns and histogram of gradients
CN107977622B (en) Eye state detection method based on pupil characteristics
Panicker et al. Open-eye detection using iris–sclera pattern analysis for driver drowsiness detection
Gao et al. Fatigue state detection from multi-feature of eyes
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant