CN113469022A - Driver emotion change capturing device and method - Google Patents
- Publication number: CN113469022A (application CN202110730246.4A)
- Authority: CN (China)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/24—Classification techniques; G06F18/241—Classification techniques relating to the classification model; G06F18/2411—based on the proximity to a decision surface, e.g. support vector machines
- G06F18/25—Fusion techniques; G06F18/253—Fusion techniques of extracted features
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing; G06F2218/08—Feature extraction
Abstract
The invention provides a driver emotion change capturing device and method, belonging to the technical field of automobile driving safety. When the time-domain features of the electromyographic (EMG) signals change, the expression/driving-behavior emotion recognition submodule judges the driver's emotion change with a support vector machine classifier; otherwise, the expression emotion recognition submodule performs emotion recognition with a facial-expression criterion model based on the geometric features of the driver's face, using an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network. The invention markedly improves the efficiency and accuracy of driver facial expression recognition.
Description
Technical Field
The invention belongs to the technical field of safe driving of automobiles, and particularly relates to a device and a method for capturing emotion change of a driver.
Background
Accident investigation shows that about 90% of traffic accidents are caused by human factors (illegal behavior, attitude, emotion, and the like), and up to 40% of accidents are attributable to emotional changes of drivers. If driver emotion changes can be captured accurately and efficiently before an accident occurs, and an early warning issued in time, traffic accidents caused by dangerous emotions can be markedly reduced.
According to psychological research, a driver may experience six emotions while driving: happiness, sadness, surprise, anger, fear, and disgust. (1) Happiness (or satisfaction): the pleased, contented state a person feels when encountering a happy event. (2) Sadness: grief and sorrow; a common emotion that is distressing and hard to put into words. (3) Surprise: a relatively transient, unexpected emotion that generally lasts a very short time; other traffic participants or the driver's own physiological and psychological state may trigger it. (4) Anger: an emotion that readily arises during driving; a tense, unpleasant state produced when a desire is not fulfilled or an intended action is frustrated. (5) Fear: a strong psychological or physiological reaction to unpredictable surrounding factors; an emotional experience of wanting to escape yet feeling powerless. (6) Disgust: a dislike emotion that can be elicited not only through vision, hearing, taste, smell, and touch, but also by the appearance, behavior, and even thoughts of other people; it is shaped by the driver's physiological, psychological, and social-environmental factors. Among these, the dangerous emotion of a driver (the emotion with the greatest influence on traffic safety) is mainly anger.
When a driver encounters a driving scene such as congestion, severe weather, or unusual lighting, or is influenced by surrounding drivers and the driving environment, "road rage" may be triggered: the driver presses the accelerator, brake, or clutch harder, shifts gears or turns the steering wheel over larger ranges and more frequently, subconsciously applies excessive force or shakes unsteadily while operating the vehicle, and produces more pronounced lane departure, tending toward dangerous driving behavior.
At present, research on driver emotion change focuses mainly on driver emotion change recognition, driver facial expression recognition algorithms, and the representation of driver facial expression change. The existing shortcomings are: (1) traditional driver emotion recognition methods generally combine facial expressions with comprehensive judgments from voice tone, pulse, body temperature, brain waves, and other signals; although accuracy is high, many influencing factors are involved and some of the data are difficult to acquire; (2) most driver facial expression recognition algorithms rely on a single method, such as a convolutional neural network or a support vector machine alone, and suffer from low precision, poor practicability, and difficulty capturing long-term temporal correlations; (3) driver facial expression change is usually represented by facial images or limb movements; although these can express the change, facial images carry excessive redundant information, while limb movements suffer from individual differences and insufficiently intuitive expression.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a device and a method for capturing emotion change of a driver, which can comprehensively judge the emotion change condition of the driver by collecting facial expressions and driving behaviors of the driver, can quickly recognize dangerous emotion and carry out early warning through an alarm, and improve emotion recognition efficiency and accuracy.
The present invention achieves the above-described object by the following technical means.
A method for capturing emotion change of a driver comprises the following steps:
S1: acquire the driver's facial image and electromyographic (EMG) signals;
S2: preprocess the driver's facial image and EMG signals;
S3: extract features from the driver's facial image and EMG signals;
S4: recognize the driver's emotion change.
When the time-domain features of the EMG signals change, the expression/driving-behavior emotion recognition submodule judges the driver's emotion change with a support vector machine classifier; otherwise, the expression emotion recognition submodule performs emotion recognition with a facial-expression criterion model based on the geometric features of the driver's face, using an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network.
Further, the expression-driving behavior emotion recognition submodule judges the emotion change condition of the driver through a classifier of a support vector machine, and specifically comprises:
according to the facial expression geometric characteristics H of the driver and the time domain characteristic matrix Q of the electromyographic signals of the driving behaviors, the characteristic fusion sub-module constructs an optimal characteristic matrix R of bimodal data based on a fusion algorithm of a Fisher criterion;
the Fisher discriminant function is:

J_F(δ) = (δ^T S_b δ) / (δ^T S_ω δ)   (2)

where δ is any n-dimensional vector; the vector δ* that maximizes the function J_F(δ) is called the best discriminant vector, having the smallest within-class scatter and the largest between-class scatter; S_b and S_ω are the between-class scatter matrix and the within-class scatter matrix, respectively;

according to formula (2), when J_F(δ) reaches its maximum, its denominator is fixed to a non-zero constant and its numerator is maximized; a Lagrange multiplier method is introduced to solve for the best discriminant vectors δ*, ψ* corresponding to the driver facial geometric feature matrix H and the EMG time-domain feature matrix Q, respectively. Let δ_1 = δ* and ψ_1 = ψ*, so as to obtain the Foley–Sammon discriminant vector sets of H and Q, {δ_i, 1 < i ≤ n1} and {ψ_i, 1 < i ≤ n2}. Let r_i = max{H^T δ_i, Q^T ψ_i} and R = (r_1, r_2, …, r_n)^T, i.e., the optimal feature matrix fused from H and Q, where n = max(n1, n2).
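The Fisher-criterion fusion above can be sketched in Python under stated assumptions: class-labeled training samples are available for both modalities, and only the single leading discriminant vector per modality is computed (function names such as `fuse_features` are illustrative, not from the patent).

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter of samples X (rows) with labels y."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def best_discriminant(X, y, eps=1e-6):
    """Vector maximizing the Fisher criterion J(d) = (d' Sb d) / (d' Sw d)."""
    Sb, Sw = scatter_matrices(X, y)
    # Regularize Sw so it is invertible, then take the leading eigenvector of
    # Sw^-1 Sb -- equivalent to the Lagrange-multiplier solution in the text.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(len(Sw)), Sb))
    return np.real(evecs[:, np.argmax(np.real(evals))])

def fuse_features(H, Q, y):
    """Fuse two feature sets by projecting each on its best discriminant
    vector and taking the elementwise maximum, as in r_i = max{H d, Q p}."""
    delta = best_discriminant(H, y)
    psi = best_discriminant(Q, y)
    return np.maximum(H @ delta, Q @ psi)
```

The elementwise maximum mirrors the patent's r_i = max{H^T δ_i, Q^T ψ_i}; extending to the full Foley–Sammon vector set would repeat `best_discriminant` under orthogonality constraints.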
Furthermore, the expression emotion recognition submodule performs emotion recognition with a facial-expression criterion model based on the geometric features of the driver's face and an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network, specifically: a driver facial expression criterion model is constructed from the state changes of the driver's facial feature regions, and emotion change is preliminarily judged from the driver's facial expression; an expression recognition method fusing a temporal attention recursive network with a convolutional neural network is then designed for emotion recognition:
feature extraction yields the driver facial expression geometric feature matrix H = (h_1, h_2, …, h_n1)^T; the convolutional neural network performs downsampling to select the optimal feature H_max = max(h_1, h_2, …, h_n1);

a feature representation is generated by assigning different weights to the eyebrow, eye, and mouth coordinates with a temporal attention mechanism, the weights α_i representing importance information;

the optimal driver facial expression feature matrix is multiplied by the corresponding weights and fed into the recursive network to complete emotion classification; the output is y = softmax(M·x + b), where M denotes the fully connected layer weight matrix, x the attention-weighted features, and b a bias term.
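The attention-weighted classification step can be sketched numerically as below; the three region features, six emotion classes, softmax output layer, and all dimensions are illustrative assumptions, not values from the patent.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_expression(features, alpha, M, b):
    """Weight the region features (eyebrow / eye / mouth) by attention scores
    alpha, then apply a fully connected layer M with bias b and a softmax,
    mirroring y = softmax(M (alpha * h) + b)."""
    weighted = alpha * features          # elementwise importance weighting
    return softmax(M @ weighted + b)

# Hypothetical setup: 3 region features, 6 emotion classes.
rng = np.random.RandomState(1)
features = rng.rand(3)
alpha = np.array([0.2, 0.3, 0.5])        # attention weights summing to 1
M = rng.randn(6, 3)
b = np.zeros(6)
probs = classify_expression(features, alpha, M, b)
```

In the patent's pipeline the weighted features would pass through the recursive network before the fully connected layer; the sketch keeps only the weighting and output stages.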
Furthermore, the driver facial expression criterion model is constructed from the states of three facial components: eyebrows, eyes, and mouth. A driver face coordinate system is established with the center of the driver's nose as the origin, the line through the nose wings as the x-axis, and the line along the nose bridge as the y-axis, in which: 5 feature points are taken for each eyebrow (left eyebrow labeled 1–5, right eyebrow labeled 6–10); 6 feature points for each eye (left eye labeled 11–16, right eye labeled 17–22); 7 feature points for the nose (labeled 23–29); and 20 feature points for the mouth (labeled 30–49);
the driver facial expression criterion model comprises:
1) the criterion model of facial expression change when the driver is happy:

g_ha = α_ha·u_ha + β_ha·v_ha + γ_ha·w_ha < 0

where u_ha, v_ha, w_ha denote the state expressions of the eyebrows, eyes, and mouth when happy, and α_ha, β_ha, γ_ha denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of happiness;

2) the criterion model of facial expression change when the driver is sad:

g_sa = α_sa·u_sa + β_sa·v_sa + γ_sa·w_sa

where u_sa, v_sa, w_sa denote the state expressions of the eyebrows, eyes, and mouth when sad, and α_sa, β_sa, γ_sa denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of sadness;

3) the criterion model of facial expression change when the driver is surprised:

g_su = α_su·u_su + β_su·v_su + γ_su·w_su > 0

where u_su, v_su, w_su denote the state expressions of the eyebrows, eyes, and mouth when surprised, and α_su, β_su, γ_su denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of surprise;

4) the criterion model of facial expression change when the driver is angry:

g_an = α_an·u_an + β_an·v_an + γ_an·w_an > 0

where u_an, v_an, w_an denote the state expressions of the eyebrows, eyes, and mouth when angry, and α_an, β_an, γ_an denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of anger;

5) the criterion model of facial expression change when the driver is afraid:

g_fe = α_fe·u_fe + β_fe·v_fe + γ_fe·w_fe > 0

where u_fe, v_fe, w_fe denote the state expressions of the eyebrows, eyes, and mouth when afraid, and α_fe, β_fe, γ_fe denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of fear;

6) the criterion model of facial expression change when the driver is disgusted:

g_di = α_di·u_di + β_di·v_di + γ_di·w_di < 0

where u_di, v_di, w_di denote the state expressions of the eyebrows, eyes, and mouth when disgusted, and α_di, β_di, γ_di denote the weight coefficients of the driver's eyebrow, eye, and mouth states in the expression of disgust.
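All six criterion models share the linear form g = α·u + β·v + γ·w with a sign test. A minimal sketch follows, using the happy and surprise weights stated later in the description (0.2/0.3/0.5 and 0.2/0.4/0.4); the state expressions u, v, w are assumed here to be precomputed scalars, which the patent derives from feature-point coordinate changes.

```python
def criterion(u, v, w, alpha, beta, gamma):
    """Linear criterion g = alpha*u + beta*v + gamma*w shared by all six emotions."""
    return alpha * u + beta * v + gamma * w

def is_happy(u, v, w):
    # Happy criterion: g_ha < 0, with weights 0.2 / 0.3 / 0.5 from the description.
    return criterion(u, v, w, 0.2, 0.3, 0.5) < 0

def is_surprised(u, v, w):
    # Surprise criterion: g_su > 0, with weights 0.2 / 0.4 / 0.4 from the description.
    return criterion(u, v, w, 0.2, 0.4, 0.4) > 0
```

The remaining emotions differ only in their weight triples and the direction of the inequality, so they would be one-line variants of the functions above.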
Further, the extraction of driver facial image and EMG signal features specifically comprises:

the first feature extraction submodule selects facial feature points with the active appearance model to extract geometric features from the preprocessed driver facial images, constructing the facial expression geometric feature matrix H = (h_1, h_2, …, h_n1)^T, where n1 denotes that n1 facial images were collected in total;

the second feature extraction submodule selects the mean absolute value a, root mean square value o, integrated absolute value d, and variance l to extract time-domain features from the preprocessed EMG signals, constructing the EMG time-domain feature matrix Q = (q_1, q_2, …, q_n2)^T, where n2 denotes that n2 groups of EMG signals were collected in total.
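The four named time-domain features can be sketched as below; the function name, window segmentation, and sampling rate `fs` are assumptions made for illustration.

```python
import numpy as np

def emg_time_domain_features(x, fs=1000.0):
    """The four time-domain features named in the text for one EMG window x:
    mean absolute value a, root mean square o, integrated absolute value d,
    and variance l. The (assumed) sampling rate fs only scales the integral."""
    x = np.asarray(x, dtype=float)
    a = np.mean(np.abs(x))            # mean absolute value (MAV)
    o = np.sqrt(np.mean(x ** 2))      # root mean square (RMS)
    d = np.sum(np.abs(x)) / fs        # integrated absolute value (iEMG)
    l = np.var(x)                     # variance
    return np.array([a, o, d, l])

# Feature matrix Q: one row q_i per EMG window (two synthetic windows here).
windows = [np.sin(np.linspace(0, 10, 1000)), np.cos(np.linspace(0, 10, 1000))]
Q = np.vstack([emg_time_domain_features(w) for w in windows])
```

Each row of `Q` then corresponds to one q_i in the patent's feature matrix Q = (q_1, …, q_n2)^T.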
Further, the method also comprises: when the driver's emotion changes to anger, i.e., when S(H) or K(R) outputs an "anger" result, indicating that a dangerous emotion has occurred, the system immediately reminds the driver to adjust through the alarm's voice prompt.
Further, the preprocessing of the driver's facial image and EMG signals comprises:

performing face detection on the driver's facial image to judge whether the collected image contains the driver's face; performing an alignment operation to locate the face position accurately; improving the visual effect of the image with interference-suppression, edge-sharpening, and pseudo-color image enhancement methods and eliminating noise interference; and unifying the gray level and size of the image to 40 × 40;

applying DC-blocking processing to the driver's EMG signals, amplifying them with the high-gain amplification submodule, removing high-frequency interference with the low-pass filtering submodule, and raising the signal-to-noise ratio of the output signal above 50 dB with the power-frequency notch submodule.
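A hedged software sketch of the claimed EMG signal chain using standard SciPy filters; the sampling rate, filter orders, and 50 Hz mains frequency are assumptions, and digital filtering here stands in for the hardware submodules (the 10–500 Hz band comes from the description's statement of the useful EMG range).

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_emg(x, fs=2000.0, band=(10.0, 500.0), mains=50.0):
    """Band-limit an EMG trace to the useful 10-500 Hz range (this also
    blocks the DC / low-frequency drift), then notch out the assumed
    50 Hz power-line interference."""
    b, a = butter(4, band, btype="band", fs=fs)
    x = filtfilt(b, a, x)                      # band-pass: drift + HF noise removal
    bn, an = iirnotch(mains, 30.0, fs=fs)      # power-frequency notch, Q = 30
    return filtfilt(bn, an, x)
```

Zero-phase `filtfilt` is used so the filtered trace stays time-aligned with the raw signal, which matters when EMG features are later paired with video frames.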
A driver emotion change capturing device, comprising:
the driver state acquisition and processing system comprises a facial expression acquisition and processing module and a driving behavior acquisition and processing module; the facial expression acquisition and processing module comprises a vehicle-mounted camera, a face detection submodule, a face alignment submodule, an image enhancement submodule, and a face normalization submodule; the driving behavior acquisition and processing module comprises an electronic tattoo sensor, a DC-blocking processing submodule, a high-gain amplification submodule, a low-pass filtering submodule, and a power-frequency notch submodule;
the emotion change recognition and early warning system comprises a feature extraction module, an emotion change recognition module and an early warning module; the feature extraction module comprises a first feature extraction submodule and a second feature extraction submodule; the driving emotion change recognition module comprises an expression emotion recognition sub-module, an expression-driving behavior emotion recognition sub-module and a feature fusion sub-module; the early warning module comprises an embedded controller and an alarm.
The invention has the beneficial effects that:
(1) when the time-domain features of the EMG signals change, the driver's emotion change is judged by a support vector machine classifier that combines facial expression geometric features with EMG time-domain features; compared with using facial expressions alone, accuracy is markedly improved, and compared with combining voice tone, pulse, body temperature, brain waves, and other information, the difficulty and complexity of data acquisition are reduced, as is the data volume;

(2) a driver facial expression change criterion model is designed; when the time-domain features of the EMG signals do not change, the facial-expression criterion model based on the geometric features of the driver's face, together with the fused temporal attention recursive network and convolutional neural network algorithm, identifies the driver's emotion change, improving the efficiency and accuracy of driver facial expression recognition; the representation is simpler and clearer than the facial image method, and clearer and more universal than the limb movement method.
Drawings
FIG. 1 is a schematic view of a driver emotion change capturing device according to the present invention;
FIG. 2 is a schematic view of the calibration of facial feature points of a driver according to the present invention;
FIG. 3 is a flowchart of a method for capturing emotional changes of a driver according to the present invention.
Detailed Description
The invention will be further described with reference to the following figures and specific examples, but the scope of the invention is not limited thereto.
As shown in fig. 1, the emotion change capturing device of the driver of the present invention includes a driver state collecting and processing system and an emotion change recognizing and early warning system.
The driver state acquisition and processing system comprises a facial expression acquisition and processing module and a driving behavior acquisition and processing module.
The facial expression acquisition processing module comprises a vehicle-mounted camera, a face detection submodule, a face alignment submodule, an image enhancement submodule and a face normalization submodule; the vehicle-mounted camera is arranged in a vehicle cab and faces the face of a driver, and the vehicle-mounted camera acquires a facial image of the driver; the face detection submodule is used for judging whether the face is contained in the face image of the driver collected by the vehicle-mounted camera; the face alignment submodule is used for carrying out positioning operation on the detected face area and accurately finding out the face position; the image enhancement submodule is used for improving the visual effect of the image aiming at the application occasion of the given image; the face normalization submodule is used for adjusting the image, so that the face data is not interfered by problems of posture change, light, shielding and the like.
The driving behavior acquisition and processing module comprises an electronic tattoo sensor, a DC-blocking processing submodule, a high-gain amplification submodule, a low-pass filtering submodule, and a power-frequency notch submodule; the electronic tattoo sensor is adhered to the driver's arms and ankles and collects the EMG signals they generate; the DC-blocking submodule removes low-frequency drift from the EMG signals by high-pass filtering; the high-gain amplification submodule amplifies the EMG signals, which are extremely weak, the truly useful components lying approximately between 10 Hz and 500 Hz; the low-pass filtering submodule removes high-frequency interference signals; and the power-frequency notch submodule improves the signal-to-noise ratio of the output signal.
The emotion change recognition and early warning system comprises a feature extraction module, an emotion change recognition module and an early warning module.
The feature extraction module comprises a first feature extraction submodule and a second feature extraction submodule, wherein the first feature extraction submodule is used for performing dimensionality reduction processing on huge data and extracting facial expression geometric features (namely geometric features of eyebrows, eyes and mouths of the faces) of a driver, the second feature extraction submodule is used for performing dimensionality reduction processing on the huge data and extracting time domain features of myoelectric signals of arms and ankles of the driver (namely performing statistical analysis on the myoelectric signals in a time domain), the myoelectric signals are bioelectricity signals generated when muscles contract and are directly reflected by driving operation behaviors, and muscle state information contained in the myoelectric signals can reflect the current driving emotion of the driver.
The driving emotion change recognition module comprises an expression emotion recognition sub-module, an expression-driving behavior emotion recognition sub-module and a feature fusion sub-module; the expression emotion recognition sub-module is used for recognizing the emotion change condition of the driver by designing and fusing a time sequence Attention recursive Network and a convolutional Neural Network (TARN-CNN) algorithm through a facial expression criterion model based on the geometric characteristics of the face of the driver when the time domain characteristics of the electromyographic signals of the driver are not changed; the expression-driving behavior emotion recognition sub-module is used for selecting a classifier of a support vector machine to recognize the emotion of the driver by fusing facial expression geometric features and the time domain features of the electromyographic signals when the time domain features of the electromyographic signals of the driver change; and the feature fusion submodule fuses facial expression geometric features and electromyographic signal time domain features based on a Fisher criterion to obtain an optimal feature matrix of bimodal data.
The early warning module comprises an embedded controller and an alarm, the embedded controller is used for processing and judging the recognized emotion, and the alarm is used for giving an alarm to the driver when the emotion is judged to be dangerous emotion.
The driver facial expression change criterion model specifically comprises the following steps:
the first feature extraction sub-module performs facial feature point calibration on the driver, in this embodiment, the condition that the right face part identifies the facial expression change of the driver is taken as an example, and 68 facial feature points are generated according to an Active Appearance Model (AAM), and in this embodiment, in the process of identifying the facial expression change of the driver, 49 points (as shown in fig. 2) which have important roles in facial expressions, such as eyebrow, eyes, nose and mouth, are mainly concerned; wherein, 5 characteristic points are taken for each eyebrow (the characteristic points of the left eyebrow are respectively marked by 1, 2, 3, 4 and 5, the characteristic points of the right eyebrow are respectively marked by 6, 7, 8, 9 and 10), 6 characteristic points are taken for each eye (the characteristic points of the left eye are respectively marked by 11, 12, 13, 14, 15 and 16, the characteristic points of the right eye are respectively marked by 17, 18, 19, 20, 21 and 22), 7 characteristic points are taken for the nose (marked by 23, 24, 25, 26, 27, 28 and 29), and 20 characteristic points are taken for the mouth (marked by 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48 and 49); after 49 feature points are obtained, according to the change of the face feature area state of the driver, a face coordinate system of the driver is established, wherein the face coordinate system takes the center of a nose of the driver as an origin, a connecting line of a nose wing as an x axis and a connecting line of a nose bridge as a y axis, and a facial expression change criterion model of the driver based on the states of three face components of eyebrows, eyes and mouth is established by combining a FACS (facial Action Coding System) Coding system, so that the expression change of the driver is represented.
Let E = {ha, sa, su, an, fe, di} denote the driver's facial expressions: happiness (ha), sadness (sa), surprise (su), anger (an), fear (fe), and disgust (di). For each expression e ∈ E, u_e, v_e, w_e denote the state expressions of the eyebrows, eyes, and mouth, respectively. The vehicle-mounted camera collects the driver's facial image at a frame rate of 10 fps; t_1 and t_2 denote the initial time 0 s (frame 1) and the next time 0.1 s (frame 10), respectively; x_η, y_η denote the abscissa and ordinate of point η at time t_1, and x'_η, y'_η the abscissa and ordinate of point η at time t_2.
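Given these coordinate definitions, the frame-to-frame state change of a feature point reduces to a coordinate difference in the face coordinate system. A minimal sketch follows; the mouth-corner coordinate values are invented purely for illustration.

```python
import numpy as np

def state_change(points_t1, points_t2):
    """Per-landmark displacement between frames t1 and t2, expressed in the
    face coordinate system (nose center as origin). Returns (dx, dy) arrays;
    e.g. a positive dy over the mouth-corner points indicates the mouth
    corners rising, the quantity the happy criterion relies on."""
    p1 = np.asarray(points_t1, dtype=float)
    p2 = np.asarray(points_t2, dtype=float)
    d = p2 - p1
    return d[:, 0], d[:, 1]

# Hypothetical mouth-corner landmarks at t1 and t2 (both corners rise by 0.2).
t1 = [[-1.0, -0.5], [1.0, -0.5]]
t2 = [[-1.0, -0.3], [1.0, -0.3]]
dx, dy = state_change(t1, t2)
```

The state expressions u_e, v_e, w_e in the criterion models would be built from such displacements of the eyebrow, eye, and mouth point groups.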
1) When the driver's expression is happy, the states of the respective face members are expressed as follows:
in summary, the criterion model of facial expression change in happy hours of the driver can be expressed as:
g_ha = α_ha·u_ha + β_ha·v_ha + γ_ha·w_ha < 0    (4)
where w_ha is determined by two mouth quantities: the degree to which the mouth corners rise in happiness and the amount the mouth stretches; g_ha is the happiness-state criterion model; α_ha, β_ha and γ_ha are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of happiness. They are obtained by computing the information gain each geometric facial feature contributes to expression judgement and normalising, giving α_ha = 0.2, β_ha = 0.3, γ_ha = 0.5.
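The weight-derivation step above — per-component information gains normalised into α, β, γ — can be sketched as follows; the gain values passed in are illustrative, not from the text:

```python
def weights_from_gains(gain_u: float, gain_v: float, gain_w: float):
    """Normalise the information gains of the eyebrow, eye and mouth
    features so the resulting weights sum to 1, as the patent's
    alpha, beta, gamma coefficients do."""
    total = gain_u + gain_v + gain_w
    return gain_u / total, gain_v / total, gain_w / total
```

For example, gains in the ratio 2 : 3 : 5 reproduce the happiness weights α_ha = 0.2, β_ha = 0.3, γ_ha = 0.5.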
2) When the driver's expression is sad, the states of the face members are expressed as follows:
in summary, the criterion model of facial expression change when the driver is sad can be expressed as:
where w_sa is determined by two mouth quantities: the degree to which the mouth corners rise in sadness and the height of the lower lip; g_sa is the sadness-state criterion model; α_sa, β_sa and γ_sa are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of sadness, obtained by computing the information gain of the geometric facial features for expression judgement and normalising, giving α_sa = 0.3, β_sa = 0.4, γ_sa = 0.3.
3) When the driver's expression is surprised, each facial member state is expressed as follows:
to sum up, the facial expression change criterion model when the driver is surprised can be expressed as:
g_su = α_su·u_su + β_su·v_su + γ_su·w_su > 0    (12)
where g_su is the surprise-state criterion model, and α_su, β_su and γ_su are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of surprise, obtained by computing the information gain of the geometric facial features for expression judgement and normalising, giving α_su = 0.2, β_su = 0.4, γ_su = 0.4.
4) When the driver's expression is anger, each face member state is expressed as follows:
in summary, the criterion model of facial expression change when the driver is angry can be expressed as:
g_an = α_an·u_an + β_an·v_an + γ_an·w_an > 0    (16)
where w_an is determined by two lip quantities: the distance between the lips when angry and the height of the lip corners; g_an is the anger-state criterion model; α_an, β_an and γ_an are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of anger, obtained by computing the information gain of the geometric facial features for expression judgement and normalising, giving α_an = 0.4, β_an = 0.4, γ_an = 0.2.
5) When the driver's expression is fear, the states of the respective face members are expressed as follows:
in conclusion, the facial expression change criterion model when the driver is frightened can be expressed as:
g_fe = α_fe·u_fe + β_fe·v_fe + γ_fe·w_fe > 0    (20)
where g_fe is the fear-state criterion model; α_fe, β_fe and γ_fe are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of fear, obtained by computing the information gain of the geometric facial features for expression judgement and normalising, giving α_fe = 0.3, β_fe = 0.3, γ_fe = 0.4.
6) When the driver's expression is aversion, each face member state is expressed as follows:
the upper lip rises, the lower lip rises:
in summary, the criterion model of facial expression change when the driver dislikes can be expressed as:
g_di = α_di·u_di + β_di·v_di + γ_di·w_di < 0    (24)
where g_di is the disgust-state criterion model; α_di, β_di and γ_di are the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of disgust, obtained by computing the information gain of the geometric facial features for expression judgement and normalising, giving α_di = 0.4, β_di = 0.3, γ_di = 0.3.
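Taken together, the six criterion models above amount to weighted sums of the component states checked against a sign threshold. A sketch of evaluating them, using the weight values from the description; the inequality direction for sadness is an assumption, since that equation is not reproduced in the text:

```python
# (alpha, beta, gamma) weights from the description, plus the firing
# condition on g: < 0 for happiness and disgust, > 0 for the others.
CRITERIA = {
    "ha": ((0.2, 0.3, 0.5), lambda g: g < 0),
    "sa": ((0.3, 0.4, 0.3), lambda g: g > 0),   # direction assumed
    "su": ((0.2, 0.4, 0.4), lambda g: g > 0),
    "an": ((0.4, 0.4, 0.2), lambda g: g > 0),
    "fe": ((0.3, 0.3, 0.4), lambda g: g > 0),
    "di": ((0.4, 0.3, 0.3), lambda g: g < 0),
}

def matching_expressions(states: dict) -> list:
    """states maps an expression code to its (u, v, w) component-state
    values; returns the codes whose criterion model fires."""
    hits = []
    for code, ((a, b, c), fires) in CRITERIA.items():
        u, v, w = states[code]
        if fires(a * u + b * v + c * w):
            hits.append(code)
    return hits
```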
The vehicle-mounted camera is an MINI-238H with 720 × 480 resolution and a 60 fps frame rate, mounted at the front windshield where it does not obstruct the driver's view. The electronic tattoo sensor is a transparent, stretchable, stickable graphene electronic tattoo sensor. The embedded controller is a microcontroller integrating 256 KB of flash memory and 96 KB of RAM to store the application and sensor code. The alarm is an XYSG-S03DYQ vehicle-mounted audible-and-visual voice alarm, 20 W power, selectable DC 12 V / DC 24 V supply.
As shown in fig. 3, the specific workflow of the method for capturing the emotion change of the driver is as follows:
S1: Acquisition of facial image and electromyographic signals of driver
The driver's face is photographed by the vehicle-mounted camera; the face images are collected and stored as RGB and depth images of size 320 × 240 at 0.1 s intervals. Meanwhile, the electronic tattoo sensor acquires and stores, by the surface-electrode method, the electromyographic signals of the corresponding arm and ankle muscles of the driver in the 20–500 Hz band, mainly collecting the electromyographic voltage.
S2: preprocessing of facial images and electromyographic signals of driver
S2-1: Face detection is performed on the driver's face images to judge whether the collected images contain the driver's face; the driver's electromyographic signals are put through DC-blocking processing and high-pass filtered with a cut-off frequency below 200 Hz.
S2-2: A positioning operation is performed to locate the face position accurately; the electromyographic signals are amplified by the high-gain amplification sub-module.
S2-3: The image enhancement module improves the visual effect of the images and suppresses noise interference using interference suppression, edge sharpening and pseudo-colour image-enhancement methods; high-frequency interference signals are removed by the low-pass filtering sub-module.
S2-4: The face normalization sub-module unifies the grey level and resizes the images to 40 × 40, adjusting them so that the face data are not disturbed by pose change, lighting or occlusion; the power-frequency notch sub-module raises the signal-to-noise ratio of the electromyographic output signal above 50 dB.
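The electromyographic branch of S2 (DC blocking / high-pass, low-pass, power-frequency notch) can be sketched with SciPy; the sampling rate, filter orders and 50 Hz mains frequency are assumptions not stated in the text:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_emg(signal: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """EMG cleanup sketch: high-pass to block DC drift, low-pass to drop
    high-frequency interference (roughly keeping the stated 20-500 Hz
    band), then a power-frequency notch. fs is an assumed sampling rate."""
    b, a = butter(4, 20.0, btype="highpass", fs=fs)
    signal = filtfilt(b, a, signal)           # remove DC and slow drift
    b, a = butter(4, 450.0, btype="lowpass", fs=fs)
    signal = filtfilt(b, a, signal)           # remove high-frequency noise
    b, a = iirnotch(50.0, Q=30.0, fs=fs)      # 50 Hz mains notch
    return filtfilt(b, a, signal)
```

Zero-phase `filtfilt` is used so the filtering does not shift the signal in time relative to the synchronously captured face images.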
S3: extraction of facial image and electromyographic signal features of driver
The first feature extraction sub-module selects, from the Active Appearance Model (AAM), the 49 facial feature points of the eyebrows, eyes, nose and mouth that weigh most heavily in facial expression, and extracts the geometric features of the preprocessed driver face images. The coordinate set of the geometric feature points of the i-th face image is h_i = (x_1, y_1, …, x_49, y_49)^T, and the facial-expression geometric feature matrix is H = (h_1, h_2, …, h_n1)^T, where n1 denotes a total of n1 face images;
the second feature extraction sub-module selects the mean absolute value a, root-mean-square value o, integrated absolute value d and variance l for time-domain feature extraction on the preprocessed electromyographic signals. The time-domain feature set of the i-th group of electromyographic signals is q_i = (a, o, d, l)^T, and the electromyographic time-domain feature matrix is Q = (q_1, q_2, …, q_n2)^T, where n2 denotes a total of n2 groups of electromyographic signals.
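A minimal sketch of the four time-domain features named above:

```python
import numpy as np

def time_domain_features(x: np.ndarray) -> np.ndarray:
    """q_i = (a, o, d, l)^T for one EMG segment: mean absolute value a,
    root-mean-square value o, integrated absolute value d, variance l."""
    a = np.mean(np.abs(x))
    o = np.sqrt(np.mean(x ** 2))
    d = np.sum(np.abs(x))
    l = np.var(x)
    return np.array([a, o, d, l])
```

Stacking one such vector per EMG segment row-wise yields the matrix Q described in the text.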
S4: driver emotion change recognition
Different emotion-change recognition methods are selected according to the change in the driver's electromyographic signals. When the time-domain features of the electromyographic signals change, the expression–driving-behaviour emotion recognition sub-module judges the driver's emotion change with a support vector machine classifier; otherwise, the expression emotion recognition sub-module performs emotion recognition with the facial-expression criterion model based on the driver's geometric facial features and an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network.
S4-1: driver facial expression based emotion recognition
S4-1-1: Construct the driver facial-expression criterion model from the state changes of the driver's facial feature areas (eyebrows, eyes and mouth — e.g. eyebrows bending, upper and lower eyelids closing, mouth corners lifting, mouth stretching), preliminarily judge the emotion change from the driver's facial expression, and on this basis apply an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network;
S4-1-2: Feature extraction yields the driver facial-expression geometric feature matrix H = (h_1, h_2, …, h_n1)^T; a convolutional neural network down-samples it to select the optimal feature H_max = max(h_1, h_2, …, h_n1);
S4-1-3: A temporal attention mechanism assigns different weights to the eyebrow, eye and mouth coordinates to generate the feature representation, with α_i carrying the importance information;
S4-1-4: The optimal driver facial-expression feature matrix, multiplied by the corresponding weights, is fed into the recursive network to complete emotion classification; the output is s(h), where M denotes the weight matrix of the fully connected layer and b the bias term.
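The attention-weighting and classification steps S4-1-3/S4-1-4 can be sketched as follows; the attention scores, M and b are placeholders for the trained network the text assumes, not a disclosed implementation:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_classify(features, scores, M, b):
    """Softmax the per-region attention scores into weights alpha_i,
    weight the features, then apply a fully connected layer (M, b) and
    softmax into per-emotion probabilities s(h)."""
    alpha = softmax(scores)             # importance of eyebrow/eye/mouth
    weighted = features * alpha         # attention-weighted representation
    return softmax(M @ weighted + b)    # s(h): one probability per emotion
```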
S4-2: emotion recognition based on driver facial expressions and driving behaviors
Driving behaviour is embodied by collecting the driver's operation of the vehicle through the electronic tattoo sensor.
S4-2-1: Face images and electromyographic signals are collected as the information carriers for driver emotion recognition; the driver's facial-expression geometric features H and the driving-behaviour electromyographic time-domain feature matrix Q are extracted, and the feature fusion sub-module constructs the optimal feature matrix R of the bimodal data with a fusion algorithm based on the Fisher criterion. The specific steps are as follows:
The Fisher discriminant function is

J_F(δ) = (δ^T S_b δ) / (δ^T S_ω δ)    (26)

where δ is any n-dimensional vector; the vector δ* that maximises J_F is called the best discriminant vector — it has the smallest intra-class scatter and the largest inter-class scatter; S_b and S_ω are the inter-class and intra-class scatter matrices respectively;
① According to formula (26): when J_F(δ) reaches its maximum, fix its denominator to a non-zero constant, take the extremum of its numerator, and use the Lagrange-multiplier method to solve for the best discriminant vectors δ*, ψ* corresponding to the driver facial geometric feature matrix H and the electromyographic time-domain feature matrix Q respectively;
② Set δ_1 = δ*, ψ_1 = ψ*, obtaining the Foley–Sammon discriminant vector sets of H and Q as {δ_i, 1 < i ≤ n1} and {ψ_i, 1 < i ≤ n2} respectively;
③ Let r_i = max{H^T δ_i, Q^T ψ_i} and R = (r_1, r_2, …, r_n)^T, i.e. the optimal feature matrix fused from H and Q, where n = max(n1, n2).
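Under the Fisher criterion J_F(δ) = (δ^T S_b δ)/(δ^T S_ω δ), the Lagrange-multiplier solution described in step ① reduces to a generalised eigenproblem S_b δ = λ S_ω δ. A sketch assuming the scatter matrices are already computed:

```python
import numpy as np
from scipy.linalg import eigh

def best_discriminant_vector(Sb: np.ndarray, Sw: np.ndarray) -> np.ndarray:
    """Maximise J(d) = (d^T Sb d)/(d^T Sw d): the optimum is the leading
    eigenvector of the generalised eigenproblem Sb d = lambda Sw d,
    equivalent to fixing the denominator and extremising the numerator
    with a Lagrange multiplier."""
    vals, vecs = eigh(Sb, Sw)   # eigenvalues in ascending order
    return vecs[:, -1]          # eigenvector of the largest eigenvalue
```

Applying this separately to the scatter matrices of H and of Q yields the δ* and ψ* of step ①.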
S4-2-2: The optimal feature matrix R is input to a support vector machine classifier to classify the driver's emotion. The objective function of the support vector machine is

min over ω, b′, ξ of ½‖ω‖² + C Σ_i ξ_i, subject to y_i(ω·r_i + b′) ≥ 1 − ξ_i, ξ_i ≥ 0
where ω is the weight vector of the support vector machine; b′ is a constant; C is the penalty coefficient, controlling how heavily misclassified samples are penalised and balancing model complexity against loss error; and ξ_i is the non-negative slack variable that adjusts how many misclassified samples are tolerated during classification.
S4-2-3: Finally, the optimal classification function model k(r) = sgn(ω·r + b′) is obtained, and the emotion category follows from the value of this classification decision function at r.
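As a sketch of steps S4-2-2/S4-2-3, scikit-learn's `SVC` can stand in for the described support vector machine; the fused matrix R, the labels and the test point are made-up toy data, and the penalty C plays the role described above:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the fused bimodal feature matrix R (4 features per row)
# with two well-separated emotion clusters.
rng = np.random.default_rng(0)
R = np.vstack([rng.normal(0, 1, (20, 4)),    # "ha" cluster
               rng.normal(3, 1, (20, 4))])   # "an" cluster
labels = np.array(["ha"] * 20 + ["an"] * 20)

# C trades model complexity against misclassification loss via the slacks.
clf = SVC(kernel="linear", C=1.0)
clf.fit(R, labels)

pred = clf.predict([[3.0, 3.0, 3.0, 3.0]])[0]   # point inside the "an" cluster
```

A multi-class SVC would be needed for all six emotions; the two-class case here just illustrates the decision-function mechanics.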
S5: dangerous emotion early warning
When the driver's emotion changes to anger — that is, when s(h) or k(r) outputs "an" — a dangerous emotion has arisen, and the system immediately reminds the driver by alarm voice to pay attention and adjust; otherwise, the flow returns to S1 for the next round of recognition.
The present invention is not limited to the above-described embodiments, and any obvious improvements, substitutions or modifications can be made by those skilled in the art without departing from the spirit of the present invention.
Claims (8)
1. A method for capturing emotion change of a driver is characterized by comprising the following steps:
S1: obtaining the face image and electromyographic signals of the driver;
S2: preprocessing the face image and electromyographic signals of the driver;
S3: extracting the facial-image and electromyographic-signal features of the driver;
S4: driver emotion change recognition
when the time-domain features of the electromyographic signals change, the expression–driving-behaviour emotion recognition sub-module judges the driver's emotion change with a support vector machine classifier; otherwise, the expression emotion recognition sub-module performs emotion recognition with the facial-expression criterion model based on the driver's geometric facial features and an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network.
2. The method for capturing emotional changes of a driver as claimed in claim 1, wherein the expression–driving-behaviour emotion recognition sub-module judging the driver's emotion change with a support vector machine classifier specifically comprises:
according to the facial expression geometric characteristics H of the driver and the time domain characteristic matrix Q of the electromyographic signals of the driving behaviors, the characteristic fusion sub-module constructs an optimal characteristic matrix R of bimodal data based on a fusion algorithm of a Fisher criterion;
the Fisher discriminant function is

J_F(δ) = (δ^T S_b δ) / (δ^T S_ω δ)    (2)

where δ is any n-dimensional vector; the vector δ* that maximises J_F is the best discriminant vector, having the smallest intra-class scatter and the largest inter-class scatter; S_b and S_ω are the inter-class and intra-class scatter matrices respectively;
according to formula (2): when J_F(δ) reaches its maximum, fix its denominator to a non-zero constant, take the extremum of its numerator, and use the Lagrange-multiplier method to solve for the best discriminant vectors δ*, ψ* corresponding to the driver facial geometric feature matrix H and the electromyographic time-domain feature matrix Q respectively; set δ_1 = δ*, ψ_1 = ψ*, obtaining the Foley–Sammon discriminant vector sets of H and Q as {δ_i, 1 < i ≤ n1} and {ψ_i, 1 < i ≤ n2}; let r_i = max{H^T δ_i, Q^T ψ_i} and R = (r_1, r_2, …, r_n)^T, the optimal feature matrix fused from H and Q, where n = max(n1, n2).
3. The method for capturing emotion changes of a driver as claimed in claim 1, wherein the expression emotion recognition sub-module performing emotion recognition with the facial-expression criterion model based on the driver's geometric facial features and an expression recognition method that fuses a temporal attention recursive network with a convolutional neural network specifically comprises: constructing the driver facial-expression criterion model from the state changes of the driver's facial feature areas and preliminarily judging the emotion change from the driver's facial expression; then applying the fused temporal-attention-recursive-network and convolutional-neural-network method for emotion recognition:
feature extraction yields the driver facial-expression geometric feature matrix H = (h_1, h_2, …, h_n1)^T; a convolutional neural network down-samples it to select the optimal feature H_max = max(h_1, h_2, …, h_n1);
a temporal attention mechanism assigns different weights to the eyebrow, eye and mouth coordinates to generate the feature representation, with α_i carrying the importance information;
the optimal driver facial-expression feature matrix, multiplied by the corresponding weights, is fed into the recursive network to complete emotion classification; the output is s(h), where M denotes the weight matrix of the fully connected layer and b the bias term.
4. The method for capturing emotional changes of a driver as claimed in claim 3, wherein the driver facial-expression criterion model is constructed on the states of three facial components — eyebrows, eyes and mouth — and a driver face coordinate system is established with the centre of the driver's nose as origin, the line joining the nose wings as x-axis and the line of the nose bridge as y-axis, wherein: 5 feature points are taken per eyebrow, the left eyebrow's marked 1, 2, 3, 4, 5 and the right eyebrow's marked 6, 7, 8, 9, 10; 6 feature points are taken per eye, the left eye's marked 11–16 and the right eye's marked 17–22; 7 feature points are taken for the nose, marked 23–29; and 20 feature points are taken for the mouth, marked 30–49;
the driver facial expression criterion model comprises:
1) the criterion model of facial expression change when the driver is happy is as follows:
g_ha = α_ha·u_ha + β_ha·v_ha + γ_ha·w_ha < 0
wherein: u_ha, v_ha, w_ha represent the state expressions of the eyebrows, eyes and mouth in happiness, and α_ha, β_ha, γ_ha represent the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of happiness;
2) the criterion model of facial expression change when the driver is sad is as follows:
wherein: u_sa, v_sa, w_sa represent the state expressions of the eyebrows, eyes and mouth in sadness, and α_sa, β_sa, γ_sa represent the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of sadness;
3) the facial expression change criterion model when the driver is surprised is as follows:
g_su = α_su·u_su + β_su·v_su + γ_su·w_su > 0
wherein: u_su, v_su, w_su represent the state expressions of the eyebrows, eyes and mouth in surprise, and α_su, β_su, γ_su represent the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of surprise;
4) the criterion model of the change of the facial expression when the driver is angry is as follows:
g_an = α_an·u_an + β_an·v_an + γ_an·w_an > 0
wherein: u_an, v_an, w_an represent the state expressions of the eyebrows, eyes and mouth in anger, and α_an, β_an, γ_an represent the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of anger;
5) the facial expression change criterion model when the driver is frightened is as follows:
g_fe = α_fe·u_fe + β_fe·v_fe + γ_fe·w_fe > 0
wherein: u_fe, v_fe, w_fe represent the state expressions of the eyebrows, eyes and mouth in fear, and α_fe, β_fe, γ_fe represent the weight coefficients of the driver's eyebrow, eye and mouth states in the expression of fear;
6) the facial expression change criterion model when the driver dislikes is as follows:
g_di = α_di·u_di + β_di·v_di + γ_di·w_di < 0
5. The method for capturing emotional changes of a driver as claimed in claim 1, wherein the facial image and electromyographic signal feature extraction of the driver is specifically as follows:
the first feature extraction sub-module selects facial feature points with the active appearance model to extract the geometric features of the preprocessed driver face images, constructing the facial-expression geometric feature matrix H = (h_1, h_2, …, h_n1)^T, where n1 denotes a total of n1 face images;
the second feature extraction sub-module selects the mean absolute value a, root-mean-square value o, integrated absolute value d and variance l for time-domain feature extraction on the preprocessed electromyographic signals, constructing the electromyographic time-domain feature matrix Q = (q_1, q_2, …, q_n2)^T, where n2 denotes a total of n2 groups of electromyographic signals.
6. The method of capturing emotional changes of a driver as recited in claim 1, further comprising: when the driver's emotion changes to anger, i.e. s(h) or k(r) outputs "an", indicating that a dangerous emotion has occurred, the system immediately reminds the driver by alarm voice to adjust.
7. The method of capturing emotional changes of a driver as claimed in claim 1, wherein the preprocessing of the facial image and the electromyographic signals of the driver comprises:
carrying out face detection on the face image of the driver, and judging whether the face of the driver is contained in the collected face image of the driver; positioning operation is carried out to accurately find out the face position; the visual effect of the image is improved by using interference suppression, edge sharpening and pseudo-color processing image enhancement methods, and noise interference is eliminated; unifying the gray value and size of the image to 40 × 40;
the driver's electromyographic signals are subjected to DC-blocking processing, amplified by the high-gain amplification sub-module and stripped of high-frequency interference by the low-pass filtering sub-module, and the power-frequency notch sub-module raises the signal-to-noise ratio of the output signal above 50 dB.
8. An apparatus for implementing the method for capturing emotional changes of a driver as claimed in any one of claims 1 to 7, comprising:
the system comprises a facial-expression acquisition and processing module and a driving-behaviour acquisition and processing module, wherein the facial-expression acquisition and processing module comprises a vehicle-mounted camera, a face detection sub-module, a face alignment sub-module, an image enhancement sub-module and a face normalization sub-module, and the driving-behaviour acquisition and processing module comprises an electronic tattoo sensor, a DC-blocking sub-module, a high-gain amplification sub-module, a low-pass filtering sub-module and a power-frequency notch sub-module;
the emotion change recognition and early warning system comprises a feature extraction module, an emotion change recognition module and an early warning module; the feature extraction module comprises a first feature extraction sub-module and a second feature extraction sub-module; the emotion change recognition module comprises an expression emotion recognition sub-module, an expression–driving-behaviour emotion recognition sub-module and a feature fusion sub-module; the early warning module comprises an embedded controller and an alarm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110730246.4A CN113469022B (en) | 2021-06-29 | Device and method for capturing emotion change of driver |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113469022A true CN113469022A (en) | 2021-10-01 |
CN113469022B CN113469022B (en) | 2024-05-14 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108216254A (en) * | 2018-01-10 | 2018-06-29 | 山东大学 | The road anger Emotion identification method merged based on face-image with pulse information |
CN110472511A (en) * | 2019-07-19 | 2019-11-19 | 河海大学 | A kind of driver status monitoring device based on computer vision |
KR20200010680A (en) * | 2018-07-11 | 2020-01-31 | 한국과학기술원 | Automated Facial Expression Recognizing Systems on N frames, Methods, and Computer-Readable Mediums thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108216254B (en) | Road anger emotion recognition method based on fusion of facial image and pulse information | |
CN108053615B (en) | Method for detecting fatigue driving state of driver based on micro-expression | |
CN104077579B (en) | Facial expression recognition method based on expert system | |
CN109740477B (en) | Driver fatigue detection system and fatigue detection method thereof | |
CN111190484B (en) | Multi-mode interaction system and method | |
CN110728241A (en) | Driver fatigue detection method based on deep learning multi-feature fusion | |
CN111582086A (en) | Fatigue driving identification method and system based on multiple characteristics | |
CN107798318A (en) | The method and its device of a kind of happy micro- expression of robot identification face | |
CN108596087B (en) | Driving fatigue degree detection regression model based on double-network result | |
CN110859609B (en) | Multi-feature fusion fatigue driving detection method based on voice analysis | |
CN111753674A (en) | Fatigue driving detection and identification method based on deep learning | |
CN107563346A (en) | One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing | |
CN112949560B (en) | Method for identifying continuous expression change of long video expression interval under two-channel feature fusion | |
CN110264670A (en) | Based on passenger stock tired driver driving condition analytical equipment | |
Kahlon et al. | Driver drowsiness detection system based on binary eyes image data | |
CN110232327B (en) | Driving fatigue detection method based on trapezoid cascade convolution neural network | |
CN113989788A (en) | Fatigue detection method based on deep learning and multi-index fusion | |
CN113469022B (en) | Device and method for capturing emotion change of driver | |
CN113469022A (en) | Driver emotion change capturing device and method | |
CN113887386A (en) | Fatigue detection method based on multi-feature fusion of deep learning and machine learning | |
CN115713754B (en) | Staged hierarchical intervention method and system based on driver fear emotion recognition | |
Yao et al. | Filter-pruned 3D convolutional neural network for drowsiness detection | |
CN114582008A (en) | Living iris detection method based on two wave bands | |
Huang et al. | Driver fatigue expression recognition research based on convolutional neural network | |
CN113627300A (en) | Face recognition and living body detection method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |