CN109993093A - Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics - Google Patents

Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics

Info

Publication number
CN109993093A
Authority
CN
China
Prior art keywords
image
acquisition
driver
face
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910228205.8A
Other languages
Chinese (zh)
Other versions
CN109993093B (en)
Inventor
杨立才
张成昱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201910228205.8A
Publication of CN109993093A
Application granted
Publication of CN109993093B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a road rage monitoring method, system, equipment, and medium based on facial and respiratory characteristics. A driver's facial images and respiration information are acquired and separately preprocessed, and features reflecting the driver's road rage emotion are extracted from each. The two kinds of extracted features are fused, and a driver road rage emotion recognition model is then built with machine learning. The model judges whether the driver is in a road rage state, and the driver's road rage mood can be regulated according to the result. Because easily acquired images and respiration information are used, the driver's emotional state can be detected without interfering with normal driving; when the driver is in a road rage mood, an audio device can issue reminders and warnings to help the driver adjust the mood.

Description

Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics
Technical field
The present disclosure relates to a road rage monitoring method, system, equipment, and medium based on features of facial images and respiration information.
Background technique
The statements in this section merely provide background related to the present disclosure and do not necessarily constitute prior art.
In implementing the present disclosure, the inventors found the following technical problems in the prior art:
"Road rage" refers to aggressive or angry behavior by the driver of a car or other motor vehicle during driving, such as rude gestures, verbal insults, or deliberately driving in a dangerous or threatening manner. Studies have shown that road rage affects normal driving: aggressive driving, risky driving, and driving errors are all positively correlated with road rage. Road rage driving has become a major cause of traffic accidents, so it is necessary to recognize a driver's road rage emotion and provide safety warnings and mood regulation.
Most existing driver road rage recognition methods judge the driver's emotion by fusing facial images with physiological signals. The most commonly used physiological signals at present are EEG signals and pulse signals, both of which can be used to recognize road rage. Limited by current sensor technology, however, the acquisition devices for EEG and pulse signals interfere more or less with normal driving: obtaining EEG signals requires the driver to wear an EEG cap on the head, and obtaining pulse signals requires wearing sensors on the wrist, fingers, or other body parts. These acquisition devices burden the driver, may cause discomfort, and affect the driver's normal driving behavior.
Summary of the invention
To overcome the deficiencies of the prior art, the present disclosure provides a road rage monitoring method, system, equipment, and medium based on facial and respiratory characteristics, intended to recognize and respond to the driver's road rage emotion, and to regulate the driver's mood, without interfering with normal driving.
In a first aspect, the present disclosure provides a road rage monitoring method based on facial and respiratory characteristics.
The road rage monitoring method based on facial and respiratory characteristics comprises:
obtaining facial video and respiration data of a driver;
extracting facial-region images from the facial video, and extracting facial features from the extracted facial-region images;
extracting respiratory features from the acquired respiration data;
performing feature fusion on the extracted facial features and respiratory features; and
inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
In a second aspect, the present disclosure provides a road rage monitoring system based on facial and respiratory characteristics.
The road rage monitoring system based on facial and respiratory characteristics comprises:
an acquisition module for obtaining facial video and respiration data of a driver;
a facial feature extraction module for extracting facial-region images from the facial video and extracting facial features from the extracted facial-region images;
a respiratory feature extraction module for extracting respiratory features from the acquired respiration data;
a feature fusion module for performing feature fusion on the extracted facial features and respiratory features; and
a road rage state monitoring module for inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
In a third aspect, the present disclosure further provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the method of the first aspect is completed.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method of the first aspect are completed.
Compared with the prior art, the beneficial effects of the present disclosure are:
1. For extracting the facial-image portion of the video, a method based on HOG features and an image pyramid is used, so facial images can be extracted quickly and effectively.
2. Facial images and respiration information are combined with convolutional-neural-network methods for feature fusion and machine learning, so the trained road rage recognition model has high reliability.
3. Respiration information is used as a signal source: a belt-type respiration acquisition terminal is integrated into the safety belt, so the respiration signal can be obtained stably.
4. The camera used for information acquisition is placed in front of the driver and the belt-type respiration acquisition terminal is integrated into the safety belt, so the driver's normal driving operations are not affected.
5. With the respiration signal as the signal source, the belt-type respiration acquisition terminal can be placed on the safety belt. A driver is required to wear the safety belt while driving; integrating the belt-type respiration device into the safety belt both obtains an effective respiration signal and does not affect normal driving.
6. Convolutional neural networks are one of the common algorithms of deep learning and one of the core algorithms of image recognition, with good performance on image processing and classification problems. Training the road rage emotion recognition model with a convolutional neural network and fusing facial information with respiration information effectively improve the recognition rate and robustness of the model.
Detailed description of the invention
The accompanying drawings, which constitute a part of this application, provide a further understanding of the application; the exemplary embodiments of the application and their descriptions explain the application and do not unduly limit it.
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a hardware system diagram of the invention;
Fig. 3 is a convolutional neural network structure diagram of the invention;
Fig. 4 is a facial image extraction step diagram of the invention;
Fig. 5 is a road rage emotion recognition model training block diagram of the invention.
Specific embodiment
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the technical field to which the application belongs.
It should be noted that the terms used herein merely describe specific embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, singular forms are also intended to include plural forms; additionally, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
Full English name of HOG: Histogram of Oriented Gradients (HOG).
Embodiment one:
As shown in Fig. 1, the road rage monitoring method based on facial and respiratory characteristics comprises:
obtaining facial video and respiration data of a driver;
extracting facial-region images from the facial video, and extracting facial features from the extracted facial-region images;
extracting respiratory features from the acquired respiration data;
performing feature fusion on the extracted facial features and respiratory features; and
inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
As one embodiment, the specific steps of extracting facial-region images from the facial video are:
selecting facial video of a set duration and extracting one frame of facial image at intervals of a set period, so that several frames of facial images are extracted in total; smoothing and denoising each extracted frame;
for the several denoised frames, extracting the driver's facial-region image by means of HOG feature extraction and an image pyramid.
As shown in Fig. 4, further, the specific steps of extracting the driver's facial-region image from the denoised frames by means of HOG feature extraction and an image pyramid are:
sub-sampling each denoised frame, constructing one image pyramid per frame;
extracting a HOG feature vector from each layer sub-image of each image pyramid, and standardizing the extracted HOG feature vectors;
finally, cascading the HOG feature vectors of all layers of each image pyramid to obtain the HOG pyramid feature;
inputting the HOG pyramid feature into a pre-trained support vector machine (SVM) face-region detection model, retaining the face-region part of the image and deleting non-face regions, to obtain the facial-region image of the current frame.
Finally, based on bilinear interpolation, the facial-region images of all extracted frames are normalized to a uniform image size for subsequent feature extraction.
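As a concrete illustration, the sketch below builds one pyramid per frame with scikit-image and cascades the per-layer HOG vectors; the fixed 128×128 window, the pyramid depth, and the HOG parameters are assumptions for illustration, not values given in the disclosure.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize, pyramid_gaussian
from skimage.feature import hog

def hog_pyramid_feature(frame, levels=3):
    """Cascade the HOG vectors of every pyramid layer into one feature."""
    gray = resize(rgb2gray(frame), (128, 128))  # fixed size -> fixed feature length
    feats = []
    for layer in pyramid_gaussian(gray, max_layer=levels - 1, downscale=2):
        feats.append(hog(layer, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), block_norm='L2'))
    return np.concatenate(feats)  # the HOG pyramid feature fed to the SVM
```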
It is to be understood that the pre-trained SVM face-region detection model is obtained by the following training process:
constructing a support vector machine model;
training the support vector machine model with the HOG pyramid features of historical driver facial images labeled as face regions and non-face regions;
obtaining the trained SVM face-region detection model.
The training standard of the support vector machine is that the classification accuracy exceeds a set threshold.
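A minimal scikit-learn sketch of this training procedure follows; the placeholder data, the kernel choice, and the accuracy threshold are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder HOG-pyramid features and face / non-face labels (1 = face).
# In practice these come from labeled historical driver facial images.
X = np.random.rand(200, 1024)
y = np.random.randint(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel='linear')          # kernel choice is an assumption
clf.fit(X_tr, y_tr)

# Training standard: classification accuracy must exceed a set threshold.
acc = clf.score(X_te, y_te)
if acc < 0.95:                      # threshold value is illustrative
    print('accuracy below threshold - keep training / tune parameters')
```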
Further, the specific steps of extracting a HOG feature vector from each layer sub-image of the image pyramid are:
Calculate the gradients gx and gy of each pixel (x, y) of the image, and the gradient magnitude g and direction θ:
gx = f(x+1, y) - f(x-1, y);
gy = f(x, y+1) - f(x, y-1);
g = √(gx² + gy²);
θ = arctan(gy / gx).
The image is divided into a set number of equally sized region blocks, and each block is subdivided into a set number of equally sized cells.
Based on the gradient magnitude g and direction θ of each pixel, the gradient orientation histogram of each cell is computed; the cell histograms within the same block are concatenated into a block histogram; the block histogram is standardized with the L2 norm; finally, the feature vectors of all blocks are cascaded to obtain the HOG feature vector of the entire image.
Further, the extracted HOG feature vectors are standardized as:
x ← x / √(‖x‖₂² + ε²)
where x is the HOG feature vector, ‖x‖₂ is the L2 norm of x, and ε is a small constant.
As one embodiment, the specific steps of extracting facial features from the acquired facial images are:
performing facial feature extraction on the preprocessed driver facial images, extracting the facial features with a convolutional neural network.
The feature extraction of the facial images uses a convolutional neural network (CNN). CNNs are a common representative algorithm of deep learning: neural networks with a deep structure that include convolution computation. They can extract discriminative image features in fine-grained image classification and recognition, for other classifiers to learn from. As shown in Fig. 3, a CNN generally consists of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. The input layer receives the input data, here the driver's facial image. A convolutional layer traverses the data with convolution kernels to extract features of the input; a kernel is usually a 3×3 or 5×5 weight matrix that multiplies the input elements and outputs a single element value. After the convolutional layer processes the features, a pooling layer performs feature mapping to reduce the number of features. Activation functions introduce nonlinear elements into the neural network; common activation functions include the Sigmoid, Tanh, and ReLU functions. The fully connected layer expands the extracted multidimensional features into a feature vector and passes it to the output layer through the activation function. For classification problems, the output layer can process the feature vector with a classification function and output a class label. A CNN may have multiple convolutional and pooling layers. Here, only the feature vector of the facial image is extracted, without output classification.
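For concreteness, a minimal PyTorch sketch of such a convolutional feature extractor follows; the layer sizes, the 128×128 grayscale input, and the 64-dimensional output feature are illustrative assumptions, since the disclosure does not fix an architecture.

```python
import torch
import torch.nn as nn

class FaceFeatureCNN(nn.Module):
    """Extracts a feature vector from a face image, without classification."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 32 * 32, feat_dim)  # flatten -> feature vector

    def forward(self, x):          # x: (N, 1, 128, 128) grayscale face images
        x = self.features(x)       # -> (N, 32, 32, 32)
        return self.fc(x.flatten(1))

feat = FaceFeatureCNN()(torch.randn(1, 1, 128, 128))  # -> shape (1, 64)
```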
As one embodiment, before extracting respiratory features from the acquired respiration data, the acquired respiration data need to be preprocessed.
Further, the specific step of preprocessing the acquired respiration data is:
denoising and filtering the respiration data based on empirical mode decomposition.
It is to be understood that the signal x(t) is decomposed based on empirical mode decomposition (EMD) as:
x(t) = imf₁(t) + imf₂(t) + ... + imfₙ(t) + RES(t)
where imfᵢ(t) is the i-th IMF component and RES denotes the residual. EMD decomposes the signal into a finite number of IMF components and one residual RES.
Each IMF component must satisfy two conditions:
(1) the number of extreme points of the signal and the number of zero crossings must be equal or differ by at most one;
(2) at any point, the mean of the upper envelope and the lower envelope of the signal is zero.
After each IMF component is extracted, it is judged whether the residual RES can still yield an IMF component satisfying the conditions; if so, the decomposition continues; if not, it terminates.
Beneficial effects: decomposing the signal with EMD allows noise and unwanted signal parts to be removed, so the acquired respiration signal is filtered and denoised based on EMD. Empirical mode decomposition requires neither preset basis functions nor analytic functions, needs no prior knowledge of signal characteristics such as sparsity, and can decompose the signal adaptively.
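A sketch of this denoising step is shown below, using the PyEMD package (a library choice assumed here; the disclosure does not name one). Treating the first, highest-frequency IMF as noise is likewise an illustrative choice.

```python
import numpy as np
from PyEMD import EMD

def emd_denoise(breath, n_noise_imfs=1):
    """Drop the highest-frequency IMF(s) as noise and reconstruct the signal."""
    decomposer = EMD()
    decomposer.emd(np.asarray(breath, dtype=float))
    imfs, residue = decomposer.get_imfs_and_residue()
    # imfs[0] is the highest-frequency component; discarding it as noise
    # is an assumption for illustration, not a rule fixed by the disclosure.
    return imfs[n_noise_imfs:].sum(axis=0) + residue
```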
As one embodiment, the specific steps of extracting respiratory features from the acquired respiration data are:
performing respiratory feature extraction on the preprocessed respiration data, extracting time-domain features, frequency-domain features, and nonlinear features.
The time-domain features comprise: mean, standard deviation, skewness, and kurtosis.
The skewness s and kurtosis k are computed as:
s = E[((x − μ)/σ)³];
k = E[((x − μ)/σ)⁴];
where μ is the mean of the respiration signal and σ is its standard deviation. Skewness indicates the degree of central symmetry of the signal; kurtosis indicates how peaked or flat the signal's distribution is.
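The four time-domain features above map directly onto SciPy, as in this short sketch:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x):
    """Mean, standard deviation, skewness s, and kurtosis k of the signal."""
    x = np.asarray(x, dtype=float)
    return {
        'mean': x.mean(),
        'std': x.std(),
        'skewness': skew(x),                    # s = E[((x - mean) / sigma)^3]
        'kurtosis': kurtosis(x, fisher=False),  # k = E[((x - mean) / sigma)^4]
    }
```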
The frequency-domain features are the total power of the respiration signal in each frequency band; the bands are 0-0.1 Hz, 0.1-0.2 Hz, 0.2-0.3 Hz, 0.3-0.4 Hz, and 0.4-1 Hz.
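A sketch of these band-power features, integrating a Welch power spectral density over each listed band, is shown below; the sampling rate fs is an assumption.

```python
import numpy as np
from scipy.signal import welch

BANDS = [(0.0, 0.1), (0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 1.0)]

def band_powers(breath, fs=10.0):
    """Total power of the respiration signal in each frequency band."""
    f, pxx = welch(breath, fs=fs, nperseg=min(len(breath), 1024))
    return [np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for lo, hi in BANDS]
```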
The nonlinear features comprise: multiscale entropy, approximate entropy, or heart rate variability.
The multiscale entropy algorithm consists of two parts, coarse-graining and sample entropy, and assesses the complexity of a time series by computing its sample entropy at multiple time scales. When the driver is angry, breathing becomes relatively tense and rapid, the complexity of the respiration time series rises, and the multiscale entropy changes considerably compared with the calm state.
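The compact sketch below implements exactly this: coarse-grain the series at each scale, then take its sample entropy. The embedding dimension m = 2 and tolerance r = 0.15·σ follow common defaults and are assumptions, not values from the disclosure.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    def pair_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)  # Chebyshev distance
        return (np.count_nonzero(d <= r) - len(t)) / 2          # drop self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the series at each scale, then take its sample entropy."""
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]
```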
Approximate entropy is a nonlinear dynamics parameter that quantifies the regularity and irregularity of time-series fluctuations. It reflects the likelihood of new information appearing in the time series; the more irregular the series, the larger its approximate entropy. When the driver is angry, the respiration signal fluctuates more than in the calm state, the irregularity of the time series is high, and the approximate entropy is large.
Heart rate variability refers to the variation in successive beat-to-beat cycles; it contains information that reflects certain cardiovascular conditions and can also reflect a person's mood. When the driver is angry, the period variation of the respiration signal also changes, so heart rate variability-related indices can be used to judge the driver's emotional state.
As one embodiment, the specific steps of performing feature fusion on the extracted facial features and respiratory features are:
normalizing both the facial features and the respiratory features with max-min normalization;
fusing the normalized facial features and respiratory features by weighting to obtain the fused feature vector.
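A minimal sketch of this fusion step follows: max-min normalization of each feature vector, then weighted concatenation. The weight values are illustrative; the disclosure does not specify them.

```python
import numpy as np

def min_max(v):
    """Max-min normalization to the [0, 1] range."""
    v = np.asarray(v, dtype=float)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def fuse(face_feat, resp_feat, w_face=0.6, w_resp=0.4):
    """Weighted fusion of normalized facial and respiratory features."""
    return np.concatenate([w_face * min_max(face_feat),
                           w_resp * min_max(resp_feat)])
```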
As one embodiment, the deep learning model specifically refers to: a convolutional neural network model.
As one embodiment, as shown in Fig. 5, the training steps of the trained deep learning model are:
obtaining facial images and respiration data of drivers;
extracting facial features from the acquired facial images, and respiratory features from the acquired respiration data;
performing feature fusion on the extracted facial features and respiratory features to obtain fused feature vectors;
labeling the fused feature vectors with road rage and non-road-rage labels;
dividing the labeled fusion feature vectors into a training set and a test set;
constructing a convolutional neural network model and inputting the training set into it for training; when the recognition rate reaches a set threshold, an initially trained convolutional neural network model is obtained; otherwise, training continues;
then inputting the test set into the initially trained convolutional neural network model for testing; if the test classification accuracy is higher than a set threshold, the trained convolutional neural network model is obtained; otherwise, the network parameters are optimized, the training set is updated, and training is repeated until the trained convolutional neural network model is obtained.
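A hedged PyTorch sketch of this train-then-test procedure is given below. The placeholder data, the fully connected head (standing in for the disclosure's CNN classifier on fused vectors), the optimizer settings, and the thresholds are all assumptions.

```python
import torch
import torch.nn as nn

# Placeholder fused feature vectors and rage / non-rage labels (1 = rage).
X_train, y_train = torch.randn(200, 128), torch.randint(0, 2, (200,))
X_test, y_test = torch.randn(50, 128), torch.randint(0, 2, (50,))

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                 # train until the recognition threshold
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

with torch.no_grad():                   # test-set check of classification accuracy
    acc = (model(X_test).argmax(dim=1) == y_test).float().mean().item()
if acc < 0.9:                           # threshold value is illustrative
    print('below threshold: tune parameters, update training set, retrain')
```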
As one embodiment, the road rage monitoring state comprises: road rage or non-road-rage.
As one embodiment, the facial images are obtained by an infrared high-speed camera; the respiration data are obtained by a belt-type respiration acquisition terminal.
Optionally, the infrared high-speed camera is fixed on the dashboard directly in front of the driver's seat and is connected to a controller; the acquired facial images or video are uploaded to the controller.
It is to be understood that the infrared high-speed camera can rotate within a certain range to obtain frontal facial images of the driver.
Further, the infrared high-speed camera can acquire facial images of the driver at night.
The beneficial effect of the above technical solution is that all-around acquisition of the driver's facial image can be achieved, avoiding capturing only the left or only the right side of the face.
Optionally, the belt-type respiration acquisition terminal is a pressure sensor connected to the controller, and the acquired respiration data are uploaded to the controller. The pressure sensor is arranged on the safety belt of the driver's seat; after the driver fastens the safety belt, the pressure sensor, built into the belt, sits at the middle of the driver's abdomen. The pressure sensor acquires the driver's abdominal pressure data, which are regarded as respiration data.
The beneficial effect of building the pressure sensor into the safety belt is that the driver can operate the vehicle freely; placing the sensor elsewhere would restrict the driver's radius of activity and movements.
As shown in Fig. 2, the controller preprocesses the acquired facial images and respiration data and extracts features from each, fuses the extracted facial and respiratory features, and judges the road rage result from the fused features; if the state is road rage, the controller issues a control instruction to an audio device, which reminds the driver to adjust the mood. The audio device includes a microphone.
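Putting the pieces together, one controller cycle could look like the sketch below. It reuses the helper functions from the earlier sketches; classify() and play_audio_warning() are hypothetical stand-ins for the trained model's inference and the control instruction sent to the audio device.

```python
def monitor_step(frame, breath_window, model):
    """One controller cycle: extract, fuse, classify, and warn if needed."""
    face_feat = hog_pyramid_feature(frame)      # facial features (sketch above)
    clean = emd_denoise(breath_window)          # EMD preprocessing (sketch above)
    resp_feat = list(time_domain_features(clean).values()) + band_powers(clean)
    fused = fuse(face_feat, resp_feat)
    # classify() and play_audio_warning() are hypothetical helpers standing in
    # for model inference and the audio-device control instruction.
    if classify(model, fused) == 'road rage':
        play_audio_warning('Road rage detected - please calm down.')
```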
Embodiment two: a road rage monitoring system based on facial and respiratory characteristics is provided.
The road rage monitoring system based on facial and respiratory characteristics comprises:
an acquisition module for obtaining facial video and respiration data of a driver;
a facial feature extraction module for extracting facial-region images from the facial video and extracting facial features from the extracted facial-region images;
a respiratory feature extraction module for extracting respiratory features from the acquired respiration data;
a feature fusion module for performing feature fusion on the extracted facial features and respiratory features; and
a road rage state monitoring module for inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
Embodiment three: this embodiment further provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, each operation of the method is completed. For brevity, details are not repeated here.
The electronic device may be a mobile terminal or a non-mobile terminal. Non-mobile terminals include desktop computers; mobile terminals include smart phones (e.g., Android phones, iOS phones), smart glasses, smart watches, smart bracelets, tablet computers, laptops, personal digital assistants, and other mobile internet devices capable of wireless communication.
It should be understood that, in the present disclosure, the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include read-only memory and random access memory, and provides instructions and data to the processor. A part of the memory may also include non-volatile random access memory; for example, the memory may also store device-type information.
During implementation, each step of the above method can be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in the present disclosure may be directly embodied as being executed by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method with its hardware. To avoid repetition, details are not given here. Those of ordinary skill in the art will recognize that the exemplary units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or as the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of this application. The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), and magnetic or optical disks.
The above are merely preferred embodiments of the present application and are not intended to limit it; for those skilled in the art, various modifications and changes are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included within the scope of protection of this application.

Claims (10)

1. A road rage monitoring method based on facial and respiratory characteristics, characterized by comprising:
obtaining facial video and respiration data of a driver;
extracting facial-region images from the facial video, and extracting facial features from the extracted facial-region images;
extracting respiratory features from the acquired respiration data;
performing feature fusion on the extracted facial features and respiratory features; and
inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
2. The method according to claim 1, characterized in that the specific steps of extracting facial-region images from the facial video are:
selecting facial video of a set duration and extracting one frame of facial image at intervals of a set period, so that several frames of facial images are extracted in total; smoothing and denoising each extracted frame;
for the several denoised frames, extracting the driver's facial-region image by means of HOG feature extraction and an image pyramid.
3. The method according to claim 2, characterized in that the specific steps of extracting the driver's facial-region image from the denoised frames by means of HOG feature extraction and an image pyramid are:
sub-sampling each denoised frame, constructing one image pyramid per frame;
extracting a HOG feature vector from each layer sub-image of each image pyramid, and standardizing the extracted HOG feature vectors;
finally, cascading the HOG feature vectors of all layers of each image pyramid to obtain the HOG pyramid feature;
inputting the HOG pyramid feature into a pre-trained support vector machine face-region detection model, retaining the face-region part of the image and deleting non-face regions, to obtain the facial-region image of the current frame.
4. The method according to claim 1, characterized in that, before extracting respiratory features from the acquired respiration data, the acquired respiration data need to be preprocessed;
the specific step of preprocessing the acquired respiration data being:
denoising and filtering the respiration data based on empirical mode decomposition;
alternatively,
the specific steps of extracting respiratory features from the acquired respiration data being:
performing respiratory feature extraction on the preprocessed respiration data, extracting time-domain features, frequency-domain features, and nonlinear features;
the time-domain features comprising: mean, standard deviation, skewness, and kurtosis;
the frequency-domain features being the total power of the respiration signal in each frequency band, the bands being 0-0.1 Hz, 0.1-0.2 Hz, 0.2-0.3 Hz, 0.3-0.4 Hz, and 0.4-1 Hz;
the nonlinear features comprising: multiscale entropy, approximate entropy, or heart rate variability.
5. The method according to claim 1, characterized in that the specific steps of performing feature fusion on the extracted facial features and respiratory features are:
normalizing both the facial features and the respiratory features with max-min normalization; fusing the normalized facial features and respiratory features by weighting to obtain the fused feature vector.
6. The method according to claim 1, characterized in that the training steps of the trained deep learning model are:
obtaining facial images and respiration data of drivers;
extracting facial features from the acquired facial images, and respiratory features from the acquired respiration data;
performing feature fusion on the extracted facial features and respiratory features to obtain fused feature vectors;
labeling the fused feature vectors with road rage and non-road-rage labels;
dividing the labeled fusion feature vectors into a training set and a test set;
constructing a convolutional neural network model and inputting the training set into it for training; when the recognition rate reaches a set threshold, an initially trained convolutional neural network model is obtained; otherwise, training continues;
then inputting the test set into the initially trained convolutional neural network model for testing; if the test classification accuracy is higher than a set threshold, the trained convolutional neural network model is obtained; otherwise, the network parameters are optimized, the training set is updated, and training is repeated until the trained convolutional neural network model is obtained.
7. The method according to claim 2, characterized in that the facial images are obtained by an infrared high-speed camera, and the respiration data are obtained by a belt-type respiration acquisition terminal;
the belt-type respiration acquisition terminal being a pressure sensor connected to a controller, the acquired respiration data being uploaded to the controller; the pressure sensor being arranged on the safety belt of the driver's seat such that, after the driver fastens the safety belt, the pressure sensor, built into the belt, sits at the middle of the driver's abdomen; the pressure sensor acquiring the driver's abdominal pressure data, which are regarded as respiration data.
8. A road rage monitoring system based on facial and respiratory characteristics, characterized by comprising:
an acquisition module for obtaining facial video and respiration data of a driver;
a facial feature extraction module for extracting facial-region images from the facial video and extracting facial features from the extracted facial-region images;
a respiratory feature extraction module for extracting respiratory features from the acquired respiration data;
a feature fusion module for performing feature fusion on the extracted facial features and respiratory features; and
a road rage state monitoring module for inputting the fused features into a trained deep learning model and outputting the road rage monitoring state.
9. An electronic device, characterized by comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the steps of the method of any one of claims 1-7 are completed.
10. A computer-readable storage medium, characterized by being for storing computer instructions; when the computer instructions are executed by a processor, the steps of the method of any one of claims 1-7 are completed.
CN201910228205.8A 2019-03-25 2019-03-25 Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics Active CN109993093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910228205.8A CN109993093B (en) 2019-03-25 2019-03-25 Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910228205.8A CN109993093B (en) 2019-03-25 2019-03-25 Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics

Publications (2)

Publication Number Publication Date
CN109993093A true CN109993093A (en) 2019-07-09
CN109993093B CN109993093B (en) 2022-10-25

Family

ID=67131402

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910228205.8A Active CN109993093B (en) 2019-03-25 2019-03-25 Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics

Country Status (1)

Country Link
CN (1) CN109993093B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110693508A (en) * 2019-09-02 2020-01-17 中国航天员科研训练中心 Multi-channel cooperative psychophysiological active sensing method and service robot
CN110751015A (en) * 2019-09-02 2020-02-04 合肥工业大学 Perfusion optimization and artificial intelligence emotion monitoring method for facial infrared heat map
CN110751381A (en) * 2019-09-30 2020-02-04 东南大学 Road rage vehicle risk assessment and prevention and control method
CN110781719A (en) * 2019-09-02 2020-02-11 中国航天员科研训练中心 Non-contact and contact cooperative mental state intelligent monitoring system
CN110991428A (en) * 2019-12-30 2020-04-10 山东大学 Breathing signal emotion recognition method and system based on multi-scale entropy
CN111027391A (en) * 2019-11-12 2020-04-17 湖南大学 Fatigue state identification method based on CNN pyramid characteristics and LSTM
CN111127117A (en) * 2019-12-31 2020-05-08 上海能塔智能科技有限公司 Vehicle operation and use satisfaction identification processing method and device and electronic equipment
CN111626186A (en) * 2020-05-25 2020-09-04 宁波大学 Driver distraction detection method
CN111991012A (en) * 2020-09-04 2020-11-27 北京中科心研科技有限公司 Method and device for monitoring driving road rage state
CN112043252A (en) * 2020-10-10 2020-12-08 山东大学 Emotion recognition system and method based on respiratory component in pulse signal
CN112699774A (en) * 2020-12-28 2021-04-23 深延科技(北京)有限公司 Method and device for recognizing emotion of person in video, computer equipment and medium
CN112712022A (en) * 2020-12-29 2021-04-27 华南理工大学 Pressure detection method, system and device based on image recognition and storage medium
CN113191283A (en) * 2021-05-08 2021-07-30 河北工业大学 Driving path decision method based on emotion change of on-road travelers
CN113191212A (en) * 2021-04-12 2021-07-30 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Driver road rage risk early warning method and system
CN114312997A (en) * 2021-12-09 2022-04-12 科大讯飞股份有限公司 Vehicle steering control method, device and system and storage medium
CN112699774B (en) * 2020-12-28 2024-05-24 深延科技(北京)有限公司 Emotion recognition method and device for characters in video, computer equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1768701A (en) * 2005-07-21 2006-05-10 高春平 Integrated intelligent type physiological signal sensor
CN101699470A (en) * 2009-10-30 2010-04-28 华南理工大学 Extracting method for smiling face identification on picture of human face
DE102013018663B4 (en) * 2013-11-07 2017-05-24 Dräger Safety AG & Co. KGaA Device and a method for measuring an alcohol or Rauschmittelanteils in the breath of a driver
CN107235045A (en) * 2017-06-29 2017-10-10 吉林大学 Consider physiology and the vehicle-mounted identification interactive system of driver road anger state of manipulation information
CN108053615A (en) * 2018-01-10 2018-05-18 山东大学 Driver tired driving condition detection method based on micro- expression
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information
CN108574701A (en) * 2017-03-08 2018-09-25 理查德.A.罗思柴尔德 System and method for determining User Status
CN109498041A (en) * 2019-01-15 2019-03-22 吉林大学 Driver road anger state identification method based on brain electricity and pulse information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1768701A (en) * 2005-07-21 2006-05-10 高春平 Integrated intelligent type physiological signal sensor
CN101699470A (en) * 2009-10-30 2010-04-28 华南理工大学 Extracting method for smiling face identification on picture of human face
DE102013018663B4 (en) * 2013-11-07 2017-05-24 Dräger Safety AG & Co. KGaA Device and a method for measuring an alcohol or Rauschmittelanteils in the breath of a driver
CN108574701A (en) * 2017-03-08 2018-09-25 理查德.A.罗思柴尔德 System and method for determining User Status
CN107235045A (en) * 2017-06-29 2017-10-10 吉林大学 Consider physiology and the vehicle-mounted identification interactive system of driver road anger state of manipulation information
CN108053615A (en) * 2018-01-10 2018-05-18 山东大学 Driver tired driving condition detection method based on micro- expression
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information
CN109498041A (en) * 2019-01-15 2019-03-22 吉林大学 Driver road anger state identification method based on brain electricity and pulse information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHRISTOS D. KATSIS ET AL.: "Toward Emotion Recognition in Car-Racing Drivers: A Biosignal Processing Approach", IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans *
WAN PING: "Research on Driving Anger Recognition Methods Based on Information Fusion", China Doctoral Dissertations Full-text Database, Engineering Science and Technology I *
YU SHENHAO: "Research on Road Rage Emotion Recognition Based on Deep Learning and Information Fusion", China Master's Theses Full-text Database, Information Science and Technology *
ZHANG MINGLONG ET AL.: "An Overview of Innovation Information in the UK", Enterprise Management Publishing House, 31 August 2015 *
LI FEI: "Research on Emotion Recognition Based on the Cardiopulmonary System", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110693508A (en) * 2019-09-02 2020-01-17 中国航天员科研训练中心 Multi-channel cooperative psychophysiological active sensing method and service robot
CN110751015A (en) * 2019-09-02 2020-02-04 合肥工业大学 Perfusion optimization and artificial intelligence emotion monitoring method for facial infrared heat map
CN110781719A (en) * 2019-09-02 2020-02-11 中国航天员科研训练中心 Non-contact and contact cooperative mental state intelligent monitoring system
CN110751015B (en) * 2019-09-02 2023-04-11 合肥工业大学 Perfusion optimization and artificial intelligence emotion monitoring method for facial infrared heat map
CN110751381A (en) * 2019-09-30 2020-02-04 东南大学 Road rage vehicle risk assessment and prevention and control method
CN111027391A (en) * 2019-11-12 2020-04-17 湖南大学 Fatigue state identification method based on CNN pyramid characteristics and LSTM
CN110991428A (en) * 2019-12-30 2020-04-10 山东大学 Breathing signal emotion recognition method and system based on multi-scale entropy
CN111127117A (en) * 2019-12-31 2020-05-08 上海能塔智能科技有限公司 Vehicle operation and use satisfaction identification processing method and device and electronic equipment
CN111626186A (en) * 2020-05-25 2020-09-04 宁波大学 Driver distraction detection method
CN111991012A (en) * 2020-09-04 2020-11-27 北京中科心研科技有限公司 Method and device for monitoring driving road rage state
CN111991012B (en) * 2020-09-04 2022-12-06 北京中科心研科技有限公司 Method and device for monitoring driving road rage state
CN112043252A (en) * 2020-10-10 2020-12-08 山东大学 Emotion recognition system and method based on respiratory component in pulse signal
CN112043252B (en) * 2020-10-10 2021-09-28 山东大学 Emotion recognition system and method based on respiratory component in pulse signal
CN112699774A (en) * 2020-12-28 2021-04-23 深延科技(北京)有限公司 Method and device for recognizing emotion of person in video, computer equipment and medium
CN112699774B (en) * 2020-12-28 2024-05-24 深延科技(北京)有限公司 Emotion recognition method and device for characters in video, computer equipment and medium
CN112712022A (en) * 2020-12-29 2021-04-27 华南理工大学 Pressure detection method, system and device based on image recognition and storage medium
CN113191212A (en) * 2021-04-12 2021-07-30 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Driver road rage risk early warning method and system
CN113191283B (en) * 2021-05-08 2022-09-23 河北工业大学 Driving path decision method based on emotion change of on-road travelers
CN113191283A (en) * 2021-05-08 2021-07-30 河北工业大学 Driving path decision method based on emotion change of on-road travelers
CN114312997A (en) * 2021-12-09 2022-04-12 科大讯飞股份有限公司 Vehicle steering control method, device and system and storage medium

Also Published As

Publication number Publication date
CN109993093B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN109993093A (en) Road anger monitoring method, system, equipment and medium based on face and respiratory characteristic
US20230333635A1 (en) Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system
Kong et al. A system of driving fatigue detection based on machine vision and its application on smart device
Zhao et al. Intelligent recognition of fatigue and sleepiness based on inceptionV3-LSTM via multi-feature fusion
CN111329474A (en) Electroencephalogram identity recognition method and system based on deep learning and information updating method
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN107798318A (en) The method and its device of a kind of happy micro- expression of robot identification face
CN107729882A (en) Emotion identification decision method based on image recognition
Zhang et al. Detecting negative emotional stress based on facial expression in real time
Du et al. A multimodal fusion fatigue driving detection method based on heart rate and PERCLOS
Prasetio et al. The facial stress recognition based on multi-histogram features and convolutional neural network
Schels et al. Using unlabeled data to improve classification of emotional states in human computer interaction
WO2019104008A1 (en) Interactive electronic content delivery in coordination with rapid decoding of brain activity
CN106991409A (en) A kind of Mental imagery EEG feature extraction and categorizing system and method
CN114565957A (en) Consciousness assessment method and system based on micro expression recognition
CN104679967B (en) A kind of method for judging psychological test reliability
Zhao et al. Deep convolutional neural network for drowsy student state detection
Ukwuoma et al. Deep learning review on drivers drowsiness detection
Arasu et al. Human Stress Recognition from Facial Thermal-Based Signature: A Literature Survey.
Gatea et al. Deep learning neural network for driver drowsiness detection using eyes recognition
CN108960023A (en) A kind of portable Emotion identification device
Zatarain-Cabada et al. Building a corpus and a local binary pattern recognizer for learning-centered emotions
TWI646438B (en) Emotion detection system and method
Yashaswini et al. Stress detection using deep learning and IoT
Vasavi et al. Regression modelling for stress detection in humans by assessing most prominent thermal signature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant