CN112699802A - Driver micro-expression detection device and method - Google Patents

Driver micro-expression detection device and method

Info

Publication number
CN112699802A
CN112699802A (application CN202011629369.0A)
Authority
CN
China
Prior art keywords: driver, information, micro, face, expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011629369.0A
Other languages
Chinese (zh)
Inventor
崔里宁 (Cui Lining)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haishan Huigu Technology Co ltd
Original Assignee
Qingdao Haishan Huigu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haishan Huigu Technology Co ltd filed Critical Qingdao Haishan Huigu Technology Co ltd
Priority to CN202011629369.0A
Publication of CN112699802A
Legal status: Pending

Classifications

    • G06V 40/176 — Dynamic expression (recognition of human faces)
    • G06F 18/24 — Classification techniques (pattern recognition)
    • G06T 7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/269 — Analysis of motion using gradient-based methods
    • G06V 10/50 — Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 20/597 — Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V 40/166 — Detection; localisation; normalisation using acquisition arrangements
    • G06V 40/168 — Feature extraction; face representation
    • G06V 40/45 — Detection of the body part being alive (spoof/liveness detection)
    • G08B 21/02 — Alarms for ensuring the safety of persons
    • G08B 21/06 — Alarms indicating a condition of sleep, e.g. anti-dozing alarms
    • G10L 25/51 — Speech or voice analysis techniques specially adapted for comparison or discrimination
    • H04N 23/57 — Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30201 — Face (subject of image)

Abstract

The invention relates to the technical field of driving safety protection, and in particular to a driver micro-expression detection device and method. The device comprises a housing; a high-speed infrared camera is fixedly embedded in the middle of the front end of the housing, a PCB is arranged inside the housing, and a camera module, a recording IC chip, a loudspeaker, a processor, a memory and a wireless communication module are arranged in an orderly fashion on the PCB. The design can accurately detect changes in the driver's micro-expressions and judge the driver's mental state; when the driver exhibits a mental state or behaviour unfavourable to safe driving, the device issues a timely voice prompt to intervene, corrects the dangerous driving state, reduces traffic accidents caused by inattention, and safeguards the safety of the driver and of pedestrians.

Description

Driver micro-expression detection device and method
Technical Field
The invention relates to the technical field of driving safety protection, in particular to a driver micro-expression detection device and method.
Background
With continued social and economic progress, people's living standards and travel demands are rising steadily, and the number of motor vehicles on the road increases year by year. Road traffic safety problems in China have accordingly become increasingly prominent: traffic accidents occur frequently and create hidden dangers for public safety. Among them, accidents caused by distraction, emotional abnormality and fatigued driving account for 20% of the total. In an inattentive state, a driver's response to sudden road conditions slows, reaction times lengthen, over-reaction and erroneous reaction become likely, and control of the vehicle may even be lost briefly. The human face conveys information, and facial micro-expressions in particular can accurately reflect a person's mental state, such as fatigue, emotional agitation or distraction. However, no mature device or method yet exists that determines the driver's driving state by detecting the driver's micro-expressions in order to correct dangerous driving behaviour.
Disclosure of Invention
The present invention is directed to a driver micro-expression detection device and method that solve the above problems.
To solve the above technical problems, one object of the present invention is to provide a driver micro-expression detection device, which comprises a housing; a high-speed infrared camera is fixedly embedded in the middle of the front end of the housing, a PCB is arranged inside the housing, and a camera module, a recording IC chip, a speaker, a processor, a memory and a wireless communication module are arranged in an orderly fashion on the PCB.
As a further improvement of this technical solution, a sound receiving hole and a sound playing port are provided on the two sides of the high-speed infrared camera respectively; the high-speed infrared camera corresponds to the camera module, the sound receiving hole corresponds to the recording IC chip, and the sound playing port corresponds to the speaker.
Another object of the present invention is to provide a driver micro-expression detection method, comprising the following steps:
S1, verifying the identity of the driver through AI face recognition;
S2, collecting the driver's action information, including dynamic video and sound;
S3, processing the collected image information, and identifying and judging the driver's facial micro-expressions;
S4, processing the collected audio information;
S5, judging the driver's driving state by combining the driver's micro-expressions and audio information from the same moment;
S6, reporting the driver's state information, and intervening by voice in driver behaviour that is unfavourable to safe driving.
As a further improvement of the present technical solution, in S1, the method for authenticating includes the following steps:
s1.1, acquiring an image with a human face from video information acquired by a camera;
s1.2, detecting the face part, and carrying out face alignment and living body detection processing;
s1.3, extracting the face features, and comparing the face features with the face features stored in a face database in advance;
and S1.4, outputting a face recognition result and confirming the identity of the driver.
As a further improvement of this technical solution, before the driver's action information is collected in S2, a spatial coordinate system for the face is established using a fixed position in the vehicle as the reference, giving the position of the face in three-dimensional space; feature points of different facial expressions, together with their positions in three-dimensional space, are collected in advance and input into a convolutional neural network model to build a model library;
in S2, the method for collecting information includes the following steps:
s2.1, acquiring a facial expression change video of the driver through a high-speed infrared camera;
S2.2, recording the driver's sound information through the recording IC chip;
and S2.3, simultaneously transmitting the acquired information to a processor and respectively storing the acquired information.
As a further improvement of the present invention, in S3, the method for processing image information includes the steps of:
s3.1, obtaining a static image of each frame, and screening out fuzzy image information;
s3.2, converting the facial expression image of the driver into a gray level image, and performing histogram equalization operation on the gray level image;
s3.3, positioning the face area of each frame by using a Viola-Jones face detector, and calculating a group of initial feature points by extracting a response block and low-dimensional projection; directly inputting the initial characteristic points into a model base to obtain a three-dimensional coordinate position Y of the initial characteristic points in a pre-established space coordinate system;
then, the initial characteristic point pixels of the image are strengthened according to the following formula:
(The source presents this formula only as an image; it defines the enhanced image Qi in terms of the original image Xi, the spatial mapping Y(Xi), and the initial-feature-point pixel matrix x11, …, xnn.)
Qi represents the enhanced image, Xi represents the original image, Y(Xi) represents the mapping of the planar position coordinates of a pixel in the original image to its coordinate position in three-dimensional space, and x11, …, xnn represent the pixel sizes of the initial feature points;
A difference Y − Y(Xi) is then calculated for the spatial position coordinates of each feature point; feature points whose difference exceeds a threshold are removed, and the positions are corrected;
S3.4, accurately positioning 68 key feature points of the face region by applying the DRMF (Discriminative Response Map Fitting) method to the screened feature points, and simultaneously partitioning the face region;
S3.5, preprocessing the image sequence, decomposing each image into a structure part and a texture part, and calculating the optical flow field of the texture part;
s3.6, deducing the motion change of the facial expression of the driver by detecting the constantly changing pixel intensity between two image frames by adopting an optical flow field calculation mode;
s3.7, correcting the rotation and conversion actions of the face region in the image sequence, and calculating the HOOF characteristics in each block;
and S3.8, carrying out standardization processing on the calculation result to obtain the normalized micro expression video clip MDMO characteristic vector, and realizing detection and identification of the micro expression.
As a further improvement of the technical solution, in S3.6, a calculation formula of the optical flow field is as follows:
I(x,y,t)=I(x+Δx,y+Δy,t+Δt);
where (x, y, t) is the position of a pixel, I(x, y, t) is the intensity of that pixel, and Δx, Δy and Δt are the displacements of the pixel at (x, y, t) between the two frames.
As a further improvement of the present technical solution, in S4, the method for processing audio information includes the following steps:
s4.1, extracting characteristic states in the audio, and dividing the audio information into two types of voice information and other sound information;
s4.2, testing the decibel of each piece of audio information;
s4.3, detecting the speed of speech of the speech information;
s4.4, carrying out voice recognition on the voice information, extracting voice keywords, and comparing the voice keywords with sensitive words prestored in a database;
and S4.5, comparing the characteristics of other sound information with the sound types pre-stored in the database, and judging the sound types.
As a further improvement of the present invention, in S5, the method for determining the driving state of the driver includes:
S5.1, judging whether the driver is distracted or communicating with other people in the vehicle from the driver's overall facial expression, in particular the orientation of the face or gaze, combined with the sound information;
S5.2, judging whether the driver is yawning or in a fatigued driving state from the driver's overall facial expression, in particular the eye opening, lip opening or blinking frequency, together with the presence of yawning sounds;
S5.3, judging whether the driver is laughing or crying from the driver's overall facial expression, in particular the inclination of the mouth corners and eyebrows, combined with the sound information;
S5.4, judging whether the driver is making or receiving a phone call from the driver's overall facial expression, in particular the degree of lip opening and the frequency of lip movement, combined with the voice information.
As a further improvement of the present technical solution, in S6, the method for intervening the driver behavior includes the following steps:
s6.1, when detecting that the driver has behaviors influencing safe driving or other overstimulations, sending corresponding voice information to remind the driver of safe driving;
s6.2, when the driver is detected to have a shallow fatigue state, sending corresponding voice information for reminding;
s6.3, detecting whether the frequency of the shallow fatigue state of the driver exceeds a set threshold value within a set period, if so, judging that the driver enters a deep fatigue state and needing to warn;
s6.4, after each voice prompt is sent, the driver needs to respond to the prompt information through voice, and after the system identifies the fed-back voice information, the prompt voice is stopped to be sent;
s6.5, if the system does not receive the voice information of the driver response prompt within a certain time, continuously sending out prompt information, and reporting the driver and vehicle information to a dispatching center through a wireless communication technology;
and S6.6, after the dispatching center receives the report, intervening and correcting the abnormal behavior of the driver in other modes.
The present invention also provides a device for the driver micro-expression detection method, comprising a processor, a memory, and a computer program stored in the memory and run on the processor; when executing the computer program, the processor implements the steps of any of the driver micro-expression detection methods described above.
A fourth object of the present invention is to provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the driver micro-expression detection methods described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The driver micro-expression detection device can collect video and audio information of the driver's facial area in a timely manner, and can judge and identify whether the driver exhibits a mental state or behaviour unfavourable to safe driving, so as to give an early warning;
2. The driver micro-expression detection method can accurately detect changes in the driver's micro-expressions and, combined with sound information, accurately judge the driver's mental state; when the driver is fatigued, emotionally agitated, distracted or in a similar state, a voice prompt can be issued in time to intervene, correcting the dangerous driving state, reducing traffic accidents caused by inattention, and safeguarding the safety of the driver and of pedestrians.
Drawings
FIG. 1 is a schematic view of the overall structure of the apparatus of the present invention;
FIG. 2 is a schematic view of a partial half-section of the apparatus of the present invention;
FIG. 3 is a schematic structural view of a PCB board according to the present invention;
FIG. 4 is an overall flow chart of the method of the present invention;
FIG. 5 is a first partial flow chart of the method of the present invention;
FIG. 6 is a second partial flow chart of the method of the present invention;
FIG. 7 is a third partial flow chart of the method of the present invention;
FIG. 8 is a fourth partial flow chart of the method of the present invention;
FIG. 9 is a fifth partial flow chart of the method of the present invention;
FIG. 10 is a sixth partial flow chart of the method of the present invention;
fig. 11 is an exemplary architecture diagram of a system apparatus of the present invention.
In the figure:
1. a housing; 11. a sound receiving hole; 12. a sound playing port;
2. a high-speed infrared camera;
3. a PCB board; 31. a camera module; 32. recording an ic chip; 33. a speaker; 34. a processor; 35. a memory; 36. a wireless communication module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Product examples
As shown in figs. 1-3, an object of this embodiment is to provide a driver micro-expression detection device, which comprises a housing 1; a high-speed infrared camera 2 is fixedly embedded in the middle of the front end of the housing 1, a PCB 3 is disposed in the housing 1, and a camera module 31, a recording IC chip 32, a speaker 33, a processor 34, a memory 35 and a wireless communication module 36 are arranged in an orderly fashion on the PCB 3.
In this embodiment, a sound receiving hole 11 and a sound playing port 12 are provided on the two sides of the high-speed infrared camera 2 respectively; the high-speed infrared camera 2 corresponds to the camera module 31, the sound receiving hole 11 corresponds to the recording IC chip 32, and the sound playing port 12 corresponds to the speaker 33.
Specifically, the camera module 31 is connected with the high-speed infrared camera 2 to facilitate image capture; the recording IC chip 32 collects the driver's voice information through the sound receiving hole 11, and the speaker 33 plays prompt voices pre-stored in the database through the sound playing port 12.
Method embodiment
As shown in fig. 4 to 10, the present embodiment aims to provide a driver micro-expression detection method, which includes the following steps:
s1, verifying the identity of the driver through an AI face recognition technology;
s2, collecting action information of a driver, including dynamic video and sound;
S3, processing the collected image information, and identifying and judging the driver's facial micro-expressions;
s4, processing the collected audio information;
s5, judging the driving state of the driver by combining the micro-expression and the audio information of the driver at the same time;
and S6, reporting the state information of the driver, and intervening the behavior of the driver which is not beneficial to safe driving through voice.
In this embodiment, in S1, the method for authenticating includes the following steps:
s1.1, acquiring an image with a human face from video information acquired by a camera;
s1.2, detecting the face part, and carrying out face alignment and living body detection processing;
s1.3, extracting the face features, and comparing the face features with the face features stored in a face database in advance;
and S1.4, outputting a face recognition result and confirming the identity of the driver.
If the vehicle is a private car, the facial information of the vehicle's owner and of the owner's family members can be entered into the face database in advance; if the vehicle is a commercial vehicle, the face database can hold in advance the facial information of everyone in the operating unit who is qualified to drive. This face recognition step safeguards the vehicle and reduces the risk of it being stolen or driven away by unauthorized persons.
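The comparison step of S1.3 can be sketched as a nearest-neighbour check of a probe feature vector against pre-enrolled vectors. A minimal sketch follows; the vector length, driver IDs and distance threshold are illustrative assumptions, not values from the patent (real systems typically compare 128-dimensional embeddings):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify_driver(probe, database, threshold=0.6):
    """Compare a probe face feature vector against enrolled vectors.

    Returns the matching driver ID, or None if no enrolled vector is
    within the distance threshold (threshold value is an assumption).
    """
    best_id, best_dist = None, float("inf")
    for driver_id, enrolled in database.items():
        d = euclidean(probe, enrolled)
        if d < best_dist:
            best_id, best_dist = driver_id, d
    return best_id if best_dist <= threshold else None

# Hypothetical enrolled feature vectors (shortened to 3-D for readability)
db = {"driver_A": [0.1, 0.2, 0.3], "driver_B": [0.9, 0.8, 0.7]}
print(verify_driver([0.12, 0.21, 0.29], db))              # close to driver_A
print(verify_driver([0.5, 0.5, 0.5], db, threshold=0.1))  # no enrolled match
```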
Before the driver's action information is collected in S2, a spatial coordinate system for the face is first established using a fixed position in the vehicle as the reference, giving the position of the face in three-dimensional space; feature points of different facial expressions, together with their positions in three-dimensional space, are collected in advance and input into a convolutional neural network model to build a model library. Different facial expressions correspond to different feature-point positions, so inputting a group of feature-point positions into the model yields a rough estimate of the facial expression.
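The model-library lookup described above — a group of 3-D feature-point positions mapped to a rough expression label — can be sketched as a nearest-stored-set search. The stored expressions, point counts and coordinates below are invented for illustration (the patent uses a convolutional neural network rather than this plain distance rule):

```python
def classify_expression(points, model_library):
    """Return the expression whose stored feature points are closest
    (summed squared 3-D distance) to the observed points."""
    def cost(stored):
        return sum((px - sx) ** 2 + (py - sy) ** 2 + (pz - sz) ** 2
                   for (px, py, pz), (sx, sy, sz) in zip(points, stored))
    return min(model_library, key=lambda expr: cost(model_library[expr]))

# Hypothetical two-point library; real libraries would store 68 points
library = {
    "neutral": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    "smile":   [(0.0, 0.2, 0.0), (1.0, 0.2, 0.0)],  # mouth corners raised
}
print(classify_expression([(0.0, 0.19, 0.0), (1.0, 0.21, 0.0)], library))
```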
In this embodiment, in S2, the method for acquiring information includes the following steps:
s2.1, acquiring a facial expression change video of the driver through a high-speed infrared camera;
S2.2, recording the driver's sound information through the recording IC chip;
and S2.3, simultaneously transmitting the acquired information to a processor and respectively storing the acquired information.
In this embodiment, in S3, the method for processing image information includes the following steps:
s3.1, obtaining a static image of each frame, and screening out fuzzy image information;
s3.2, converting the facial expression image of the driver into a gray level image, and performing histogram equalization operation on the gray level image;
s3.3, positioning the face area of each frame by using a Viola-Jones face detector, and calculating a group of initial feature points by extracting a response block and low-dimensional projection; directly inputting the initial characteristic points into a model base to obtain a three-dimensional coordinate position Y of the initial characteristic points in a pre-established space coordinate system;
then, the initial characteristic point pixels of the image are strengthened according to the following formula:
(The source presents this formula only as an image; it defines the enhanced image Qi in terms of the original image Xi, the spatial mapping Y(Xi), and the initial-feature-point pixel matrix x11, …, xnn.)
Qi represents the enhanced image, Xi represents the original image, Y(Xi) represents the mapping of the planar position coordinates of a pixel in the original image to its coordinate position in three-dimensional space, and x11, …, xnn represent the pixel sizes of the initial feature points. Multiplying the pixel intensities of the feature points by this matrix increases the resolution and reduces the error of facial expression detection. The enhanced image feature points are input into the model again to obtain more accurate facial expression information.
A difference Y − Y(Xi) is then calculated for the spatial position coordinates of each feature point; feature points whose difference exceeds a threshold are removed, and the positions are corrected;
S3.4, accurately positioning 68 key feature points of the face region by applying the DRMF (Discriminative Response Map Fitting) method to the screened feature points, and simultaneously partitioning the face region;
S3.5, preprocessing the image sequence, decomposing each image into a structure part and a texture part, and calculating the optical flow field of the texture part;
s3.6, deducing the motion change of the facial expression of the driver by detecting the constantly changing pixel intensity between two image frames by adopting an optical flow field calculation mode;
s3.7, correcting the rotation and conversion actions of the face region in the image sequence, and calculating the HOOF characteristics in each block;
and S3.8, carrying out standardization processing on the calculation result to obtain the normalized micro expression video clip MDMO characteristic vector, and realizing detection and identification of the micro expression.
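As one concrete illustration of the preprocessing in S3.2, the histogram equalization of a grayscale image can be sketched in pure Python via the standard CDF remapping. The sample intensities are arbitrary; a real implementation would operate on full camera frames through an image library:

```python
def equalize_histogram(gray, levels=256):
    """Histogram-equalize a grayscale image given as a flat list of
    integer intensities in [0, levels). Standard CDF remapping."""
    n = len(gray)
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    # Cumulative distribution function over intensity levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to spread
        return list(gray)
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in gray]

# A dark, low-contrast strip is stretched across the full 0-255 range
print(equalize_histogram([52, 55, 61, 59, 79, 61, 76, 61]))
```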
Specifically, in S3.6, the calculation formula of the optical flow field is:
I(x,y,t)=I(x+Δx,y+Δy,t+Δt);
where (x, y, t) is the position of a pixel, I(x, y, t) is the intensity of that pixel, and Δx, Δy and Δt are the displacements of the pixel at (x, y, t) between the two frames.
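The brightness-constancy relation above says a pixel keeps its intensity as it moves between frames. A minimal way to see it in action is an exhaustive search for the shift (Δx, Δy) that minimizes the intensity mismatch between two tiny frames — a block-matching stand-in for the gradient-based optical flow the patent actually uses, kept deliberately small:

```python
def estimate_flow(frame_t, frame_t1, max_shift=2):
    """Brute-force search for the (dx, dy) shift that best satisfies
    I(x, y, t) = I(x + dx, y + dy, t + dt) over the overlapping region.
    Frames are lists of rows of intensities; returns the best (dx, dy)."""
    h, w = len(frame_t), len(frame_t[0])
    best = (0, 0, float("inf"))
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = cnt = 0
            for y in range(h):
                for x in range(w):
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        err += (frame_t[y][x] - frame_t1[y + dy][x + dx]) ** 2
                        cnt += 1
            mean = err / cnt  # mean squared intensity mismatch
            if mean < best[2]:
                best = (dx, dy, mean)
    return best[:2]

# A textured patch moves one pixel to the right between frames
f0 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
f1 = [[0, 1, 2], [0, 4, 5], [0, 7, 8]]
print(estimate_flow(f0, f1))  # (1, 0)
```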
In this embodiment, in S4, the method for processing audio information includes the following steps:
s4.1, extracting characteristic states in the audio, and dividing the audio information into two types of voice information and other sound information;
s4.2, testing the decibel of each piece of audio information;
s4.3, detecting the speed of speech of the speech information;
s4.4, carrying out voice recognition on the voice information, extracting voice keywords, and comparing the voice keywords with sensitive words prestored in a database;
and S4.5, comparing the characteristics of other sound information with the sound types pre-stored in the database, and judging the sound types.
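The decibel test of S4.2 and the speech-rate check of S4.3 reduce to simple arithmetic once audio samples and word counts are available upstream. A sketch, assuming samples normalized to [−1, 1] and word segmentation done by the speech recognizer (both assumptions, not details from the patent):

```python
import math

def rms_dbfs(samples):
    """Decibel level of a block of normalized samples in [-1, 1],
    relative to full scale (dBFS). 0 dBFS = maximum amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return -float("inf") if rms == 0 else 20 * math.log10(rms)

def words_per_minute(word_count, duration_s):
    """Crude speech-rate estimate from a recognized word count."""
    return word_count / duration_s * 60

# A full-scale square wave sits at 0 dBFS; half amplitude is about -6 dB
print(round(rms_dbfs([1.0, -1.0, 1.0, -1.0]), 2))   # 0.0
print(round(rms_dbfs([0.5, -0.5, 0.5, -0.5]), 2))   # -6.02
print(words_per_minute(40, 12))                     # 200.0
```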
In S4.5, the sound types include sobbing, wheezing, heavy breathing, laughing, crying, lip-smacking, screaming, and the like.
In this embodiment, in S5, the method for determining the driver's driving state includes the following steps:
S5.1, judging whether the driver is distracted or communicating with other people in the vehicle from the driver's overall facial expression, in particular the orientation of the face or gaze, combined with the sound information;
S5.2, judging whether the driver is yawning or in a fatigued driving state from the driver's overall facial expression, in particular the eye opening, lip opening or blinking frequency, together with the presence of yawning sounds;
S5.3, judging whether the driver is laughing or crying from the driver's overall facial expression, in particular the inclination of the mouth corners and eyebrows, combined with the sound information;
S5.4, judging whether the driver is making or receiving a phone call from the driver's overall facial expression, in particular the degree of lip opening and the frequency of lip movement, combined with the voice information.
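The fusion rules of S5.1-S5.4 can be sketched as a small rule table over face and audio cues. Every field name and threshold below is an illustrative assumption, and the laughing/crying case (S5.3) is omitted for brevity:

```python
def judge_state(face, audio):
    """Fuse micro-expression cues with audio cues, following S5.1-S5.4.
    Cue names and thresholds are illustrative assumptions."""
    if face["gaze_off_road_s"] > 2 and audio["speech_detected"]:
        return "talking_to_passenger"   # S5.1: gaze turned away + speech
    if face["gaze_off_road_s"] > 2:
        return "distracted"             # S5.1: gaze away, no conversation
    if face["mouth_open_ratio"] > 0.7 and audio.get("yawn_sound", False):
        return "yawning"                # S5.2: wide-open mouth + yawn sound
    if face["eye_open_ratio"] < 0.3 or face["blink_per_min"] > 30:
        return "fatigued"               # S5.2: drooping eyes or rapid blinking
    if face["lip_motion_hz"] > 1.5 and audio["speech_detected"]:
        return "on_phone"               # S5.4: sustained lip movement + speech
    return "normal"

face = {"gaze_off_road_s": 0, "mouth_open_ratio": 0.1,
        "eye_open_ratio": 0.2, "blink_per_min": 12, "lip_motion_hz": 0.2}
audio = {"speech_detected": False}
print(judge_state(face, audio))  # eyes nearly closed -> "fatigued"
```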
In addition, driver behaviours affecting safe driving also include excessive laughing, crying, emotional agitation, screaming, physical discomfort, glancing from side to side, and the like; the processor must make a detailed classification and judgement by combining micro-expression changes with the sound information in order to issue an accurate safety prompt.
In this embodiment, in S6, the method for intervening in the behavior of the driver includes the following steps:
S6.1, when it is detected that the driver exhibits behavior affecting safe driving or is otherwise over-excited, sending corresponding voice information to remind the driver to drive safely;
S6.2, when a shallow fatigue state of the driver is detected, sending corresponding voice information as a reminder;
S6.3, detecting whether the number of shallow fatigue states of the driver within a set period exceeds a set threshold; if so, determining that the driver has entered a deep fatigue state and issuing a warning;
S6.4, after each voice prompt is issued, the driver needs to respond to the prompt information by voice; after the system recognizes the fed-back voice information, the prompt voice is stopped;
S6.5, if the system does not receive a voice response from the driver within a certain time, continuing to issue the prompt information and reporting the driver and vehicle information to a dispatching center by wireless communication;
S6.6, after receiving the report, the dispatching center intervenes in and corrects the abnormal behavior of the driver by other means.
The dispatching center may be a nearby traffic management department or a centralized control center for the product.
Other intervention means include, but are not limited to: broadcasting, remote calls, inspection by nearby traffic police, and the like.
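The escalation policy of S6.1 to S6.6 can be sketched as a small controller. The event threshold, sliding-window length, and acknowledgement timeout below are placeholders; the patent leaves these values implementation-defined.

```python
from collections import deque

# Hedged sketch of the S6 intervention policy; all numeric defaults are assumptions.
class InterventionController:
    def __init__(self, fatigue_threshold=3, window_s=600.0, ack_timeout_s=10.0):
        self.fatigue_threshold = fatigue_threshold  # events before deep fatigue
        self.window_s = window_s                    # sliding-window length (s)
        self.ack_timeout_s = ack_timeout_s          # wait for voice response (s)
        self._fatigue_events = deque()

    def on_shallow_fatigue(self, t: float) -> str:
        """S6.2/S6.3: count shallow-fatigue events within the set period."""
        self._fatigue_events.append(t)
        while self._fatigue_events and t - self._fatigue_events[0] > self.window_s:
            self._fatigue_events.popleft()
        if len(self._fatigue_events) >= self.fatigue_threshold:
            return "warn_deep_fatigue"   # S6.3: escalate to a warning
        return "voice_reminder"          # S6.2: ordinary voice reminder

    def on_prompt_result(self, acked: bool, waited_s: float) -> str:
        """S6.4/S6.5: stop on voice acknowledgement, else report upstream."""
        if acked:
            return "stop_prompt"
        if waited_s >= self.ack_timeout_s:
            return "repeat_prompt_and_report_to_dispatch"
        return "keep_prompting"
```

The dispatch-side actions of S6.6 (broadcast, remote call, traffic-police inspection) happen outside this controller and are not modeled here.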
Electronic product and device embodiments
Referring to fig. 11, a schematic diagram of an apparatus for the driver micro-expression detection method is shown; the apparatus includes a processor, a memory, and a computer program stored in the memory and runnable on the processor.
The processor includes one or more processing cores and is connected to the memory through a bus; the memory is used for storing program instructions, and the driver micro-expression detection method is implemented when the processor executes the program instructions in the memory.
It should be noted that the functions of the individual modules are described in detail in the corresponding parts of the method description above and are not repeated here.
Optionally, the memory may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
In addition, the present invention also provides a computer readable storage medium, which stores a computer program, and the computer program is executed by a processor to implement the steps of the driver micro-expression detection method.
Optionally, the present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the above described aspects of the driver micro-expression detection method.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and description above merely illustrate preferred embodiments and do not limit the invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A driver micro-expression detection device, characterized in that: the device comprises a housing (1); a high-speed infrared camera (2) is fixedly embedded in the middle of the front end of the housing (1); a PCB (3) is arranged in the housing (1); and a camera module (31), a recording IC chip (32), a speaker (33), a processor (34), a memory (35) and a wireless communication module (36) are arranged on the PCB (3).
2. The driver micro-expression detection device according to claim 1, characterized in that: a sound receiving hole (11) and a sound playing port (12) are formed on the two sides of the high-speed infrared camera (2) respectively; the high-speed infrared camera (2) corresponds to the camera module (31), the sound receiving hole (11) corresponds to the recording IC chip (32), and the sound playing port (12) corresponds to the speaker (33).
3. A driver micro-expression detection method is characterized in that: the method comprises the following steps:
S1, verifying the identity of the driver through AI face recognition;
S2, collecting driver action information, including dynamic video and sound;
S3, processing the collected image information, and identifying and judging the driver's facial micro-expressions;
S4, processing the collected audio information;
S5, judging the driving state of the driver by combining the driver's micro-expression and audio information at the same moment;
S6, reporting the driver's state information, and intervening by voice in driver behavior detrimental to safe driving.
4. The driver micro-expression detection method according to claim 3, characterized in that: in S1, the identity verification method includes the following steps:
S1.1, acquiring an image containing a human face from the video information captured by the camera;
S1.2, detecting the face region, and performing face alignment and liveness detection;
S1.3, extracting facial features and comparing them with facial features pre-stored in a face database;
S1.4, outputting the face recognition result and confirming the identity of the driver.
5. The driver micro-expression detection method according to claim 3, characterized in that: before the driver action information is collected in S2, a spatial coordinate system of the face is first established, with a fixed position in the vehicle as the relative reference, to obtain the position of the face in three-dimensional space; feature points of different facial expressions, together with their positions in three-dimensional space, are collected in advance and input into a convolutional neural network model to establish a model base;
in S2, the information collection method includes the following steps:
S2.1, capturing video of the driver's facial expression changes with the high-speed infrared camera;
S2.2, recording the driver's sound information with the recording IC chip;
S2.3, transmitting the collected information to the processor simultaneously and storing each type separately.
6. The driver micro-expression detection method according to claim 5, characterized in that: in S3, the method for processing image information includes the steps of:
S3.1, obtaining a static image of each frame, and screening out blurred images;
S3.2, converting the driver's facial expression image into a grayscale image, and performing histogram equalization on the grayscale image;
S3.3, locating the face region in each frame using a Viola-Jones face detector, and calculating a group of initial feature points by extracting response blocks and low-dimensional projection; the initial feature points are input directly into the model base to obtain their three-dimensional coordinate position Y in the pre-established spatial coordinate system;
the pixels of the initial feature points of the image are then enhanced according to the following formula:
(formula reproduced only as image FDA0002879765940000021 in the original publication)
where Q_i represents the enhanced image, X_i represents the original image, Y(X_i) represents the mapping of the planar position coordinates of a pixel of the original image to its coordinate position in the stereo space, and x_11, …, x_nn represent the pixel sizes of the initial feature points;
a difference Y − Y(X_i) is calculated for the spatial position coordinates of each feature point; feature points whose difference exceeds the threshold are removed, and the positions are corrected;
S3.4, accurately locating 68 key feature points of the face region for the screened feature points using the DRMF (Discriminative Response Map Fitting) method, and simultaneously dividing the face region;
S3.5, preprocessing the image sequence, decomposing each image into a structure part and a texture part, and calculating the optical flow field of the texture part;
S3.6, inferring the motion changes of the driver's facial expression by detecting the continuously changing pixel intensities between two image frames, using the optical flow field calculation;
S3.7, correcting the rotation and translation of the face region across the image sequence, and calculating the HOOF (Histograms of Oriented Optical Flow) features in each block;
S3.8, normalizing the calculation results to obtain the normalized MDMO (Main Directional Mean Optical flow) feature vector of the micro-expression video clip, thereby detecting and recognizing the micro-expression.
7. The driver micro-expression detection method according to claim 6, characterized in that: in S3.6, the calculation formula of the optical flow field is:
I(x,y,t)=I(x+Δx,y+Δy,t+Δt);
wherein, (x, y, t) is the location of one pixel, the pixel variation strength is I (x, y, t), and Δ x, Δ y, Δ t are the moving amounts of the pixel location (x, y, t) between two frames, respectively.
8. The driver micro-expression detection method according to claim 3, characterized in that: in S4, the method for processing audio information includes the steps of:
S4.1, extracting feature states from the audio, and dividing the audio information into two types: voice information and other sound information;
S4.2, measuring the decibel level of each piece of audio information;
S4.3, detecting the speech rate of the voice information;
S4.4, performing voice recognition on the voice information, extracting voice keywords, and comparing them with sensitive words pre-stored in a database;
S4.5, comparing the features of the other sound information with sound types pre-stored in a database to determine the sound type.
9. The driver micro-expression detection method according to claim 3, characterized in that: in S5, the method for determining the driving state of the driver includes:
S5.1, judging whether the driver is in a distracted state or a state of talking with other occupants of the vehicle, based on the driver's overall facial expression, particularly the face orientation or gaze direction, combined with the sound information;
S5.2, judging whether the driver is yawning or in a fatigued driving state, based on the driver's overall facial expression, particularly the degree of eye opening, the degree of lip opening or the blinking frequency, and the presence of yawning;
S5.3, judging whether the driver is in a laughing or crying state, based on the driver's overall facial expression, particularly the tilt directions of the mouth corners and eyebrows, combined with the sound information;
S5.4, judging whether the driver is making or receiving a phone call, based on the driver's overall facial expression, particularly the degree of lip opening and the lip motion frequency, combined with the voice information.
10. The driver micro-expression detection method according to claim 3, characterized in that: in S6, the method for intervening in the driver behavior includes the following steps:
S6.1, when it is detected that the driver exhibits behavior affecting safe driving or is otherwise over-excited, sending corresponding voice information to remind the driver to drive safely;
S6.2, when a shallow fatigue state of the driver is detected, sending corresponding voice information as a reminder;
S6.3, detecting whether the number of shallow fatigue states of the driver within a set period exceeds a set threshold; if so, determining that the driver has entered a deep fatigue state and issuing a warning;
S6.4, after each voice prompt is issued, the driver needs to respond to the prompt information by voice; after the system recognizes the fed-back voice information, the prompt voice is stopped;
S6.5, if the system does not receive a voice response from the driver within a certain time, continuing to issue the prompt information and reporting the driver and vehicle information to a dispatching center by wireless communication;
S6.6, after receiving the report, the dispatching center intervenes in and corrects the abnormal behavior of the driver by other means.
CN202011629369.0A 2020-12-31 2020-12-31 Driver micro-expression detection device and method Pending CN112699802A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011629369.0A CN112699802A (en) 2020-12-31 2020-12-31 Driver micro-expression detection device and method

Publications (1)

Publication Number Publication Date
CN112699802A true CN112699802A (en) 2021-04-23

Family

ID=75513413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011629369.0A Pending CN112699802A (en) 2020-12-31 2020-12-31 Driver micro-expression detection device and method

Country Status (1)

Country Link
CN (1) CN112699802A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105292125A (en) * 2015-10-30 2016-02-03 北京九五智驾信息技术股份有限公司 Driver state monitoring method
CN105760852A (en) * 2016-03-14 2016-07-13 江苏大学 Driver emotion real time identification method fusing facial expressions and voices
CN106657648A (en) * 2016-12-28 2017-05-10 上海斐讯数据通信技术有限公司 Mobile terminal for preventing fatigue driving and realization method thereof
CN107705808A (en) * 2017-11-20 2018-02-16 合光正锦(盘锦)机器人技术有限公司 A kind of Emotion identification method based on facial characteristics and phonetic feature
CN108053615A (en) * 2018-01-10 2018-05-18 山东大学 Driver tired driving condition detection method based on micro- expression
CN108830223A (en) * 2018-06-19 2018-11-16 山东大学 A kind of micro- expression recognition method based on batch mode Active Learning
CN111547063A (en) * 2020-05-12 2020-08-18 武汉艾瓦客机器人有限公司 Intelligent vehicle-mounted emotion interaction device for fatigue detection
CN111605556A (en) * 2020-06-05 2020-09-01 吉林大学 Road rage prevention recognition and control system
CN112016457A (en) * 2020-08-27 2020-12-01 青岛慕容信息科技有限公司 Driver distraction and dangerous driving behavior recognition method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
都伊林 (Du Yilin): "100 Black Technologies Changing the Automobile" (《改变汽车的100个黑科技》), Huazhong University of Science and Technology Press *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113276827A (en) * 2021-05-26 2021-08-20 朱芮叶 Control method and system for electric automobile energy recovery system and automobile
CN113408389A (en) * 2021-06-10 2021-09-17 西华大学 Method for intelligently recognizing drowsiness action of driver
CN114081496A (en) * 2021-11-09 2022-02-25 中国第一汽车股份有限公司 Test system, method, equipment and medium for driver state monitoring device
CN114475620A (en) * 2022-01-26 2022-05-13 南京科融数据系统股份有限公司 Driver verification method and system for money box escort system
CN114475620B (en) * 2022-01-26 2024-03-12 南京科融数据系统股份有限公司 Driver verification method and system for money box escort system

Similar Documents

Publication Publication Date Title
CN112699802A (en) Driver micro-expression detection device and method
JP6933668B2 (en) Driving condition monitoring methods and devices, driver monitoring systems, and vehicles
CN108791299B (en) Driving fatigue detection and early warning system and method based on vision
Mbouna et al. Visual analysis of eye state and head pose for driver alertness monitoring
CN107704805A (en) method for detecting fatigue driving, drive recorder and storage device
US9662977B2 (en) Driver state monitoring system
US9928404B2 (en) Determination device, determination method, and non-transitory storage medium
CN112016457A (en) Driver distraction and dangerous driving behavior recognition method, device and storage medium
WO2010103584A1 (en) Device for detecting entry and/or exit monitoring device, and method for detecting entry and/or exit
CN110728218A (en) Dangerous driving behavior early warning method and device, electronic equipment and storage medium
CN113752983B (en) Vehicle unlocking control system and method based on face recognition/eye recognition
CN110826370A (en) Method and device for identifying identity of person in vehicle, vehicle and storage medium
CN112754498B (en) Driver fatigue detection method, device, equipment and storage medium
Lashkov et al. Driver dangerous state detection based on OpenCV & dlib libraries using mobile video processing
CN113076856A (en) Bus safety guarantee system based on face recognition
CN113239754A (en) Dangerous driving behavior detection and positioning method and system applied to Internet of vehicles
CN111553214B (en) Method and system for detecting smoking behavior of driver
CN108108651B (en) Method and system for detecting driver non-attentive driving based on video face analysis
KR102051136B1 (en) Artificial intelligence dashboard robot base on cloud server for recognizing states of a user
CN114170585B (en) Dangerous driving behavior recognition method and device, electronic equipment and storage medium
CN106874831A (en) Driving behavior method for detecting and its system
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection
CN112633387A (en) Safety reminding method, device, equipment, system and storage medium
CN111325058B (en) Driving behavior detection method, device, system and storage medium
CN113312958B (en) Method and device for adjusting dispatch priority based on driver state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210423