CN108764169A - Machine-learning-based driver emotion recognition and display device and method - Google Patents

Machine-learning-based driver emotion recognition and display device and method

Info

Publication number
CN108764169A
Authority
CN
China
Prior art keywords
face
model
driver
emotion identification
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810544319.9A
Other languages
Chinese (zh)
Inventor
王宁
曾建平
陈明明
曾涛
陈育智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University
Priority to CN201810544319.9A
Publication of CN108764169A
Legal status: Pending (current)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a machine-learning-based driver emotion recognition and display device and method. The device comprises a face detection system, an emotion recognition system, and an emotion display system. The face detection system is connected to the emotion recognition system; it acquires image information of the driver's face, processes the acquired image information to obtain a face image model, and sends the processed face image model to the emotion recognition system. The emotion recognition system is connected to the emotion display system; it extracts the emotional feature points of the face image model, obtains a face emotion recognition result from the face information through a feature analysis model and an emotion classification model, and sends the face emotion recognition result to the emotion display system. The emotion display system displays the driver's emotion information according to the type of the received face emotion recognition result. This also makes it easier to keep track of road conditions around the driver and to formulate corresponding strategies.

Description

Machine-learning-based driver emotion recognition and display device and method
Technical field
The present invention relates to the fields of image data processing and machine learning, and in particular to a machine-learning-based driver emotion recognition and display device and method.
Background art
With the arrival of the big data era, fields such as artificial intelligence, pattern recognition, and computer vision face enormous challenges, which further highlights the importance of machine learning. At present, drawing on massive face image data, many model-based methods already exist for face detection and expression recognition; applications of expression recognition to traffic prompting, however, remain scarce. In particular, as urban traffic congestion worsens and personal and social pressures rise sharply, drivers often cannot recognize their own emotional state while driving and cannot obtain the emotional information of other drivers in time, so that traffic conflicts occur frequently.
Summary of the invention
To this end, it is necessary to provide a machine-learning-based driver emotion recognition and display device and method, to solve the problem that an existing driver cannot recognize his or her own emotional state in time while driving, which easily leads to frequent traffic conflicts.
To achieve the above object, the inventors provide a machine-learning-based driver emotion recognition and display device comprising a face detection system, an emotion recognition system, and an emotion display system;
the face detection system is connected to the emotion recognition system and is configured to acquire image information of the driver's face, process the acquired image information to obtain a face image model, and send the processed face image model to the emotion recognition system;
the emotion recognition system is connected to the emotion display system and is configured to extract the emotional feature points of the face image model, obtain a face emotion recognition result from the face information through a feature analysis model and an emotion classification model, and send the face emotion recognition result to the emotion display system;
the emotion display system is configured to display the driver's emotion information according to the type of the received face emotion recognition result.
As a further improvement, the face detection system comprises an image acquisition module, a face localization module, and an image preprocessing module;
the image acquisition module is configured to acquire the driver's facial expression information in real time, convert it into face image information, and send the face image information to the face localization module;
the face localization module is configured to detect the face region to be processed in the face image information, remove noise interference, and send the processed face image information to the image preprocessing module;
the image preprocessing module is configured to process the face image information by geometric normalization and grayscale normalization, so as to obtain a face image model with a unified face size and unified face illumination.
As a further improvement, the emotion recognition system comprises a face database, a face emotion feature extraction module, a face emotion feature selection module, and a face emotion feature classification module;
the face database is configured to store the feature analysis model and the emotion classification model for the face image information;
the face emotion feature extraction module is configured to extract feature vectors from the received face image model by Gabor filtering to obtain a feature vector model, and to send the obtained feature vector model to the face emotion feature selection module;
the face emotion feature selection module is configured to reduce the dimensionality of the extracted feature vector model with the PCA algorithm, delete redundant information to obtain the processed data, and send the processed data to the face emotion feature classification module;
the face emotion feature classification module is configured to classify the received data according to the trained feature analysis model and emotion classification model, obtain the name of the category to which the current emotion belongs, and send the category name to the emotion display system.
As a further improvement, the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training images used by the face emotion feature classification module to train the feature analysis model;
the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so as to optimize the feature analysis model and the emotion classification model for the face image information.
As a further improvement, the emotion display system is configured to display the emotion category and the emotion degree on an output display screen according to the face emotion recognition result sent by the emotion recognition system.
The inventors further provide another technical solution: a machine-learning-based driver emotion recognition and display method, comprising the following steps:
acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model;
extracting the emotional feature points of the face image model, and obtaining the face emotion recognition result from the face information through a feature extraction model and an emotion classification model;
displaying the driver's emotion information according to the type of the face emotion recognition result.
As a further improvement, the step of "acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model" specifically comprises:
acquiring the driver's facial expression information in real time and converting it into face image information;
detecting the face region to be processed in the face image information and removing noise interference;
processing the face image information by geometric normalization and grayscale normalization to obtain a face image model with a unified face size and unified face illumination.
As a further improvement, the step of "obtaining the face emotion recognition result from the face information by passing the extracted face image model through the feature extraction model and the emotion classification model in the face database" specifically comprises:
extracting feature vectors from the received face image model by Gabor filtering to obtain a feature vector model;
reducing the dimensionality of the extracted feature vector model with the PCA algorithm, and deleting redundant information to obtain the processed data;
classifying the received data according to the trained feature analysis model and emotion classification model in the face database to obtain the name of the category to which the current emotion belongs;
wherein the face database is configured to store the feature extraction model and the feature classification model for the face image information.
As a further improvement, the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training image models used by the face emotion feature analysis module;
the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so as to optimize the feature extraction model and the emotion classification model for the face image information.
As a further improvement, the step of "displaying the driver's emotion information according to the type of the face emotion recognition result" specifically further comprises:
displaying the driver's emotion category and emotion degree according to the face emotion recognition result.
Different from the prior art, with the above technical solution the device obtains the driver's emotion information and displays the driver's emotion; this can remind the driver at the wheel and also makes it easier for other drivers on the road and for traffic management personnel to assess road conditions and formulate corresponding contingency measures.
Description of the drawings
Fig. 1 is a schematic structural diagram of a machine-learning-based driver emotion recognition and display device according to a specific embodiment;
Fig. 2 is a schematic flowchart of a machine-learning-based driver emotion recognition and display method according to a specific embodiment.
Detailed description of the embodiments
To explain in detail the technical content, structural features, objects, and effects of the technical solution, a detailed description is given below in conjunction with specific embodiments and the accompanying drawings.
Referring to Fig. 1, the machine-learning-based driver emotion recognition and display device described in this embodiment comprises a face detection system, an emotion recognition system, and an emotion display system;
the face detection system is connected to the emotion recognition system and is configured to acquire the driver's face image information, process the acquired face image information to obtain a face image model, and send the processed face image model to the emotion recognition system;
the emotion recognition system is connected to the emotion display system and is configured to extract the emotional feature points of the face image model, obtain a face emotion recognition result from the face information through a feature analysis model and an emotion classification model, and send the face emotion recognition result to the emotion display system;
the emotion display system is configured to display the driver's emotion information according to the type of the received face emotion recognition result.
The face detection system, the emotion recognition system, and the emotion display system are connected in cascade and form an organic whole. The face detection system acquires face image information containing the driver mainly through a data acquisition terminal (such as an in-vehicle front-facing camera), obtains through image processing a face image model containing face information that can be processed further and effectively, and sends the resulting face image model to the emotion recognition system. The emotion recognition system extracts the emotion-related effective emotional feature points from the received face image model by means of a feature extraction model, obtains the recognition result of the face emotion in the face image model through the feature analysis model and the emotion classification model, and then sends this recognition result to the emotion display system. According to the type of the obtained recognition result, the emotion display system can display the driver's emotion information on an in-vehicle display screen.
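As a structural illustration only, the cascade described above can be sketched as three components wired in sequence. This is a minimal sketch; all class and method names below are assumptions introduced for illustration and are not taken from the patent.

from dataclasses import dataclass

@dataclass
class EmotionResult:
    category: str   # e.g. "anger" or "happiness"
    degree: float   # confidence / intensity score used as the "emotion degree"

class FaceDetectionSystem:
    def process(self, frame):
        """Locate the face in a camera frame and return a normalized face image model."""
        raise NotImplementedError

class EmotionRecognitionSystem:
    def recognize(self, face_model) -> EmotionResult:
        """Extract emotional features and classify them with the trained models."""
        raise NotImplementedError

class EmotionDisplaySystem:
    def show(self, result: EmotionResult) -> None:
        # In the device this would drive the in-vehicle (and possibly external) display screens.
        print(f"Driver emotion: {result.category} ({result.degree:.0%})")

def run_cascade(frame, detector, recognizer, display) -> None:
    """Pass one frame through the three cascaded systems."""
    face_model = detector.process(frame)
    if face_model is not None:
        display.show(recognizer.recognize(face_model))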
Specifically, the face detection system comprises an image acquisition module, a face localization module, and an image preprocessing module;
the image acquisition module is configured to acquire the driver's facial expression information in real time, convert it into face image information, and send the face image information to the face localization module;
the face localization module is configured to detect the face region to be processed in the face image information, remove noise interference, and send the processed face image information to the image preprocessing module;
the image preprocessing module is configured to process the face image information by geometric normalization and grayscale normalization, so as to obtain a face image model with a unified face size and unified face illumination.
The emotion recognition system comprises a face database, a face emotion feature extraction module, a face emotion feature selection module, and a face emotion feature classification module;
the face database is configured to store the feature analysis model and the emotion classification model for the face image information;
the face emotion feature extraction module is configured to extract feature vectors from the received face image model by Gabor filtering to obtain a feature vector model, and to send the obtained feature vector model to the face emotion feature selection module;
the face emotion feature selection module is configured to reduce the dimensionality of the extracted feature vector model with the PCA algorithm, delete redundant information to obtain the processed data, and send the processed data to the face emotion feature classification module;
the face emotion feature classification module is configured to classify the received data according to the trained feature analysis model and emotion classification model, obtain the name of the category to which the current emotion belongs, and send the category name to the emotion display system.
When the driver emotion recognition and display device is in operation, for example when the driver switches the device on, or when the device is switched on automatically as the driver starts the car, the image acquisition module acquires the driver's facial expression information in real time and converts it into face image information; that is, the image acquisition module acquires the face image of the current driver in real time. Since the driver of a given vehicle is relatively fixed, the image acquisition module can acquire the driver's image information accurately. The image acquisition module is an image acquisition terminal, such as an in-vehicle front-facing camera, and may, for example, acquire one image of the current driver every 5 minutes. After the image acquisition module has obtained the image information of the current driver, it sends this information to the face localization module.
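A minimal sketch of such a periodic acquisition loop, assuming OpenCV, camera device index 0, and the 5-minute interval mentioned above; none of these specifics are fixed by the patent.

import time
import cv2

CAPTURE_INTERVAL_S = 5 * 60  # one frame roughly every 5 minutes, as in the example above

def acquisition_loop(process_frame, device_index=0):
    """Grab a frame from the in-vehicle camera at a fixed interval and hand it on."""
    cap = cv2.VideoCapture(device_index)
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                process_frame(frame)          # forward the frame to the face localization step
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cap.release()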
After the face localization module receives the image information of the current driver, it detects the face region to be processed in the acquired image information, determines the face region to be recognized, removes background information and complex-environment interference, and sends the result to the image preprocessing module for further processing.
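The patent does not name a particular face detector; as one possible realization of this step, the sketch below uses an OpenCV Haar cascade and keeps only the largest detected face, discarding the background.

import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_face(frame):
    """Return the cropped grayscale face region to be processed, or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])  # keep the largest face only
    return gray[y:y + h, x:x + w]                             # background is rejected here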
The image preprocessing module further processes the face image information output by the face localization module: it applies geometric normalization and grayscale normalization to the face image information in a unified manner, obtains a face image model with a unified face size and unified face illumination, and sends the obtained face image model to the face emotion feature extraction module.
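A sketch of this preprocessing step under the assumption of a fixed 64x64 output size; the target size and the use of histogram equalization for illumination are assumptions, since the patent only names geometric and grayscale normalization in general.

import cv2
import numpy as np

def normalize_face(face_gray, size=(64, 64)):
    """Apply geometric and grayscale normalization to the cropped face."""
    face = cv2.resize(face_gray, size, interpolation=cv2.INTER_AREA)  # unify the face size
    face = cv2.equalizeHist(face)                                     # unify the illumination
    return face.astype(np.float32) / 255.0                            # scale intensities to [0, 1]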
The face emotion feature extraction module extracts emotional feature points from the received face image model using the Gabor method, obtains the feature vector of the face image model, and sends the obtained feature vector to the face emotion feature selection module.
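A sketch of Gabor feature extraction with a small filter bank; the number of scales and orientations and the mean/std pooling are assumptions, since the description only names the Gabor method.

import cv2
import numpy as np

def gabor_features(face, ksize=21, sigmas=(2, 4, 6, 8), n_orientations=8):
    """Filter the normalized face with a Gabor filter bank and pool each response map."""
    feats = []
    for sigma in sigmas:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, 10.0, 0.5)
            response = cv2.filter2D(face, cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])   # simple mean/std pooling per filter
    return np.asarray(feats, dtype=np.float32)                # the "feature vector model"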
The face emotion feature selection module reduces the dimensionality of the received feature vector with the PCA algorithm, deletes redundant information to obtain the processed data, and sends the processed data to the face emotion feature classification module.
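A sketch of the PCA dimensionality-reduction step using scikit-learn; the number of retained components (50) is an assumption not fixed by the patent.

import numpy as np
from sklearn.decomposition import PCA

def fit_pca(training_features, n_components=50):
    """Fit PCA on a matrix of Gabor feature vectors (one row per training image)."""
    return PCA(n_components=n_components).fit(training_features)

def reduce_features(pca, feature_vector):
    """Project one feature vector onto the retained components, discarding redundant information."""
    return pca.transform(np.asarray(feature_vector).reshape(1, -1))[0]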
The face emotion feature classification module analyses the processed data according to the trained feature analysis model in the face database, passes the analysis result through the trained emotion classification model to obtain the emotion recognition result of the current driver, i.e. the name of the category of the current emotion, and then sends the recognition result to the emotion display system for display.
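The patent leaves the classifier type open; as one possible instantiation, the sketch below stands in a linear SVM for the combined feature analysis / emotion classification models and returns both a category name and a probability that can serve as the displayed emotion degree. The label set is illustrative.

from sklearn.svm import SVC

EMOTION_LABELS = ["neutral", "happiness", "anger", "sadness", "surprise"]  # illustrative set

def train_classifier(reduced_features, labels):
    """Train the emotion classifier on PCA-reduced feature vectors."""
    return SVC(kernel="linear", probability=True).fit(reduced_features, labels)

def classify_emotion(classifier, reduced_vector):
    """Return the current emotion category name and a degree score for the display system."""
    probs = classifier.predict_proba(reduced_vector.reshape(1, -1))[0]
    best = probs.argmax()
    return classifier.classes_[best], float(probs[best])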
For example, while driving, the driver may encounter a slow-moving vehicle ahead and become angry, while neither the driver nor the drivers of other vehicles realize that the driver is in an angry state; the driver's anger may then be aggravated by the behaviour of other cars, such as honking from a vehicle behind, and a driver in an angry state is prone to dangerous driving. When the device obtains the driver's emotion information and displays the driver's emotion, the driver can check his or her own emotional state on the in-vehicle display screen; a display screen can also be arranged at the rear of the vehicle to show the driver's emotional state to the drivers of the vehicles behind, or at the front of the vehicle so that the driver of the vehicle ahead learns the emotional state of this driver, or forward-facing and rear-facing display screens can be arranged on top of the vehicle to show the driver's emotional state to the vehicles in front and behind. This can remind the driver at the wheel and also makes it easier for other drivers on the road and for traffic management personnel to assess road conditions and formulate corresponding contingency measures.
In this embodiment, in order to increase the speed of emotion recognition for newly acquired face images, the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training images used by the face emotion feature classification module to train the feature analysis model;
the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so that the image training database can optimize the feature analysis model and the emotion classification model for the face image information.
The device stores the acquired driver image information, together with the category obtained from its analysis, in the image dynamic database. The image dynamic database contains the image information and the emotion recognition result of each recognition; once all of these images have been transferred to the image training database, the feature analysis model and the emotion classification model are re-established and the image dynamic database is emptied. When the image training database is used to optimize the feature analysis model and the emotion classification model, feature points are first extracted from every image in the image training database using the Gabor method to obtain the corresponding feature vectors; the feature vector of each image is then reduced in dimensionality using the PCA method to obtain a new feature vector; finally, for all images in the image training database, the feature weights are trained iteratively by machine learning according to the feature vectors and the categories to which the images belong, so as to obtain the feature analysis model and the emotion classification model, thereby optimizing both models.
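A sketch of this periodic model update, reusing the helper functions sketched above (gabor_features, fit_pca, train_classifier); the database objects and their access methods are assumptions introduced for illustration only.

import numpy as np

def update_models(training_db, dynamic_db):
    """Fold the dynamic database into the training database and rebuild both models."""
    training_db.add_images(dynamic_db.fetch_all())   # recent images together with their classifications
    dynamic_db.clear()                               # the dynamic database is emptied

    images, labels = training_db.fetch_images_and_labels()
    features = np.vstack([gabor_features(img) for img in images])   # Gabor features per image
    pca = fit_pca(features)                                         # new feature analysis model
    classifier = train_classifier(pca.transform(features), labels)  # new emotion classification model
    return pca, classifier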
In this embodiment, in order to help the driver recognize his or her own emotion, the emotion display system displays the emotion category and the emotion degree on the output display screen according to the face emotion recognition result sent by the emotion recognition system. By showing the driver's emotion category and emotion degree on the output display screen, the driver can be helped, according to the displayed category and degree, to recognize his or her own emotion and to deal with it in time.
Referring to Fig. 2, in another embodiment, a machine-learning-based driver emotion recognition and display method comprises the following steps (a minimal sketch of the three steps follows the list):
Step S210: acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model;
Step S220: extracting the emotional feature points of the face image model, and obtaining the face emotion recognition result from the face information through a feature extraction model and an emotion classification model;
Step S230: displaying the driver's emotion information according to the type of the face emotion recognition result.
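A compact sketch chaining steps S210 to S230, reusing the helper functions and the EmotionResult type sketched in the device embodiment above; all names are illustrative assumptions rather than part of the patented method.

def recognize_and_display(frame, pca, classifier, display):
    """Run one frame through acquisition, recognition, and display (steps S210-S230)."""
    face = locate_face(frame)                                    # S210: detect and crop the face region
    if face is None:
        return
    face_model = normalize_face(face)                            # S210: geometric + grayscale normalization
    reduced = reduce_features(pca, gabor_features(face_model))   # S220: Gabor extraction + PCA reduction
    category, degree = classify_emotion(classifier, reduced)     # S220: emotion classification
    display.show(EmotionResult(category, degree))                # S230: show the driver's emotion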
Face image information containing the driver is acquired through a data acquisition terminal (such as an in-vehicle front-facing camera), and a face image model containing face information that can be processed further and effectively is obtained through image processing; the resulting face image model is sent to the emotion recognition system. The emotion recognition system extracts the emotion-related effective emotional feature points from the received face image model by means of the feature extraction model, obtains the recognition result of the face emotion in the face image model through the feature analysis model and the emotion classification model, and then sends this recognition result to the emotion display system, which, according to the type of the obtained recognition result, can display the driver's emotion information on the in-vehicle display screen.
Specifically, the step of "acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model" specifically comprises:
acquiring the driver's facial expression information in real time and converting it into face image information;
detecting the face region to be processed in the face image information and removing noise interference;
processing the face image information by geometric normalization and grayscale normalization to obtain a face image model with a unified face size and unified face illumination.
The step of "obtaining the face emotion recognition result from the face information by passing the extracted face image model through the feature extraction model and the emotion classification model in the face database" specifically comprises:
extracting feature vectors from the received face image model by Gabor filtering to obtain a feature vector model;
reducing the dimensionality of the extracted feature vector model with the PCA algorithm, and deleting redundant information to obtain the processed data;
classifying the received data according to the trained feature analysis model and emotion classification model in the face database to obtain the name of the category to which the current emotion belongs;
wherein the face database is configured to store the feature extraction model and the feature classification model for the face image information.
When the driver emotion recognition and display device is in operation, for example when the driver switches the device on, or when the device is switched on automatically as the driver starts the car, the image acquisition module acquires the driver's facial expression information in real time and converts it into face image information; that is, the image acquisition module acquires the face image of the current driver in real time. Since the driver of a given vehicle is relatively fixed, the image acquisition module can acquire the driver's image information accurately. The image acquisition module is an image acquisition terminal, such as an in-vehicle front-facing camera, and may, for example, acquire one image of the current driver every 5 minutes. After the image acquisition module has obtained the image information of the current driver, it sends this information to the face localization module.
After the face localization module receives the image information of the current driver, it detects the face region to be processed in the acquired image information, determines the face region to be recognized, removes background information and complex-environment interference, and sends the result to the image preprocessing module for further processing.
The image preprocessing module further processes the face image information output by the face localization module: it applies geometric normalization and grayscale normalization to the face image information in a unified manner, obtains a face image model with a unified face size and unified face illumination, and sends the obtained face image model to the face emotion feature extraction module.
The face emotion feature extraction module extracts emotional feature points from the received face image model using the Gabor method, obtains the feature vector of the face image model, and sends the obtained feature vector to the face emotion feature selection module.
The face emotion feature selection module reduces the dimensionality of the received feature vector with the PCA algorithm, deletes redundant information to obtain the processed data, and sends the processed data to the face emotion feature classification module.
The face emotion feature classification module analyses the processed data according to the trained feature analysis model in the face database, passes the analysis result through the trained emotion classification model to obtain the emotion recognition result of the current driver, i.e. the name of the category of the current emotion, and then sends the recognition result to the emotion display system for display.
For example, while driving, the driver may encounter a slow-moving vehicle ahead and become angry, while neither the driver nor the drivers of other vehicles realize that the driver is in an angry state; the driver's anger may then be aggravated by the behaviour of other cars, such as honking from a vehicle behind, and a driver in an angry state is prone to dangerous driving. When the device obtains the driver's emotion information and displays the driver's emotion, the driver can check his or her own emotional state on the in-vehicle display screen; a display screen can also be arranged at the rear of the vehicle to show the driver's emotional state to the drivers of the vehicles behind, or at the front of the vehicle so that the driver of the vehicle ahead learns the emotional state of this driver, or forward-facing and rear-facing display screens can be arranged on top of the vehicle to show the driver's emotional state to the vehicles in front and behind. This can remind the driver at the wheel and also makes it easier for other drivers on the road and for traffic management personnel to assess road conditions and formulate corresponding contingency measures.
In this embodiment, in order to increase the speed of emotion recognition for newly acquired face images, the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training image models used by the face emotion feature analysis module;
the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so as to optimize the feature extraction model and the emotion classification model for the face image information.
The image dynamic database contains the image information and the emotion recognition result of each recognition; once all of these images have been transferred to the image training database, the feature analysis model and the emotion classification model are re-established and the image dynamic database is emptied. When the image training database is used to optimize the feature analysis model and the emotion classification model, feature points are first extracted from every image in the image training database using the Gabor method to obtain the corresponding feature vectors; the feature vector of each image is then reduced in dimensionality using the PCA method to obtain a new feature vector; finally, for all images in the image training database, the feature weights are trained iteratively by machine learning according to the feature vectors and the categories to which the images belong, so as to obtain the feature analysis model and the emotion classification model, thereby optimizing both models.
In this embodiment, in order to help the driver recognize his or her own emotion, the step of "displaying the driver's emotion information according to the type of the face emotion recognition result" specifically further comprises: displaying the driver's emotion category and emotion degree according to the face emotion recognition result. By showing the driver's emotion category and emotion degree on the output display screen, the driver can be helped, according to the displayed category and degree, to recognize his or her own emotion and to deal with it in time.
It should be noted that, although the above embodiments have been described herein, they are not intended to limit the scope of patent protection of the present invention. Therefore, any changes and modifications made to the embodiments described herein on the basis of the innovative concept of the present invention, and any equivalent structures or equivalent process transformations made using the contents of the description and drawings of the present invention, whether applying the above technical solution directly or indirectly to other related technical fields, fall within the scope of patent protection of the present invention.

Claims (10)

1. A machine-learning-based driver emotion recognition and display device, characterized by comprising a face detection system, an emotion recognition system, and an emotion display system;
wherein the face detection system is connected to the emotion recognition system and is configured to acquire image information of the driver's face, process the acquired image information to obtain a face image model, and send the processed face image model to the emotion recognition system;
the emotion recognition system is connected to the emotion display system and is configured to extract the emotional feature points of the face image model, obtain a face emotion recognition result from the face information through a feature analysis model and an emotion classification model, and send the face emotion recognition result to the emotion display system;
and the emotion display system is configured to display the driver's emotion information according to the type of the received face emotion recognition result.
2. The machine-learning-based driver emotion recognition and display device according to claim 1, characterized in that the face detection system comprises an image acquisition module, a face localization module, and an image preprocessing module;
the image acquisition module is configured to acquire the driver's facial expression information in real time, convert it into face image information, and send the face image information to the face localization module;
the face localization module is configured to detect the face region to be processed in the face image information, remove noise interference, and send the processed face image information to the image preprocessing module;
and the image preprocessing module is configured to process the face image information by geometric normalization and grayscale normalization, so as to obtain a face image model with a unified face size and unified face illumination.
3. The machine-learning-based driver emotion recognition and display device according to claim 1, characterized in that the emotion recognition system comprises a face database, a face emotion feature extraction module, a face emotion feature selection module, and a face emotion feature classification module;
the face database is configured to store the feature analysis model and the emotion classification model for the face image information;
the face emotion feature extraction module is configured to extract feature vectors from the received face image model by Gabor filtering to obtain a feature vector model, and to send the obtained feature vector model to the face emotion feature selection module;
the face emotion feature selection module is configured to reduce the dimensionality of the extracted feature vector model with the PCA algorithm, delete redundant information to obtain the processed data, and send the processed data to the face emotion feature classification module;
and the face emotion feature classification module is configured to classify the received data according to the trained feature analysis model and emotion classification model, obtain the name of the category to which the current emotion belongs, and send the category name to the emotion display system.
4. The machine-learning-based driver emotion recognition and display device according to claim 3, characterized in that the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training images used by the face emotion feature classification module to train the feature analysis model;
and the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so as to optimize the feature analysis model and the emotion classification model for the face image information.
5. The machine-learning-based driver emotion recognition and display device according to claim 1, characterized in that the emotion display system is configured to display the emotion category and the emotion degree on an output display screen according to the face emotion recognition result sent by the emotion recognition system.
6. A machine-learning-based driver emotion recognition and display method, characterized by comprising the following steps:
acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model;
extracting the emotional feature points of the face image model, and obtaining the face emotion recognition result from the face information through a feature extraction model and an emotion classification model;
displaying the driver's emotion information according to the type of the face emotion recognition result.
7. The machine-learning-based driver emotion recognition and display method according to claim 6, characterized in that the step of "acquiring the face image information of the driver, and processing the acquired face image information to obtain a face image model" specifically comprises:
acquiring the driver's facial expression information in real time and converting it into face image information;
detecting the face region to be processed in the face image information and removing noise interference;
processing the face image information by geometric normalization and grayscale normalization to obtain a face image model with a unified face size and unified face illumination.
8. The machine-learning-based driver emotion recognition and display method according to claim 6, characterized in that the step of "obtaining the face emotion recognition result from the face information by passing the extracted face image model through the feature extraction model and the emotion classification model in the face database" specifically comprises:
extracting feature vectors from the received face image model by Gabor filtering to obtain a feature vector model;
reducing the dimensionality of the extracted feature vector model with the PCA algorithm, and deleting redundant information to obtain the processed data;
classifying the received data according to the trained feature analysis model and emotion classification model in the face database to obtain the name of the category to which the current emotion belongs;
wherein the face database is configured to store the feature extraction model and the feature classification model for the face image information.
9. The machine-learning-based driver emotion recognition and display method according to claim 8, characterized in that the emotion recognition system further comprises an image training database and an image dynamic database;
the image training database is configured to store the training image models used by the face emotion feature analysis module;
and the image dynamic database is configured to record the driver's face image information over a period of time, store the classification results of that face image information, and update the image training database with the stored classification results at a fixed interval, so as to optimize the feature extraction model and the emotion classification model for the face image information.
10. The machine-learning-based driver emotion recognition and display method according to claim 6, characterized in that the step of "displaying the driver's emotion information according to the type of the face emotion recognition result" specifically further comprises:
displaying the driver's emotion category and emotion degree according to the face emotion recognition result.
CN201810544319.9A 2018-05-31 2018-05-31 Machine-learning-based driver emotion recognition and display device and method Pending CN108764169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810544319.9A CN108764169A (en) Machine-learning-based driver emotion recognition and display device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810544319.9A CN108764169A (en) Machine-learning-based driver emotion recognition and display device and method

Publications (1)

Publication Number Publication Date
CN108764169A true CN108764169A (en) 2018-11-06

Family

ID=64000819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810544319.9A Pending CN108764169A (en) 2018-05-31 2018-05-31 A kind of driver's Emotion identification based on machine learning and display device and method

Country Status (1)

Country Link
CN (1) CN108764169A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714660A (en) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
CN104537336A (en) * 2014-12-17 2015-04-22 厦门立林科技有限公司 Face identification method and system with self-learning function
CN104548309A (en) * 2015-01-05 2015-04-29 浙江工业大学 Device and method for adjusting driver emotional state through different affective characteristic music
CN105551499A (en) * 2015-12-14 2016-05-04 渤海大学 Emotion visualization method facing voice and facial expression signal
KR20170094836A (en) * 2016-02-12 2017-08-22 한국전자통신연구원 Apparatus and Method for recognizing a driver’s emotional state
CN106529421A (en) * 2016-10-21 2017-03-22 燕山大学 Emotion and fatigue detecting auxiliary driving system based on hybrid brain computer interface technology
CN106627589A (en) * 2016-12-27 2017-05-10 科世达(上海)管理有限公司 Vehicle driving safety auxiliary method and system and vehicle
CN107235045A (en) * 2017-06-29 2017-10-10 吉林大学 Vehicle-mounted driver road-rage state recognition and interaction system considering physiological and manipulation information
CN107776579A (en) * 2017-09-14 2018-03-09 中国第汽车股份有限公司 A kind of direct feeling driver status alarm set
CN107766835A (en) * 2017-11-06 2018-03-06 贵阳宏益房地产开发有限公司 Traffic safety detection method and device

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766771A (en) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Operable object control method and apparatus, computer device, and storage medium
CN109815817A (en) * 2018-12-24 2019-05-28 北京新能源汽车股份有限公司 Driver emotion recognition method and music pushing method
CN111382608A (en) * 2018-12-28 2020-07-07 广州盈可视电子科技有限公司 Intelligent detection system with emotion recognition function
CN110197677A (en) * 2019-05-16 2019-09-03 北京小米移动软件有限公司 A kind of control method for playing back, device and playback equipment
CN110222623A (en) * 2019-05-31 2019-09-10 深圳市恩钛控股有限公司 Micro-expression analysis method and system
CN110944149A (en) * 2019-11-12 2020-03-31 上海博泰悦臻电子设备制造有限公司 Child care system and method for vehicle
CN112927721A (en) * 2019-12-06 2021-06-08 观致汽车有限公司 Human-vehicle interaction method, system, vehicle and computer readable storage medium
CN113128534A (en) * 2019-12-31 2021-07-16 北京中关村科金技术有限公司 Method, device and storage medium for emotion recognition
CN111310730A (en) * 2020-03-17 2020-06-19 扬州航盛科技有限公司 Driving behavior early warning system based on facial expressions
CN112257588A (en) * 2020-10-22 2021-01-22 浙江合众新能源汽车有限公司 Method for automatically identifying expression of driver
CN113378733A (en) * 2021-06-17 2021-09-10 杭州海亮优教教育科技有限公司 System and device for constructing emotion diary and daily activity recognition
CN115359532A (en) * 2022-08-23 2022-11-18 润芯微科技(江苏)有限公司 Human face emotion capturing and outputting device based on 3D sensing

Similar Documents

Publication Publication Date Title
CN108764169A (en) Machine-learning-based driver emotion recognition and display device and method
Lu et al. Driver action recognition using deformable and dilated faster R-CNN with optimized region proposals
Moslemi et al. Driver distraction recognition using 3d convolutional neural networks
CN106611169B (en) A kind of dangerous driving behavior real-time detection method based on deep learning
Baheti et al. Detection of distracted driver using convolutional neural network
CN110059582B (en) Driver behavior identification method based on multi-scale attention convolution neural network
Sun et al. Traffic sign detection and recognition based on convolutional neural network
CN105956626A (en) Deep learning based vehicle license plate position insensitive vehicle license plate recognition method
CN110222596B (en) Driver behavior analysis anti-cheating method based on vision
CN106203330A (en) A kind of vehicle classification method based on convolutional neural networks
CN106469309A (en) The method and apparatus of vehicle monitoring, processor, image capture device
KR101845769B1 (en) Car rear detection system using convolution neural network, and method thereof
CN105740767A (en) Driver road rage real-time identification and warning method based on facial features
CN109635784A (en) Traffic sign recognition method based on improved convolutional neural networks
KR102105954B1 (en) System and method for accident risk detection
CN113920491A (en) Fatigue detection system, method, medium and detection device based on facial skeleton model
CN113642646A (en) Image threat article classification and positioning method based on multiple attention and semantics
Gan et al. Bi-directional vectors from apex in cnn for micro-expression recognition
CN104794432A (en) Method and system for rapid cascade type car logo vision inspection and identification
CN116129405A (en) Method for identifying anger emotion of driver based on multi-mode hybrid fusion
Chen Traffic lights detection method based on the improved yolov5 network
Al Nasim et al. An automated approach for the recognition of bengali license plates
CINAR et al. Feature extraction and recognition on traffic sign images
CN114120250B (en) Video-based motor vehicle illegal manned detection method
Padalia Detection and Number Plate Recognition of Non-Helmeted Motorcyclists using YOLO

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination