CN107092895A - A multimodal emotion recognition method based on deep belief networks - Google Patents

A multimodal emotion recognition method based on deep belief networks

Info

Publication number
CN107092895A
CN107092895A (application number CN201710322847.5A)
Authority
CN
China
Prior art keywords
belief network
emotion recognition
deep belief
emotion
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710322847.5A
Other languages
Chinese (zh)
Inventor
黄�俊
张若凡
刘科征
崔浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201710322847.5A priority Critical patent/CN107092895A/en
Publication of CN107092895A publication Critical patent/CN107092895A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a multimodal emotion recognition method based on deep belief networks, with steps as follows. First, a multimodal emotion recognition database is built containing three types of emotion samples, namely a speech emotion recognition database, an ECG emotion recognition database, and a respiration emotion recognition database. Second, a deep belief network classifier is obtained for each emotion recognition database and trained on that database's data set; each such classifier consists of M deep belief network models together with a classifier jointly connected to the outputs of the M models. Third, the deep belief network classifiers of the three emotion recognition databases are fused at the decision level by voting to obtain the final emotion recognition result. The invention performs emotion recognition on multimodal affective database samples covering speech, ECG, and respiration, and replaces traditional hand-crafted feature extraction with deep-belief-network classifiers, reducing the dependence of affective feature extraction on human experience and repeated experiments, and offering a new approach to combining deep belief networks with multimodal emotion recognition.

Description

A multimodal emotion recognition method based on deep belief networks
Technical field:
The invention belongs to the fields of signal processing and emotion recognition, and more particularly relates to a multimodal emotion recognition method based on deep belief networks.
Background technology
Emotion recognition has long been a hot topic in pattern recognition. Its goal is to analyze and process a user's physiological signals by computer and infer the user's affective state. Single-modality emotion recognition based on speech or on a single physiological signal is relatively mature, but recognition from a single information source tends to be unreliable and inaccurate. Multimodal emotion recognition, which exploits the complementary features of different modalities, therefore merits further study.
The key steps of multimodal emotion recognition are feature extraction and classifier design. Common classifiers include support vector machines (SVM), neural networks, k-nearest-neighbor algorithms, and Bayesian methods, and researchers at home and abroad mostly use these algorithms when tackling multimodal emotion recognition. Among published patent documents, the invention patent "A multimodal emotion recognition method based on multiple kernel learning" extracts expression, speech, and physiological features, fuses the kernel matrix groups of the three modalities into fused multimodal affective features, and finally trains and recognizes with a multiple-kernel support vector machine as the classifier, effectively identifying basic emotions such as anger, disgust, fear, happiness, sadness, and surprise. In the invention patent of Shao Jie and Zhao Qian, "A natural human emotion recognition method combining the expression and behavior modalities", torso motion features are extracted from feature-point trajectories by clustering, facial expression features are extracted from partitioned face images, and emotion recognition is then performed by multimodal emotion recognition techniques.
Such multimodal emotion recognition methods depend heavily on affective feature extraction. Current feature extraction methods are mostly hand-engineered: redundant or irrelevant features are removed by a feature selection algorithm to obtain an optimal or suboptimal feature subset, with the aim of improving recognition accuracy and reducing feature dimensionality. This process relies greatly on expert experience and repeated experiments, consuming considerable manpower and computing resources while rarely yielding an optimal affective feature representation, which limits the final effect of emotion recognition.
Addressing the shortcomings of feature extraction in existing multimodal emotion recognition techniques, the present invention exploits the ability of deep belief networks to extract features automatically and combines it with multimodal emotion recognition to realize a multimodal emotion recognition method based on deep belief networks. The method uses both the correlation and the complementarity of multimodal features to achieve more reliable and stable emotion recognition, while the nonlinear structure of the deep belief network learns the structure and distribution of complex data, automatically extracts higher-level features for classification, and reduces the dependence of affective feature extraction on human effort.
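The "automatic feature extraction" of a deep belief network rests on stacked restricted Boltzmann machines, each layer's hidden activations serving as the next layer's input. The patent gives no implementation, so the following is only a minimal sketch of one RBM layer's hidden-unit inference; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_hidden_probs(v, W, b_h):
    # P(h=1 | v) for a binary restricted Boltzmann machine: these
    # probabilities are the layer's automatically extracted features.
    # v: (batch, visible), W: (visible, hidden), b_h: (hidden,)
    return sigmoid(v @ W + b_h)
```

Stacking several such layers, each trained (e.g. by contrastive divergence) on the previous layer's activations, yields the DBN's hierarchy of increasingly abstract features.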
Content of the invention
The object of the invention is to overcome the shortcomings and defects of the prior art by providing a higher-accuracy multimodal emotion recognition method based on deep belief networks, in which the final emotion recognition result is obtained by decision-level fusion. The technical scheme is realized as follows: a multimodal emotion recognition method based on deep belief networks, with steps as follows:
First, build a multimodal emotion recognition database containing three types of emotion samples, namely a speech emotion recognition database, an ECG emotion recognition database, and a respiration emotion recognition database, with n samples per emotion class;
Second, extract features from each of the three emotion recognition databases to obtain the feature vector of every sample in the speech, ECG, and respiration emotion recognition databases, and draw 60% of the samples from each database as the validation set;
Third, set the number of subspaces M and the dimensionality drawn from each sample's feature vector per subspace; the number of dimensions extracted each time is n;
Fourth, randomly sample the feature vector of each sample M times to form M subspaces, i.e. a partial combination of each sample's feature vector forms one subspace, and each subspace forms a corresponding new training set; the number of dimensions randomly drawn from each sample's feature vector is n;
Fifth, generate M deep belief network models, connect one classifier jointly to the outputs of the M deep belief network models, and train, obtaining a deep belief network classifier for each of the three emotion recognition databases;
Sixth, fuse the deep belief network classifiers of the three emotion recognition databases at the decision level according to a given criterion to obtain the final recognition result.
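The random-subspace construction of steps 3 and 4 can be sketched as follows. This is an assumed implementation; the function name `build_subspaces` and the seeded generator are illustrative choices, and the patent does not specify how the random draws are made.

```python
import numpy as np

def build_subspaces(features, M, n_dims, seed=0):
    # Draw M random feature subsets ("subspaces"), each keeping n_dims
    # columns of the full feature matrix; every subspace becomes a new
    # training set for one deep belief network model.
    rng = np.random.default_rng(seed)
    num_features = features.shape[1]
    return [features[:, rng.choice(num_features, size=n_dims, replace=False)]
            for _ in range(M)]

# Example: 10 samples with 4-dimensional feature vectors, M = 3 subspaces
# of 2 dimensions each -> three (10, 2) training sets.
X = np.arange(40.0).reshape(10, 4)
subspaces = build_subspaces(X, M=3, n_dims=2)
```

Sampling without replacement within each draw keeps every subspace's columns distinct, while the M independent draws give the ensemble its diversity.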
Compared with the prior art, the present invention has the following advantages and effects:
(1) The method designs and generates deep belief network classifiers that extract affective features automatically, replacing manual feature extraction and thereby improving classification accuracy.
(2) For each emotion recognition database, M deep belief network models and a classifier jointly connected to their outputs are trained on that database's data set to form a deep belief network classifier; the feature vector of the measured signal is then fed to the deep belief network classifiers, from which the final multimodal emotion recognition result is obtained.
(3) The invention classifies the features of multiple modalities through the trained classifiers and fuses them at the decision level by a corresponding weighting model to obtain the final emotion recognition result, improving recognition performance.
Brief description of the drawings
Fig. 1 is a block diagram of the generation of the deep belief network classifier for each emotion recognition database in the present invention.
Fig. 2 is a flowchart of the decision-level fusion algorithm in the present invention.
Embodiment
The present invention provides a multimodal emotion recognition method based on deep belief networks. To make the purpose, technical scheme, and effects of the invention clearer and more definite, the invention is described in more detail below. It should be understood that the embodiments described herein are only used to explain the invention and are not intended to limit it.
This example discloses a multimodal emotion recognition method based on deep belief networks, with steps as follows:
First, build a multimodal emotion recognition database containing three types of emotion samples, namely a speech emotion recognition database, an ECG emotion recognition database, and a respiration emotion recognition database, with n samples per emotion class;
Second, extract features from each of the three emotion recognition databases to obtain the feature vector of every sample in the speech, ECG, and respiration emotion recognition databases, and draw 60% of the samples from each database as the validation set;
Third, set the number of subspaces M and the dimensionality drawn from each sample's feature vector per subspace; the number of dimensions extracted each time is n;
Fourth, randomly sample the feature vector of each sample M times to form M subspaces, i.e. a partial combination of each sample's feature vector forms one subspace, and each subspace forms a corresponding new training set; the number of dimensions randomly drawn from each sample's feature vector is n;
Fifth, generate M deep belief network models, connect one classifier jointly to the outputs of the M deep belief network models, and train, obtaining a deep belief network classifier for each of the three emotion recognition databases;
Sixth, fuse the deep belief network classifiers of the three emotion recognition databases at the decision level according to a given criterion to obtain the final recognition result.
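The decision-level fusion of the sixth step, carried out by voting per claim 5, can be sketched as a simple majority vote over the per-modality predictions. The function name and the tie-breaking choice below are assumptions; the patent does not specify how ties between modalities are resolved.

```python
from collections import Counter

def vote_fusion(predictions):
    # Decision-level fusion by majority vote: one predicted emotion label
    # per modality classifier (speech, ECG, respiration). Ties resolve to
    # the label encountered first in the input order (an assumption).
    return Counter(predictions).most_common(1)[0][0]

# Example: speech and respiration classifiers agree, ECG dissents.
final = vote_fusion(["happy", "sad", "happy"])  # majority label wins
```

With three modalities, any emotion predicted by at least two classifiers becomes the final result, which is what makes the fused decision more reliable than any single modality.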

Claims (5)

1. A multimodal emotion recognition method based on deep belief networks, characterized in that the steps are as follows:
Step 1: build a multimodal emotion recognition database containing three types of emotion samples, namely a speech emotion recognition database, an ECG emotion recognition database, and a respiration emotion recognition database, with n samples per emotion class;
Step 2: extract features from each of the three emotion recognition databases to obtain the feature vector of every sample in the speech, ECG, and respiration emotion recognition databases, and draw 60% of the samples from each database as the validation set;
Step 3: set the number of subspaces M and the dimensionality drawn from each sample's feature vector per subspace; the number of dimensions extracted each time is n;
Step 4: randomly sample the feature vector of each sample M times to form M subspaces, i.e. a partial combination of each sample's feature vector forms one subspace, and each subspace forms a corresponding new training set; the number of dimensions randomly drawn from each sample's feature vector is n;
Step 5: generate M deep belief network models, connect one classifier jointly to the outputs of the M deep belief network models, and train, obtaining a deep belief network classifier for each of the three emotion recognition databases;
Step 6: fuse the deep belief network classifiers of the three emotion recognition databases at the decision level according to a given criterion to obtain the final recognition result.
2. The multimodal emotion recognition method based on deep belief networks according to claim 1, characterized in that the classifier jointly connected to the outputs of the M deep belief network models is a support vector machine with a radial basis function (RBF) kernel.
3. The multimodal emotion recognition method based on deep belief networks according to claim 1, characterized in that a deep belief network classifier is designed for each emotion recognition database.
4. The multimodal emotion recognition method based on deep belief networks according to claim 1, characterized in that decision-level fusion is performed on the deep belief network classifiers constructed for the emotion recognition databases to obtain the final recognition result.
5. The multimodal emotion recognition method based on deep belief networks according to claim 1, characterized in that the decision-level fusion of the multiple emotion classifiers uses a voting method.
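Claim 2 names an RBF-kernel support vector machine as the joint classifier on the DBN outputs. The Gaussian (RBF) kernel it refers to can be computed as below; this is a generic sketch, and the `gamma` hyperparameter is a free choice not specified in the patent.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2).
    # X: (m, d), Y: (k, d) stacked feature vectors (here, DBN outputs).
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    # Clamp tiny negative values from floating-point cancellation.
    return np.exp(-gamma * np.maximum(sq, 0.0))
```

An SVM trained with this kernel separates the emotion classes nonlinearly in the induced feature space; in practice one would hand this kernel (or the equivalent built-in) to an off-the-shelf SVM implementation rather than solve the dual by hand.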
CN201710322847.5A 2017-05-09 2017-05-09 A multimodal emotion recognition method based on deep belief networks Pending CN107092895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710322847.5A CN107092895A (en) 2017-05-09 2017-05-09 A multimodal emotion recognition method based on deep belief networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710322847.5A CN107092895A (en) 2017-05-09 2017-05-09 A multimodal emotion recognition method based on deep belief networks

Publications (1)

Publication Number Publication Date
CN107092895A true CN107092895A (en) 2017-08-25

Family

ID=59637254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710322847.5A Pending CN107092895A (en) 2017-05-09 2017-05-09 A multimodal emotion recognition method based on deep belief networks

Country Status (1)

Country Link
CN (1) CN107092895A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460737A (en) * 2018-11-13 2019-03-12 四川大学 A multimodal speech emotion recognition method based on an enhanced residual neural network
CN109785863A (en) * 2019-02-28 2019-05-21 中国传媒大学 A speech emotion recognition method and system based on deep belief networks
CN113486752A (en) * 2021-06-29 2021-10-08 吉林大学 An emotion recognition method and system based on ECG signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778807A (en) * 2012-10-23 2014-05-07 天津市天堰医教科技开发有限公司 Geriatric nursing scenario simulation training system
CN106250855A (en) * 2016-08-02 2016-12-21 南京邮电大学 A multimodal emotion recognition method based on multiple kernel learning
CN106297825A (en) * 2016-07-25 2017-01-04 华南理工大学 A speech emotion recognition method based on ensemble deep belief networks
CN106503646A (en) * 2016-10-19 2017-03-15 竹间智能科技(上海)有限公司 Multimodal emotion recognition system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄程韦 (Huang Chengwei) et al.: "Multimodal emotion recognition method based on speech signals and ECG signals", 《东南大学学报(自然科学版)》 (Journal of Southeast University, Natural Science Edition) *


Similar Documents

Publication Publication Date Title
Al-masni et al. Detection and classification of the breast abnormalities in digital mammograms via regional convolutional neural network
CN103942577B Person identification method based on a self-built sample database and mixed features in video surveillance
CN104376326B A feature extraction method for image scene recognition
CN107085716A Cross-view gait recognition method based on multi-task generative adversarial networks
CN106951825A A face image quality assessment system and implementation method
CN110827260B Cloth defect classification method based on LBP features and a convolutional neural network
CN105139004A Facial expression recognition method based on video sequences
CN104573669A Image object detection method
CN105005765A Facial expression recognition method based on Gabor wavelets and the gray-level co-occurrence matrix
Haq et al. A hybrid approach based on deep cnn and machine learning classifiers for the tumor segmentation and classification in brain MRI
CN104834940A Medical image disease classification method based on a support vector machine (SVM)
CN106295694A A face recognition method using iteratively reweighted sparse representation classification with constraint sets
CN106529504B A bimodal video emotion recognition method based on compound spatiotemporal features
CN102156871A Image classification method based on category-correlated codebooks and a classifier voting strategy
CN114758288A Power distribution network engineering safety control detection method and device
CN101364263A Method and system for detecting skin texture in images
CN104156690B A gesture recognition method based on image spatial pyramid bags of features
CN110288028B ECG detection method, system, device, and computer-readable storage medium
CN105117707A Facial expression recognition method based on regional images
CN107220971A A pulmonary nodule feature extraction method based on convolutional neural networks and PCA
CN113095382B Interpretable tuberculosis classification network identification method based on CT images
CN110008819A A facial expression recognition method based on graph convolutional neural networks
CN107092895A A multimodal emotion recognition method based on deep belief networks
CN103186776A Human detection method based on multiple features and depth information
CN110414541B Method, apparatus, and computer-readable storage medium for identifying an object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170825