CN111931616A - Emotion recognition method and system based on mobile intelligent terminal sensor equipment - Google Patents

Emotion recognition method and system based on mobile intelligent terminal sensor equipment

Info

Publication number
CN111931616A
CN111931616A (application CN202010742205.2A)
Authority
CN
China
Prior art keywords: sensor, emotion, classifier, intelligent terminal, mobile intelligent
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010742205.2A
Other languages
Chinese (zh)
Inventor
李修建
董洛兵
衣文军
何施俊
朱炬波
朱梦均
刘吉英
董朝华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
National University of Defense Technology
Original Assignee
Xidian University
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Xidian University and National University of Defense Technology
Priority to CN202010742205.2A
Publication of CN111931616A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 - Preprocessing
    • G06F 2218/04 - Denoising
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 - Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 - Evaluating the state of mind, e.g. depression, anxiety
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7271 - Specific aspects of physiological measurement analysis
    • A61B 5/7275 - Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/2431 - Multiple classes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 - Feature extraction
    • G06F 2218/10 - Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 - Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 - Classification; Matching

Abstract

The invention provides an emotion recognition method and system based on mobile intelligent terminal sensor devices. The method collects raw sensor data under different emotion categories through the sensors on a mobile intelligent terminal as training samples and labels each sample with its real emotion category; extracts features from the preprocessed sensor data to obtain the feature vectors of the training samples under the different emotion categories and builds a training set; obtains a final emotion recognition classifier from the training set; and uses that classifier to predict, from the raw data collected in real time by each sensor on the terminal, the emotion category of the current input. The invention fully exploits the multi-dimensionality of the sensor data collected by the mobile intelligent terminal and the low participation and convenience of this collection mode to obtain good emotion recognition results under the emotion recognition classifier.

Description

Emotion recognition method and system based on mobile intelligent terminal sensor equipment
Technical Field
The invention belongs to the technical field of emotion recognition and relates to an emotion recognition method for mobile intelligent terminal devices, applicable to fields such as emotion recognition and psychological diagnosis and treatment.
Background
At present, emotion recognition is well developed, but most applications concentrate on semantic recognition, speech recognition, facial expression recognition and the like; applications on the side of mobile crowd sensing technology remain insufficient.
With the continuous development of the internet and the wide adoption of mobile intelligent terminal devices carrying various sensors, mobile sensing technology is maturing rapidly. The sensors carried on the terminals people already carry collect information about the user and the surrounding environment; the relevant data are uploaded over the network to a server backend, where they are analyzed and processed to provide reliable and convenient services. In short, mobile sensing technology lets every user perceive the surrounding information through the portable device at hand, and thereby provides better services.
The main advantage of mobile sensing based on mobile intelligent terminal devices is this: such terminals carry a wide range of sensors, such as acceleration, orientation, light, temperature and GPS, which gives them a natural edge over older equipment such as PCs and servers. Using these sensors, a terminal can perceive information about its user and the surrounding environment at any time and place, so people can obtain that information more quickly and conveniently, and the required services can be provided accordingly.
Moreover, because mobile intelligent terminal devices are ubiquitous and numerous, mobile sensing can collect large volumes of real-time, multi-dimensional data, overcoming the small amount of low-precision data that a single sensing terminal yields; this expands the dimensionality of the data, increases the data volume and enriches the training sample base available to learning algorithms.
Emotion analysis of text, facial expressions and speech is already established on the market, but there are no particularly mature research results or applications for emotion analysis of the other data forms available on mobile intelligent terminals, in particular methods and tools that derive emotion from the many kinds of data a mobile phone produces. Mobile intelligent terminals are developing rapidly and carry ever more sensors, which means ever more information about the user and the surrounding environment can be perceived through them; by sensing this information in volume and analyzing it in an integrated way, the terminal's advantages in information dimensionality and quantity can be brought to bear, yielding better emotion recognition results. In addition, compared with active-input recognition scenarios such as video, speech or text, users need emotion diagnosis in a low-participation, non-intrusive scenario, and the collection of information data on the mobile intelligent terminal suits exactly such a non-intrusive setting.
Disclosure of Invention
The invention provides an emotion recognition method and system based on mobile intelligent terminal sensor devices, which collect information data from multi-dimensional sensor devices and perform real-time emotion recognition of the user in a low-participation, non-intrusive scenario.
To this end, the invention provides an emotion recognition method based on mobile intelligent terminal sensor devices, comprising the following steps:
s1, defining n emotion categories, collecting original sensor data under different emotion categories through sensors on a mobile intelligent terminal to serve as training samples, and marking real emotion categories corresponding to the training samples;
s2, respectively preprocessing the original sensor data acquired by each sensor; respectively extracting features from the preprocessed sensor data to obtain feature vectors corresponding to training samples under different emotion categories, and constructing a training set;
s3, obtaining a final emotion recognition classifier based on the training set;
and S4, predicting the original sensor data acquired by each sensor on the mobile intelligent terminal in real time by using the emotion recognition classifier obtained in the step S3 to obtain an emotion prediction result, namely classifying the currently input sensor data into the emotion category with the highest probability.
The mobile intelligent terminal comprises but is not limited to a smart phone, a tablet or a smart watch.
The sensor carried on the mobile intelligent terminal comprises but is not limited to an optical sensor, a GPS sensor, a gravity sensor, an acceleration sensor, a direction sensor, a distance sensor, a temperature sensor, a light sensor, a magnetic sensor, an acoustic sensor, a touch sensor and a network sensor.
In the invention, preprocessing the raw sensor data collected by each sensor in step S2 comprises: performing illegal-data filtering and median filtering separately on the raw data of each sensor. Illegal-data filtering rejects null values and readings that clearly exceed the sensor's value range. In general, data collected from hardware needs a filtering operation; the main options are mean filtering and median filtering. Mean filtering computes a neighborhood average and therefore blurs the edge and feature information in the waveform; median filtering instead preserves the characteristic waveform, keeping its peaks and troughs prominent.
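By way of illustration only, this preprocessing step can be sketched in a few lines of Python; the function name, the legal value range and the kernel size of 5 are assumptions of this sketch, not details taken from the patent:

```python
import numpy as np
from scipy.signal import medfilt

def preprocess(samples, value_min, value_max, kernel_size=5):
    """Illegal-data filtering followed by median filtering (a sketch)."""
    x = np.asarray(samples, dtype=float)
    # Illegal-data filtering: reject null (NaN) readings and readings
    # clearly outside the sensor's value range.
    x = x[~np.isnan(x)]
    x = x[(x >= value_min) & (x <= value_max)]
    # Median filtering: unlike mean filtering, it does not blur edge and
    # feature information, so the waveform's peaks and troughs survive.
    return medfilt(x, kernel_size=kernel_size)

# Accelerometer magnitudes with a null value and an out-of-range spike.
raw = [9.8, 9.9, float("nan"), 10.1, 500.0, 9.7, 9.8, 10.0]
print(preprocess(raw, value_min=-80.0, value_max=80.0))
```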
The sensors in S2 of the present invention include motion sensors, light sensors, GPS sensors or/and network sensors. Features are extracted from each sensor's data as follows:
The acceleration sensor and the gyroscope are motion sensors. Motion sensor data fall into three dimensions x, y and z, corresponding to real three-dimensional space. Each dimension is drawn as a curve waveform; its peak and trough values are counted as the severe-fluctuation values of the motion sensor, the number of peaks and troughs as the fluctuation count, and the time they span as the fluctuation time, yielding the feature vector corresponding to the motion sensor data.
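A minimal Python sketch of this per-axis extraction is given below; the sampling rate and the peak-prominence threshold are assumed values used only for the illustration:

```python
import numpy as np
from scipy.signal import find_peaks

def motion_axis_features(axis, fs=100.0, prominence=1.0):
    """Features in the spirit of the description: peak/trough values
    (severe-fluctuation values), their count (fluctuation count) and
    the time they span (fluctuation time)."""
    axis = np.asarray(axis, dtype=float)
    peaks, _ = find_peaks(axis, prominence=prominence)
    troughs, _ = find_peaks(-axis, prominence=prominence)
    extrema = np.sort(np.concatenate([peaks, troughs]))
    count = int(extrema.size)
    span = (extrema[-1] - extrema[0]) / fs if count > 1 else 0.0
    return [
        axis[peaks].max() if peaks.size else 0.0,      # largest peak value
        axis[troughs].min() if troughs.size else 0.0,  # deepest trough value
        count,                                         # fluctuation count
        span,                                          # fluctuation time (s)
    ]

# Synthetic 3 Hz oscillation standing in for one accelerometer axis;
# concatenating the x, y and z feature lists gives the feature vector.
t = np.linspace(0.0, 2.0, 200)
print(motion_axis_features(np.sin(2 * np.pi * 3 * t)))
```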
For the light sensor, the following features are extracted from the light sensor data as the corresponding feature vector: the usage environment of the mobile intelligent terminal at different moments, derived from the light readings and divided into three states: the terminal is not in use, the terminal is used indoors, and the terminal is used outdoors.
For the GPS sensor, the information entropy of the mobile intelligent terminal's positions is extracted, with the information entropy given by

H(U) = -\sum_{i} p_i \log p_i

where p_i is the proportion of all historical records in which the terminal was at the i-th position; the information entropy H(U) is a measure of how disordered the position information is.
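The entropy feature can be illustrated with a short, self-contained Python sketch; the location labels are invented for the example, and the logarithm base is a convention:

```python
import math
from collections import Counter

def location_entropy(location_records):
    """H(U) = -sum_i p_i * log(p_i), with p_i the share of all
    historical records taken at the i-th location."""
    counts = Counter(location_records)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A terminal that never moves has entropy 0; evenly visiting many
# locations maximizes the entropy (more disordered position data).
print(location_entropy(["home"] * 8))                       # 0.0
print(location_entropy(["home", "office", "gym", "shop"]))  # ~1.386
```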
For a network sensor (such as a wireless network sensor), the network speed situation of the mobile intelligent terminal is extracted as three features: whether the network speed fluctuates strongly, whether the speed has mostly been above or below the average transmission speed over a past period (for example the past 7 days), and the network signal strength.
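These three network features could be computed as in the following sketch; the fluctuation criterion (a coefficient-of-variation threshold) is an assumption, since the patent does not state how a large fluctuation is decided:

```python
import numpy as np

def network_features(speeds, signal_strengths, cv_threshold=0.5):
    """Three features from past records (e.g. the past 7 days): strong
    speed fluctuation, speed mostly above the historical mean, and
    average signal strength."""
    speeds = np.asarray(speeds, dtype=float)
    fluctuates = speeds.std() / speeds.mean() > cv_threshold
    mostly_above_mean = (speeds > speeds.mean()).mean() > 0.5
    return [float(fluctuates), float(mostly_above_mean),
            float(np.mean(signal_strengths))]

print(network_features([12.0, 11.5, 55.0, 2.0, 13.0], [-67, -70, -65]))
```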
In the invention, S3 is implemented as follows:
S3.1, select N classifier algorithms, such as decision trees, perceptrons and support vector machines, to construct N base classifiers; the N base classifiers form the base classifier candidate pool. For each selected classifier algorithm, Bagging is used to increase the diversity among the classifiers, and within every Bagging base model 10-fold cross-validation is used to reduce overfitting and avoid the contingency of a particular data split.
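A sketch of such a candidate pool with scikit-learn (version 1.2+ assumed for the `estimator` parameter; the synthetic data stand in for the real feature vectors and emotion labels) might read:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in training set: feature vectors and emotion labels (assumed).
X_train, y_train = make_classification(n_samples=300, n_features=12,
                                       n_informative=6, n_classes=3,
                                       random_state=0)

# One Bagging ensemble per algorithm named in the description
# (decision tree, perceptron, support vector machine).
candidate_pool = [
    BaggingClassifier(estimator=DecisionTreeClassifier(), n_estimators=10),
    BaggingClassifier(estimator=Perceptron(), n_estimators=10),
    BaggingClassifier(estimator=SVC(probability=True), n_estimators=10),
]

# 10-fold cross-validation guards against a lucky or unlucky data split.
for clf in candidate_pool:
    scores = cross_val_score(clf, X_train, y_train, cv=10)
    print(type(clf.estimator).__name__, scores.mean())
    clf.fit(X_train, y_train)
```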
S3.2, select K base classifiers from the base classifier candidate pool.
S3.2.1, combine the K-means clustering algorithm with the silhouette coefficient to determine the optimal number K of base classifiers.
Set the number of centroids K, trying several values: K = 2, 3, 4, 5, 6, 7, 8, 9. Preprocess the raw sensor data collected by each sensor carried on the mobile intelligent terminal device to obtain an original data set, then run the K-means clustering algorithm to partition the data set into K clusters.
For the different values of K, the silhouette coefficient

s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}

is used to evaluate the clustering quality, where, for a clustered data point i, a(i) denotes the average distance from i to all the other data points in its own cluster, and b(i) denotes the average distance from i to all the points of the nearest other cluster.
Compute the silhouette coefficients s(i) for the different values of K and take the K with the largest silhouette coefficient as the optimal number K of base classifiers.
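The following scikit-learn sketch illustrates this selection of K; the synthetic data again stand in for the preprocessed sensor data set:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.metrics import silhouette_score

# Stand-in for the preprocessed multi-sensor data set (an assumption).
X, _ = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=0)

# Try K = 2..9 as in the description; keep the K whose clustering
# attains the largest mean silhouette coefficient s(i).
best_k, best_s = None, -1.0
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_s:
        best_k, best_s = k, score
print("optimal number of base classifiers K =", best_k)
```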
S3.2.2, determine the dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool.
Input the feature vector of every training sample in the training set into each base classifier in the candidate pool, obtain the emotion category each base classifier predicts, compare the predicted category of each training sample with its real emotion category, and record whether the prediction is correct.
The dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool is computed as

d_{ij} = \frac{A_{01} + A_{10}}{A_{00} + A_{01} + A_{10} + A_{11}}

where A_00 denotes the proportion of training-set samples that both d_i and d_j predict incorrectly; A_01 the proportion that d_i predicts incorrectly but d_j predicts correctly; A_10 the proportion that d_i predicts correctly but d_j predicts incorrectly; and A_11 the proportion that both d_i and d_j predict correctly.
S3.2.3, select from the base classifier candidate pool the K base classifiers with the largest average dissimilarity measure θ_i, where θ_i denotes the average of the dissimilarity measures between base classifier d_i and all the other base classifiers in the pool:

\theta_i = \frac{1}{N-1} \sum_{j=1, j \neq i}^{N} d_{ij}

where N is the number of base classifiers in the base classifier candidate pool.
And S3.3, fusing the selected K base classifiers by adopting an integration strategy, and obtaining the final emotion recognition classifier after fusion.
S3.3.1, constructing a confusion matrix.
Extract the feature vectors of a portion of the training samples in the training set to build a test data set, and input the test data into each of the K base classifiers selected in S3.2; each outputs the predicted emotion category of every sample. From these prediction results the confusion matrix of the k-th classifier is built:

CM^k = \begin{pmatrix} cm^k_{11} & \cdots & cm^k_{1n} \\ \vdots & \ddots & \vdots \\ cm^k_{n1} & \cdots & cm^k_{nn} \end{pmatrix}

where cm^k_{ij}, the element in row i and column j of the k-th classifier's confusion matrix, is the probability that a training sample whose true class is the i-th emotion category is recognized as the j-th emotion category by the k-th classifier; the diagonal elements give the probability of predicting each category correctly, and the off-diagonal elements the probabilities of confusing it with another.
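A row-normalized confusion matrix of this kind can be built as follows (a sketch; the raw `confusion_matrix` counts are divided by the row sums to turn them into the per-class probabilities described above):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def class_confusion(y_true, y_pred, n_classes):
    """CM^k with entry (i, j) estimating the probability that a sample
    of true emotion class i is predicted as class j; the diagonal holds
    the per-class accuracies."""
    cm = confusion_matrix(y_true, y_pred,
                          labels=range(n_classes)).astype(float)
    row_sums = cm.sum(axis=1, keepdims=True)
    return np.divide(cm, row_sums, out=np.zeros_like(cm),
                     where=row_sums > 0)

print(class_confusion([0, 0, 1, 2, 2], [0, 1, 1, 2, 0], n_classes=3))
```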
S3.3.2, calculating the inverse reliability of the current input test sample.
Take the feature vector of a sample in the test data set as the input test sample X and input it into the K base classifiers selected in S3.2. Each selected base classifier outputs its classification confidence P_k(C_i | X) as its class-conditional probability, i.e. the probability that the input sample is recognized as the i-th emotion category by the k-th base classifier. On this basis, the inverse reliability of the k-th classifier for its i-th and j-th emotion categories is defined (its defining formula is given only as an image in the original).
S3.3.3, compute the fusion weight w_k of the k-th of the K base classifiers selected in S3.2 (its defining formula is given only as an image in the original).
S3.3.4, fuse the K base classifiers selected in S3.2 according to their fusion weights to obtain the final emotion recognition classifier, whose confidence P(C_i | X) is the weighted combination of the base classifiers' class-conditional confidences:

P(C_i \mid X) = \sum_{k=1}^{K} w_k P_k(C_i \mid X)
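Taking the weighted combination above at face value, the fused prediction can be sketched as follows; the weights w_k are assumed to be given, since their defining formula is not legible in the source, and each base classifier must expose predict_proba:

```python
import numpy as np

def fused_predict(classifiers, weights, X):
    """Fused confidence P(C_i|X) = sum_k w_k * P_k(C_i|X); each sample
    is assigned to the emotion class with the highest fused probability."""
    probs = np.stack([clf.predict_proba(X) for clf in classifiers])  # (K, m, n)
    fused = np.tensordot(np.asarray(weights, dtype=float), probs, axes=1)
    return fused.argmax(axis=1), fused
```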
S3.3.5, compute, using the fusion weights of the different base classifiers, the probability that the input test sample belongs to each emotion category, classify the sample into the category with the highest probability, and take the proportion of correctly classified test samples as the approximate accuracy of the emotion recognition classifier.
In S4 of the invention, the raw data collected in real time by each sensor on the mobile intelligent terminal are turned into the corresponding feature vectors by the preprocessing and feature extraction of S2 and input into the emotion recognition classifier obtained in S3; the classifier computes the probability that the current input belongs to each emotion category and classifies the input into the category with the highest probability.
The invention also provides an emotion recognition system based on the mobile intelligent terminal sensor equipment, which comprises the following components:
the data acquisition module is used for acquiring the original sensor data of each sensor on the user mobile intelligent terminal;
the data processing module is used for performing preprocessing, such as illegal-data filtering and median filtering, on the raw sensor data collected by each sensor;
the characteristic engineering module is used for extracting corresponding characteristic vectors from the preprocessed sensor data respectively;
and the emotion recognition module is used for training the emotion recognition classifier, predicting the original sensor data acquired by each sensor on the user mobile intelligent terminal in real time by using the emotion recognition classifier obtained by training, and obtaining an emotion prediction result.
Wherein, the data acquisition module includes:
the registration module is used for registering the management objects of the sensor equipment on the user mobile intelligent terminal;
the monitoring module is used for monitoring the value change of each sensor on the user mobile intelligent terminal;
and the cancellation module is used for unregistering the sensor device listeners after data collection is finished.
The emotion recognition module includes:
the base classifier pool generating module is used for constructing a base classifier candidate pool;
the base classifier selection module is used for selecting K base classifiers from a base classifier standby candidate pool;
and the base classifier integration module is used for fusing the selected K base classifiers by adopting an integration strategy, and the fused result is the final emotion recognition classifier.
The invention also provides a smart device comprising a memory and a processor, wherein the memory stores a computer program and the processor, when executing the computer program, implements the steps of the above emotion recognition method based on mobile intelligent terminal sensor devices.
A readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned emotion recognition method based on a mobile intelligent terminal sensor device.
The emotion recognition method and system based on mobile intelligent terminal sensor devices provided by the invention work as follows: sensor information data are collected by the sensor devices on the mobile intelligent terminal and processed, including the removal of illegal data and median filtering; feature extraction turns the sensor data into the relevant feature vectors; the feature vectors are input into an emotion recognition classifier integrated from several classifiers for prediction. Inside the emotion recognition classifier, a diversity strategy builds the base classifier candidate pool, the number of base classifiers is chosen by combining the K-means clustering algorithm with the silhouette coefficient, the particular base classifiers are chosen by pairwise dissimilarity measurement, and the weight of each base classifier in the fusion is derived from the confusion matrix and class-conditional probabilities, producing the final emotion prediction. The method fully exploits the multi-dimensionality of the sensor data collected by the mobile intelligent terminal and the low participation and convenience of this collection mode, and obtains good emotion recognition results under this classifier.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of an implementation of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
In addition, the technical solutions of the individual embodiments may be combined with each other, but only on the basis that those skilled in the art can realize the combination; where a combination is contradictory or cannot be realized, it should be considered not to exist and falls outside the protection scope of the present invention.
The invention provides an emotion recognition method based on mobile intelligent terminal sensor equipment, which comprises the following steps:
s1, defining n emotion categories, collecting original sensor data under different emotion categories through sensors on a mobile intelligent terminal to serve as training samples, and marking real emotion categories corresponding to the training samples;
s2, respectively preprocessing the original sensor data acquired by each sensor; respectively extracting features from the preprocessed sensor data to obtain feature vectors corresponding to training samples under different emotion categories, and constructing a training set;
s3, obtaining a final emotion recognition classifier based on the training set;
and S4, predicting the original sensor data acquired by each sensor on the mobile intelligent terminal in real time by using the emotion recognition classifier obtained in the step S3 to obtain an emotion prediction result, namely classifying the currently input sensor data into the emotion category with the highest probability.
In the first embodiment, the adopted mobile intelligent terminal is a smart phone, and the sensor device on the smart phone is used for collecting raw sensor data.
Referring to fig. 1, in a user mobile intelligent terminal, an embodiment of the present invention provides a data acquisition method, including the following steps:
Step 1, receive an emotion recognition task from the user and start the data collection service.
Step 2, obtain the sensor management objects corresponding to the data collection.
Step 3, register listeners for the sensor devices of the smartphone, start the listening service, monitor each sensor's data in real time, record the real-time changes, and obtain all the sensor data needed in the time period.
Step 4, send the sensor information data obtained by listening to the server side, and unregister the sensor listeners after data collection is finished.
Step 5, receive the emotion recognition result from the server side and display it to the user.
This embodiment of the invention realizes the sensor data collection function on the user's smartphone, interacts with the server side, sends the sensor data to the server and receives the emotion recognition result from it.
In a second embodiment, at a server, an embodiment of the present invention provides an emotion recognition method, including the following steps:
Step 1, perform data processing on the raw sensor data collected in real time from each sensor on the user's smartphone, including cleaning and preprocessing.
Step 11, illegal-data filtering: a checking function in the server backend filters out illegal data such as null values and readings that clearly exceed the sensor's value range, and removes them.
Step 12, median filtering. In general, data collected from hardware needs a filtering operation; the main options are mean filtering and median filtering. Mean filtering computes a neighborhood average and therefore blurs the edge and feature information in the waveform; median filtering instead preserves the characteristic waveform, keeping its peaks and troughs prominent.
Step 2, after preprocessing, perform feature engineering on the preprocessed data: extract the feature vectors from the preprocessed sensor data, store them in the server database, and use them as the input of the emotion recognition classifier.
Step 3, input the feature vectors and call the emotion recognition classifier for predictive analysis to obtain the emotion prediction result, i.e. the currently input sensor data are classified into the emotion category with the highest probability, and store the prediction result in the database.
In step 2 of embodiment 2, the sensors include motion sensors, light sensors, GPS sensors or/and network sensors. Features are extracted from each sensor's data as follows:
The acceleration sensor and the gyroscope are motion sensors. Motion sensor data fall into three dimensions x, y and z, corresponding to real three-dimensional space. Each dimension is drawn as a curve waveform; its peak and trough values are counted as the severe-fluctuation values of the motion sensor, the number of peaks and troughs as the fluctuation count, and the time they span as the fluctuation time, yielding the feature vector corresponding to the motion sensor data.
For the light sensor, the following features are extracted from the light sensor data as the corresponding feature vector: the usage environment of the mobile intelligent terminal at different moments, derived from the light readings and divided into three states: the terminal is not in use, the terminal is used indoors, and the terminal is used outdoors.
For the GPS sensor, the information entropy of the mobile intelligent terminal's positions is extracted, with the information entropy given by

H(U) = -\sum_{i} p_i \log p_i

where p_i is the proportion of all historical records in which the terminal was at the i-th position. The information entropy H(U) is a measure of how disordered the position information is: the more positions the terminal visits and the more evenly they occur, the larger the entropy grows and the more chaotic the position data are.
For a network sensor (such as a wireless network sensor), the network speed situation of the mobile intelligent terminal is extracted as three features: whether the network speed fluctuates strongly, whether the speed has mostly been above or below the average transmission speed over a past period (for example the past 7 days), and the network signal strength.
Finally, the extracted feature vectors are stored in the server's MySQL database and used as the input data of the subsequent emotion recognition classifier.
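Storing a feature vector in MySQL can be sketched as below; the table layout and connection details are invented for the illustration (the patent only states that a MySQL database is used), and the mysql-connector-python package is assumed:

```python
import json
import mysql.connector  # assumes the mysql-connector-python package

conn = mysql.connector.connect(host="localhost", user="app",
                               password="secret", database="emotion")
cur = conn.cursor()
# A hypothetical schema: one JSON-encoded feature vector per sensor.
cur.execute("CREATE TABLE IF NOT EXISTS features ("
            "id INT AUTO_INCREMENT PRIMARY KEY, "
            "sensor VARCHAR(32), vector JSON)")
cur.execute("INSERT INTO features (sensor, vector) VALUES (%s, %s)",
            ("motion", json.dumps([0.93, -0.87, 4, 1.25])))
conn.commit()
conn.close()
```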
Embodiment 3: this embodiment provides a method for training an emotion recognition classifier, comprising:
S1, defining n emotion categories, collecting original sensor data under different emotion categories through sensors on a mobile intelligent terminal to serve as training samples, and marking real emotion categories corresponding to the training samples;
s2, respectively preprocessing the original sensor data acquired by each sensor; respectively extracting features from the preprocessed sensor data to obtain feature vectors corresponding to training samples under different emotion categories, and constructing a training set;
s3, obtaining a final emotion recognition classifier based on the training set;
S3.1, select N classifier algorithms to construct N base classifiers; the N base classifiers form the base classifier candidate pool.
First a diversity strategy is used to build the pool: N algorithms are selected and N base classifiers constructed from them. For a single classifier algorithm N_i, Bagging is used to increase the diversity among the classifiers, and within every Bagging base model 10-fold cross-validation is used to reduce overfitting and avoid the contingency of a particular data split.
S3.2, combine the K-means clustering algorithm with the silhouette coefficient, and select an appropriate number of base classifiers of specific types from the candidate pool by pairwise dissimilarity measurement.
S3.2.1, combine the K-means clustering algorithm with the silhouette coefficient to determine the optimal number K of base classifiers;
Set the number of centroids K, trying several values: K = 2, 3, 4, 5, 6, 7, 8, 9. Preprocess the raw sensor data collected by each sensor carried on the mobile intelligent terminal device to obtain an original data set, then run the K-means clustering algorithm to partition the data set into K clusters.
For the different values of K, the silhouette coefficient

s(i) = \frac{b(i) - a(i)}{\max\{a(i), b(i)\}}

is used to evaluate the clustering quality, where, for a clustered data point i, a(i) denotes the average distance from i to all the other data points in its own cluster, and b(i) denotes the average distance from i to all the points of the nearest other cluster.
Compute the silhouette coefficients s(i) for the different values of K and take the K with the largest silhouette coefficient as the optimal number K of base classifiers.
S3.2.2, determine the dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool.
Input the feature vector of every training sample in the training set into each base classifier in the candidate pool, obtain the emotion category each base classifier predicts, compare the predicted category of each training sample with its real emotion category, and record whether the prediction is correct.
The dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool is computed as

d_{ij} = \frac{A_{01} + A_{10}}{A_{00} + A_{01} + A_{10} + A_{11}}

where A_00 denotes the proportion of training-set samples that both d_i and d_j predict incorrectly; A_01 the proportion that d_i predicts incorrectly but d_j predicts correctly; A_10 the proportion that d_i predicts correctly but d_j predicts incorrectly; and A_11 the proportion that both d_i and d_j predict correctly.
S3.2.3, select from the base classifier candidate pool the K base classifiers with the largest average dissimilarity measure θ_i, where θ_i denotes the average of the dissimilarity measures between base classifier d_i and all the other base classifiers in the pool:

\theta_i = \frac{1}{N-1} \sum_{j=1, j \neq i}^{N} d_{ij}

where N is the number of base classifiers in the base classifier candidate pool.
And S3.3, fusing the selected K base classifiers by adopting an integration strategy, and obtaining the final emotion recognition classifier after fusion.
S3.3.1, constructing a confusion matrix;
Extract the feature vectors of a portion of the training samples in the training set to build a test data set, and input the test data into each of the K base classifiers selected in S3.2; each outputs the predicted emotion category of every sample. From these prediction results the confusion matrix of the k-th classifier is built:

CM^k = \begin{pmatrix} cm^k_{11} & \cdots & cm^k_{1n} \\ \vdots & \ddots & \vdots \\ cm^k_{n1} & \cdots & cm^k_{nn} \end{pmatrix}

where cm^k_{ij}, the element in row i and column j of the k-th classifier's confusion matrix, is the probability that a training sample whose true class is the i-th emotion category is recognized as the j-th emotion category by the k-th classifier; the diagonal elements give the probability of predicting each category correctly, and the off-diagonal elements the probabilities of confusing it with another;
s3.3.2, calculating the inverse reliability of the current input test sample;
Take the feature vector of a sample in the test data set as the input test sample X and input it into the K base classifiers selected in S3.2. Each selected base classifier outputs its classification confidence P_k(C_i | X) as its class-conditional probability, i.e. the probability that the input sample is recognized as the i-th emotion category by the k-th base classifier. On this basis, the inverse reliability of the k-th classifier for its i-th and j-th emotion categories is defined (its defining formula is given only as an image in the original).
S3.3.3, compute the fusion weight w_k of the k-th of the K base classifiers selected in S3.2 (its defining formula is given only as an image in the original).
S3.3.4, fuse the K base classifiers selected in S3.2 according to their fusion weights to obtain the final emotion recognition classifier, whose confidence P(C_i | X) is the weighted combination of the base classifiers' class-conditional confidences:

P(C_i \mid X) = \sum_{k=1}^{K} w_k P_k(C_i \mid X)
S3.3.5, compute, using the fusion weights of the different base classifiers, the probability that the input test sample belongs to each emotion category, classify the sample into the category with the highest probability, and take the proportion of correctly classified test samples as the approximate accuracy of the emotion recognition classifier.
Example four:
The emotion recognition method based on mobile intelligent terminal sensor devices turns the raw data collected in real time by each sensor on the mobile intelligent terminal into the corresponding feature vectors using the preprocessing and feature extraction methods above, and inputs them into the emotion recognition classifier; the classifier computes the probability that the current input belongs to each emotion category and classifies the input into the category with the highest probability.
Example five:
An emotion recognition system based on mobile intelligent terminal sensor devices comprises:
the data acquisition module is used for acquiring the original sensor data of each sensor on the user mobile intelligent terminal;
the data processing module is used for performing preprocessing, such as illegal-data filtering and median filtering, on the raw sensor data collected by each sensor;
the characteristic engineering module is used for extracting corresponding characteristic vectors from the preprocessed sensor data respectively;
and the emotion recognition module is used for training the emotion recognition classifier, predicting the original sensor data acquired by each sensor on the user mobile intelligent terminal in real time by using the emotion recognition classifier obtained by training, and obtaining an emotion prediction result.
Wherein, the data acquisition module includes:
the registration module is used for registering the management objects of the sensor equipment on the user mobile intelligent terminal;
the monitoring module is used for monitoring the value change of each sensor on the user mobile intelligent terminal;
and the cancellation module is used for unregistering the sensor device listeners after data collection is finished.
Wherein the emotion recognition module includes:
the base classifier pool generating module is used for constructing a base classifier candidate pool;
the base classifier selection module is used for selecting K base classifiers from a base classifier standby candidate pool;
and the base classifier integration module is used for fusing the selected K base classifiers by adopting an integration strategy, and the fused result is the final emotion recognition classifier.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (14)

1. An emotion recognition method based on a mobile intelligent terminal sensor device, characterized by comprising the following steps:
s1, defining n emotion categories, collecting original sensor data under different emotion categories through sensors on a mobile intelligent terminal to serve as training samples, and marking real emotion categories corresponding to the training samples;
s2, respectively preprocessing the original sensor data acquired by each sensor; respectively extracting features from the preprocessed sensor data to obtain feature vectors corresponding to training samples under different emotion categories, and constructing a training set;
s3, obtaining a final emotion recognition classifier based on the training set;
and S4, predicting the original sensor data acquired by each sensor on the mobile intelligent terminal in real time by using the emotion recognition classifier obtained in the step S3 to obtain an emotion prediction result, namely classifying the currently input sensor data into the emotion category with the highest probability.
2. The emotion recognition method based on a mobile intelligent terminal sensor device, according to claim 1, wherein the mobile intelligent terminal is a smart phone, a tablet or a smart watch; the sensor carried on the mobile intelligent terminal comprises an optical sensor, a GPS sensor, a gravity sensor, an acceleration sensor, a direction sensor, a distance sensor, a temperature sensor, a light sensor, a magnetic sensor, an acoustic sensor, a touch sensor or/and a network sensor.
3. The emotion recognition method based on mobile intelligent terminal sensor equipment as claimed in claim 1, wherein the preprocessing of the raw sensor data collected by each sensor in S2 includes: and respectively carrying out illegal data filtering and median filtering operation on the original sensor data acquired by each sensor.
4. The emotion recognition method based on mobile intelligent terminal sensor equipment, according to claim 1, wherein, in S2, each sensor comprises a motion sensor, a light sensor, a GPS sensor or/and a network sensor;
the method of extracting features for each sensor data separately is as follows:
the acceleration sensor and the gyroscope are motion sensors; for a motion sensor, its data fall into three dimensions x, y and z corresponding to real three-dimensional space; each dimension is drawn as a curve waveform, its peak and trough values are counted as the severe-fluctuation values of the motion sensor, the number of peaks and troughs as the fluctuation count, and the time they occupy as the fluctuation time, yielding the feature vector corresponding to the motion sensor data;
for the light sensor, the following features are extracted from the light sensor data as the corresponding feature vector: the usage environment of the mobile intelligent terminal at different moments, derived from the light readings and divided into the terminal not being used, the terminal being used indoors, and the terminal being used outdoors;
for the GPS sensor, the information entropy of the mobile intelligent terminal's positions is extracted, with the information entropy given by

H(U) = -\sum_{i} p_i \log p_i

where p_i is the proportion of all historical records in which the terminal was at the i-th position, and the information entropy H(U) is a measure of how disordered the position information is;
for a network sensor, the network speed situation of the mobile intelligent terminal is extracted as three features: whether the network speed fluctuates strongly, whether the speed has mostly been above or below the average transmission speed over a past period, and the network signal strength.
5. The emotion recognition method based on mobile intelligent terminal sensor equipment, according to claim 1, wherein the implementation method of S3 is as follows:
s3.1, selecting N classifier algorithms for constructing N base classifiers, wherein the N base classifiers form a classifier standby candidate pool;
s3.2, selecting K base classifiers from a base classifier standby candidate pool;
and S3.3, fusing the selected K base classifiers by adopting an integration strategy, and obtaining the final emotion recognition classifier after fusion.
6. The emotion recognition method based on mobile intelligent terminal sensor equipment as claimed in claim 5, wherein the implementation method of S3.2 is as follows:
combining the K-means clustering algorithm with the silhouette coefficient to determine the optimal number K of base classifiers;
determining the dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool;
selecting from the base classifier candidate pool the K base classifiers with the largest average dissimilarity measure θ_i, wherein θ_i denotes the average of the dissimilarity measures between base classifier d_i and all the other base classifiers in the pool.
7. The emotion recognition method based on mobile intelligent terminal sensor devices as claimed in claim 6, wherein in S3.2: the feature vector corresponding to each training sample in the training set is input into each base classifier in the base classifier candidate pool, the emotion category predicted by each base classifier is output, the predicted emotion category of each training sample is compared with its real emotion category, and it is judged whether the prediction is correct;
the dissimilarity measure d_ij between any two base classifiers d_i and d_j in the base classifier candidate pool being computed as

d_{ij} = \frac{A_{01} + A_{10}}{A_{00} + A_{01} + A_{10} + A_{11}}

where A_00 denotes the proportion of training-set samples that both d_i and d_j predict incorrectly, A_01 the proportion that d_i predicts incorrectly but d_j predicts correctly, A_10 the proportion that d_i predicts correctly but d_j predicts incorrectly, and A_11 the proportion that both d_i and d_j predict correctly.
8. The emotion recognition method based on mobile intelligent terminal sensor equipment as claimed in claim 5, wherein the implementation method of S3.3 is as follows:
s3.3.1, constructing a confusion matrix;
extracting the feature vectors of a portion of the training samples in the training set to build a test data set, inputting the test data into each of the K base classifiers selected in S3.2, each of which outputs the predicted emotion category of every sample, and building from the prediction results the confusion matrix of the k-th classifier:

CM^k = \begin{pmatrix} cm^k_{11} & \cdots & cm^k_{1n} \\ \vdots & \ddots & \vdots \\ cm^k_{n1} & \cdots & cm^k_{nn} \end{pmatrix}

where cm^k_{ij}, the element in row i and column j of the k-th classifier's confusion matrix, is the probability that a training sample whose true class is the i-th emotion category is recognized as the j-th emotion category by the k-th classifier; the diagonal elements give the probability of predicting each category correctly, and the off-diagonal elements the probabilities of confusing it with another;
s3.3.2, calculating the inverse reliability of the current input test sample;
taking the feature vector of a sample in the test data set as the input test sample X and inputting it into the K base classifiers selected in S3.2, each of which outputs its classification confidence P_k(C_i | X) as its class-conditional probability, i.e. the probability that the input sample is recognized as the i-th emotion category by the k-th base classifier, and defining on this basis the inverse reliability of the k-th classifier for its i-th and j-th emotion categories (its defining formula is given only as an image in the original);
s3.3.3, calculating the fusion weight w_k of the k-th of the K base classifiers selected in S3.2 (its defining formula is given only as an image in the original);
s3.3.4, fusing the K base classifiers selected in S3.2 according to their fusion weights to obtain the final emotion recognition classifier, whose confidence P(C_i | X) is the weighted combination of the base classifiers' class-conditional confidences:

P(C_i \mid X) = \sum_{k=1}^{K} w_k P_k(C_i \mid X)
s3.3.5, computing, using the fusion weights of the different base classifiers, the probability that the input test sample belongs to each emotion category, classifying the sample into the category with the highest probability, and taking the proportion of correctly classified test samples as the approximate accuracy of the final emotion recognition classifier.
9. The emotion recognition method based on mobile intelligent terminal sensor devices as claimed in claim 8, wherein in S4, the raw data collected in real time by each sensor on the mobile intelligent terminal are turned into the corresponding feature vectors by the preprocessing and feature extraction of S2 and input into the emotion recognition classifier obtained in S3, which computes the probability that the current input belongs to each emotion category and classifies the input into the category with the highest probability.
10. An emotion recognition system based on mobile intelligent terminal sensor devices, characterized by comprising:
the data acquisition module is used for acquiring the original sensor data of each sensor on the user mobile intelligent terminal;
the data processing module, used for performing preprocessing, such as illegal-data filtering and median filtering, on the raw sensor data collected by each sensor;
the characteristic engineering module is used for extracting corresponding characteristic vectors from the preprocessed sensor data respectively;
and the emotion recognition module is used for training the emotion recognition classifier, predicting the original sensor data acquired by each sensor on the user mobile intelligent terminal in real time by using the emotion recognition classifier obtained by training, and obtaining an emotion prediction result.
11. The emotion recognition system based on mobile intelligent terminal sensor equipment of claim 10, wherein the data acquisition module comprises:
the registration module is used for registering the management object of the sensor devices on the user's mobile intelligent terminal;
the monitoring module is used for monitoring value changes of each sensor on the user's mobile intelligent terminal;
and the cancellation module is used for unregistering the sensor device listeners after data acquisition is finished.
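On Android, this register/monitor/unregister lifecycle corresponds to SensorManager.registerListener and unregisterListener. Below is a platform-agnostic sketch of the same three responsibilities; the SensorAcquisition class, the sensor_bus event source, and its subscribe/unsubscribe methods are all invented for illustration:

```python
class SensorAcquisition:
    """Hypothetical data acquisition module: register, monitor, unregister."""

    def __init__(self, sensor_bus):
        # Assumed event source that invokes callback(name, value, timestamp).
        self.sensor_bus = sensor_bus
        self.buffers = {}

    def register(self, sensor_names):
        # Registration module: attach a listener per sensor.
        for name in sensor_names:
            self.buffers[name] = []
            self.sensor_bus.subscribe(name, self._on_change)

    def _on_change(self, name, value, timestamp):
        # Monitoring module: record every value change.
        self.buffers[name].append((timestamp, value))

    def unregister(self):
        # Cancellation module: detach listeners once acquisition is done.
        for name in self.buffers:
            self.sensor_bus.unsubscribe(name, self._on_change)
```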
12. The emotion recognition system based on mobile intelligent terminal sensor equipment of claim 10, wherein the emotion recognition module comprises:
the base classifier pool generation module is used for constructing a candidate pool of base classifiers;
the base classifier selection module is used for selecting K base classifiers from the base classifier candidate pool;
and the base classifier integration module is used for fusing the selected K base classifiers with an ensemble strategy, the fused result being the final emotion recognition classifier.
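One common way to realize the selection module, sketched under the assumption of scikit-learn-style candidates ranked by held-out accuracy (the claim itself does not fix a selection criterion):

```python
import numpy as np

def select_top_k(candidates, X_val, y_val, k):
    """Keep the K candidate classifiers with the highest validation accuracy."""
    y_val = np.asarray(y_val)
    scored = sorted(candidates,
                    key=lambda clf: float((clf.predict(X_val) == y_val).mean()),
                    reverse=True)
    return scored[:k]
```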
13. A smart device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the emotion recognition method based on mobile intelligent terminal sensor equipment according to any one of claims 1 to 9.
14. A readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the emotion recognition method based on mobile intelligent terminal sensor equipment according to any one of claims 1 to 9.
CN202010742205.2A 2020-07-29 2020-07-29 Emotion recognition method and system based on mobile intelligent terminal sensor equipment Pending CN111931616A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010742205.2A CN111931616A (en) 2020-07-29 2020-07-29 Emotion recognition method and system based on mobile intelligent terminal sensor equipment


Publications (1)

Publication Number Publication Date
CN111931616A true CN111931616A (en) 2020-11-13

Family

ID=73315543


Country Status (1)

Country Link
CN (1) CN111931616A (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288048A (en) * 2018-02-09 2018-07-17 中国矿业大学 Based on the facial emotions identification feature selection method for improving brainstorming optimization algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
何施俊 (He Shijun): "Design and Implementation of a Machine Learning-Based Mobile Emotion Analysis System", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112515675A (en) * 2020-12-14 2021-03-19 西安理工大学 Emotion analysis method based on intelligent wearable device
CN112515675B (en) * 2020-12-14 2022-05-27 西安理工大学 Emotion analysis method based on intelligent wearable device
CN112686048A (en) * 2020-12-23 2021-04-20 沈阳新松机器人自动化股份有限公司 Emotion recognition method and device based on fusion of voice, semantics and facial expressions
CN114241603A (en) * 2021-12-17 2022-03-25 中南民族大学 Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment
CN114241603B (en) * 2021-12-17 2022-08-26 中南民族大学 Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment
CN114022909A (en) * 2022-01-07 2022-02-08 首都师范大学 Emotion recognition method and system based on sensor data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20201113