CN112949575A - Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium - Google Patents

Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium Download PDF

Info

Publication number
CN112949575A
Authority
CN
China
Prior art keywords
sliding
screen
historical
user
screen sliding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110335765.0A
Other languages
Chinese (zh)
Inventor
詹志丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCB Finetech Co Ltd filed Critical CCB Finetech Co Ltd
Priority to CN202110335765.0A priority Critical patent/CN112949575A/en
Publication of CN112949575A publication Critical patent/CN112949575A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method, a device, equipment and a medium for generating an emotion recognition model and recognizing emotion, and relates to the field of big data. The method comprises the following steps: acquiring historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user; extracting the characteristics of the historical operation data to obtain characteristic parameter values of the historical operation data; and taking the characteristic parameter value of the historical operation data as the input of the standard recognition model, taking the historical emotion category of the historical screen sliding user as the output of the standard recognition model, and training the standard recognition model to obtain the emotion recognition model. By the technical scheme, the emotion recognition model can be obtained through training, the emotion category of the screen sliding user can be accurately recognized through the emotion recognition model, and therefore personalized service is provided for the user according to the recognized emotion of the user, and user experience is improved.

Description

Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium
Technical Field
The embodiment of the invention relates to the field of big data, in particular to a method, a device, equipment and a medium for generating an emotion recognition model and recognizing emotion.
Background
Nowadays, terminal devices such as smartphones and tablet computers play an increasingly important role in people's daily lives, and the user experience they provide has become a key factor by which users judge these devices.
At present, some terminal devices have been developed that recognize the emotional state of the user and, based on the recognized emotion, provide personalized services to the user; whether reasonable personalized services can be provided depends mainly on the accuracy of the emotion recognition. In the common method of recognizing emotion from facial expressions, recognition accuracy varies with factors such as ambient light and the relative position between the user's face and the terminal device; that is, the method cannot guarantee accurate recognition of the user's facial expression, so emotion categories recognized from facial expressions are not accurate. In addition, methods for determining the emotion category of the user such as speech emotion recognition, physiological signal recognition and text emotion recognition also exist in the prior art.
However, in general, due to the limitations of the usage scenarios, conventional emotion recognition has limited accuracy in recognizing the user's emotion and does not truly achieve the purpose of improving the user experience. Therefore, how to accurately recognize the user's emotion has become a technical problem to be solved urgently.
Disclosure of Invention
The invention provides a method, a device, equipment and a medium for generating an emotion recognition model and recognizing emotion, which are used for accurately recognizing user emotion, providing personalized service for a user according to the recognized user emotion and improving user experience.
In a first aspect, an embodiment of the present invention provides a method for generating an emotion recognition model, where the method includes:
acquiring historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user;
performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data;
and taking the characteristic parameter value of the historical operation data as the input of a standard recognition model, taking the historical emotion category of the historical screen sliding user as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model.
In a second aspect, an embodiment of the present invention further provides an emotion recognition method, where the method includes:
acquiring current operation data of a current screen sliding user in screen sliding operation;
inputting the current operation data into an emotion recognition model generated by the emotion recognition model generation method according to any embodiment of the present invention;
and acquiring the emotion of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
in a third aspect, an embodiment of the present invention further provides an apparatus for generating an emotion recognition model, where the apparatus includes:
the historical data acquisition module is used for acquiring historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user;
the characteristic extraction module is used for extracting the characteristics of the historical operation data to obtain the characteristic parameter values of the historical operation data;
and the model training module is used for taking the characteristic parameter values of the historical operation data as the input of a standard recognition model, taking the historical emotion categories of the historical screen sliding users as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model.
In a fourth aspect, an embodiment of the present invention further provides an emotion recognition apparatus, where the apparatus includes:
the current data acquisition module is used for acquiring current operation data of a current screen sliding user in screen sliding operation;
the emotion recognition module is used for inputting the current operation data into an emotion recognition model generated by the emotion recognition model generation method in any embodiment of the invention;
and the emotion acquisition module is used for acquiring the emotion of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for generating an emotion recognition model according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the emotion recognition method according to any embodiment of the present invention.
In a seventh aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the method for generating the emotion recognition model according to any embodiment of the present invention.
In an eighth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the emotion recognition method according to any embodiment of the present invention.
The embodiment of the invention provides a method, a device, equipment and a storage medium for generating an emotion recognition model and recognizing emotion. Historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user are obtained; performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data; and taking the characteristic parameter value of the historical operation data as the input of a standard recognition model, taking the historical emotion category of the historical screen sliding user as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model. According to the embodiment of the invention, the emotion type of the screen sliding user can be accurately identified through the emotion identification model, so that personalized service is provided for the user according to the identified emotion of the user, the user experience is improved, and a new thought is provided for emotion identification.
Drawings
Fig. 1 is a flowchart of a method for generating an emotion recognition model according to an embodiment of the present invention;
fig. 2 is a flowchart of an emotion recognition method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an emotion recognition model generation device provided in the third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an emotion recognition apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for generating an emotion recognition model according to an embodiment of the present invention, where the embodiment is applicable to a case of generating an emotion recognition model for recognizing an emotion category of a screen-sliding user in a screen-sliding operation, and the method may be implemented by an emotion recognition model generation apparatus according to an embodiment of the present invention, which may be implemented in software and/or hardware, and the apparatus may be configured in an electronic device, such as a computer.
Specifically, as shown in fig. 1, the method for generating an emotion recognition model provided in the embodiment of the present invention may include the following steps:
s110, historical operation data of the historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user are obtained.
Wherein the historical screen sliding user is a user who generates screen sliding behavior on a display screen of the electronic equipment in historical time. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, the input device may be a touch layer or a touch pad covered on the display screen, and the electronic device may be a mobile phone or a tablet computer. The historical operation data refers to data generated when a historical screen sliding user performs screen sliding operation.
Emotion is a person's reaction to stimulation by external things and is a broad and complex physiological and psychological state. A person's emotional states are very diverse and were roughly classified by the psychologist Ekman as happiness, anger, fear, disgust and sadness. In fact, there are many other emotional states, such as shame, embarrassment, disappointment, anxiety and the like. A person's emotional state often reflects his or her attitude toward and view of external things, and emotion recognition can help improve the safety of equipment use and analyze the underlying emotional factors of daily behavior. For example, in the rehabilitation field, it can help doctors diagnose and prevent problems such as depression and post-traumatic stress disorder. At the same time, medical staff can provide better care according to the patient's emotional response, thereby assisting the patient's recovery. In transportation, emotion detection can be used to analyze a driver's mental state, such as whether the driver is fatigued, how alert he or she is, and whether he or she is nervous or anxious, so as to ensure driving safety. In education, students' strengths can be developed according to their emotional reactions to different subjects, and more targeted guidance can be given. At the same time, when a student's cognitive load is heavy and fatigue is shown, the teacher is prompted to liven up the classroom or allow a short rest, ensuring classroom efficiency. Accurate recognition of emotion can improve quality of life in many respects.
The historical emotion category is the emotion category corresponding to the historical screen sliding user, obtained according to a preset emotion classification standard. Specifically, the obtained emotion categories may differ under different classification standards. For example, traditional Chinese culture holds that people have seven emotions, namely joy, anger, worry, thought, sadness, fear and fright; in Buddhist terminology, the emotions are divided into happiness, anger, worry, fear, love, loathing and desire. The present application does not limit the specific emotion classification standard, which can be adjusted according to the actual situation.
In order to train the emotion recognition model, historical operation data of historical screen sliding users in screen sliding operation in a historical record and corresponding historical emotion categories of the historical screen sliding users are acquired.
In order to generate an accurate emotion recognition model, accurate historical operation data is first required, and in an optional implementation manner of this embodiment, obtaining historical operation data of a historical screen sliding user in a screen sliding operation includes: acquiring identity information of a historical screen sliding user; and acquiring a screen sliding signal and a physiological signal of a historical screen sliding user in screen sliding operation. Wherein the identity information of the historical slide-screen user comprises at least one of the following items: age, gender, occupation, academic history, and income of the historical screen-sliding user.
The screen sliding actions involved in the screen sliding operation data can reflect differences between emotions to a certain extent, and physiological signals can reflect a person's emotion while being less susceptible to human factors. Therefore, screen sliding signals and physiological signals of historical screen sliding users in screen sliding operation can be acquired and used for training the emotion recognition model. Optionally, acquiring the screen sliding signal and the physiological signal of a historical screen sliding user in the screen sliding operation includes: acquiring the sliding length, sliding speed, sliding angle, sliding pressure and interval duration of the historical screen sliding user in the screen sliding operation as the screen sliding signal; and acquiring the electroencephalogram signal, heart rate, blood pressure and pupil response data from the eye movement track of the historical screen sliding user in the screen sliding operation as the physiological signal.
For example, physiological signals of a historical screen sliding user in the screen sliding operation can be collected through a wearable device. Specifically, physiological signals such as electroencephalogram signals, heart rate and blood pressure can be acquired using physiological-signal acquisition electrodes, a physiological-signal amplification device and an analog-to-digital conversion device. However, using physiological signals for emotion analysis in practical applications faces many challenges. First, the acquisition process is difficult: obtaining stable physiological signals places certain requirements on the acquisition equipment, the subject and the external environment, and is easily affected by external noise. Second, physiological signals are weak and have strong background noise, so effective physiological signals can only be obtained after denoising. Meanwhile, in the classification work, certain processing techniques are required to extract features from a large number of physiological signal samples in order to establish the correspondence between physiological signals and emotional states.
In this embodiment, the screen sliding signal of the history screen sliding user in the screen sliding operation may also be obtained as follows:
firstly, coordinate values, pressure values, angle values and time of track points of a sliding screen of a historical sliding screen user in the sliding screen operation are obtained. Specifically, a coordinate system can be established on the screen, and the coordinate value of each track point is marked. And recording the coordinate values, the angle values, the pressure values and the time of the starting point, the passing point and the ending point of the screen sliding operation during each screen sliding operation. Specifically, the sliding length refers to the length of a track formed from a starting point to an end point in one sliding operation, that is, the sum of distances between track points generated when a finger or other touch object touches the screen and leaves the screen. The slip speed refers to a ratio of a slip length of one slip operation to a slip duration. The one-time sliding duration refers to the time length from the time when the touch object touches the screen to the time when the touch object leaves the screen. The angle value refers to the size of the angle between the touch object and the screen when the touch object slides to each track point in the sliding operation. The pressure value refers to the pressure applied to each trace point in the sliding operation. The interval time refers to the interval time between two adjacent sliding operations.
Secondly, the sliding length of the screen sliding track is obtained from the coordinate values of the screen sliding track points of the historical screen sliding user in the screen sliding operation. Because the coordinate value of each track point is recorded in each screen sliding operation, the distances between adjacent track points can be calculated from the recorded coordinate values and corresponding times and summed to obtain the sliding length. To avoid the influence of discontinuous screen sliding on data acquisition, a continuous-sliding threshold time can be set, and the screen sliding track within this preset threshold time is treated as one screen sliding track; the distances between all adjacent pairs of track points from the start point to the end point of that operation are added to obtain a total distance, which is taken as the sliding length of that screen sliding operation. It is understood that the continuous-sliding threshold time is set to a short value, for example 1 s, although the present application is not limited thereto and it can be set according to the actual situation.
The duration of the screen sliding operation is obtained from the times of the screen sliding track points of the historical screen sliding user in the screen sliding operation. Specifically, the difference between the end time and the start time of each screen sliding operation can be calculated to obtain the sliding duration of that operation. Thirdly, the sliding speed of the screen sliding operation is obtained from the sliding length and the duration of the screen sliding operation of the historical screen sliding user; and finally, the interval time between two adjacent screen sliding operations is obtained from the times of the screen sliding track points of the historical screen sliding user in the screen sliding operation.
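The patent text gives no code; the following is a minimal sketch of how the per-swipe sliding length, duration, speed and interval described above could be computed from recorded track points. The data layout (a list of (x, y, t) tuples per swipe) and all names are illustrative assumptions, not part of the patent.

```python
import math

def swipe_metrics(track_points, next_swipe_start_time=None):
    """Compute basic swipe quantities from one screen sliding operation.

    track_points: list of (x, y, t) tuples ordered by time, with t in seconds.
    next_swipe_start_time: start time of the following swipe, if known,
    used for the interval between two adjacent swipes.
    """
    # Sliding length: sum of distances between adjacent track points.
    length = sum(
        math.dist(track_points[i][:2], track_points[i + 1][:2])
        for i in range(len(track_points) - 1)
    )
    # Sliding duration: end time minus start time of this swipe.
    duration = track_points[-1][2] - track_points[0][2]
    # Sliding speed: ratio of sliding length to sliding duration.
    speed = length / duration if duration > 0 else 0.0
    # Interval: time from the end of this swipe to the start of the next one.
    interval = (next_swipe_start_time - track_points[-1][2]
                if next_swipe_start_time is not None else None)
    return {"length": length, "duration": duration,
            "speed": speed, "interval": interval}
```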
In another optional embodiment, after acquiring the screen sliding signal of the historical screen sliding user in the screen sliding operation within the preset time or in the specified scene, the method further includes:
obtaining the average sliding length, the maximum sliding length and the minimum sliding length of the screen sliding track within the preset time or in the specified scene according to the sliding length of the screen sliding track; obtaining the average sliding speed, the maximum sliding speed and the minimum sliding speed of the screen sliding operation within the preset time or in the specified scene according to the sliding speed of the screen sliding operation; obtaining the average interval time, the maximum interval time and the minimum interval time of the screen sliding operations within the preset time or in the specified scene according to the interval time between two adjacent screen sliding operations; obtaining the average pressure value, the maximum pressure value and the minimum pressure value according to the pressure values of the sliding track points; and obtaining the average angle value, the maximum angle value and the minimum angle value according to the angle values of the sliding track points. The advantage of this arrangement is that, by introducing maximum, minimum and average values, the situation in which a small number of erroneous data points affect the overall collected data and make the extracted feature values inaccurate can be avoided.
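A minimal sketch of the average/maximum/minimum aggregation described above, assuming the per-swipe values (lengths, speeds, intervals, pressures or angles) collected in the window are already available as a list; names and values are illustrative only.

```python
from statistics import mean

def aggregate(values):
    """Summarise one per-swipe quantity collected within a preset time window
    or a specified scene with its average, maximum and minimum values."""
    return {"avg": mean(values), "max": max(values), "min": min(values)}

# Example: aggregate([12.5, 80.3, 45.0])
# -> {'avg': 45.93..., 'max': 80.3, 'min': 12.5}
```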
For physiological signals, for example, local maxima of the slope of the rising edge of the signal can be found where the second-order difference crosses zero from positive to negative. The number of such local maxima of the rising-edge slope can reflect, to a certain extent, the correlation with emotion.
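A sketch of the second-order-difference rule mentioned above: local maxima of the rising-edge slope are counted where the second-order difference crosses zero from positive to negative. This is one illustrative interpretation of the rule, not code taken from the patent.

```python
import numpy as np

def count_rising_slope_maxima(signal):
    """Count local maxima of the slope on rising edges of a 1-D physiological signal."""
    slope = np.diff(signal)        # first-order difference: approximate slope
    curvature = np.diff(slope)     # second-order difference
    # Zero crossings of the second-order difference from positive to negative.
    crossings = (curvature[:-1] > 0) & (curvature[1:] <= 0)
    # Keep only points lying on a rising edge (positive slope).
    rising = slope[1:-1] > 0
    return int(np.sum(crossings & rising))
```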
And S120, performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data.
The physiological signals in the historical operation data are weak and have strong background noise, so they need to be denoised to obtain effective signals. Specifically, the physiological signal acquired during the screen sliding operation is first denoised, and feature extraction is then performed on the denoised physiological signal to obtain the feature parameter values of the physiological signal.
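The denoising step is not specified in detail above; as one possible sketch, a band-pass filter could be applied to a raw physiological signal (for example an EEG channel) before feature extraction. The pass band, filter order and sampling rate below are illustrative assumptions, not values from the patent.

```python
from scipy.signal import butter, filtfilt

def denoise(raw_signal, fs, low_hz=0.5, high_hz=40.0, order=4):
    """Band-pass filter a raw physiological signal before feature extraction.

    fs: sampling rate in Hz. The pass band is an illustrative choice.
    """
    nyquist = fs / 2.0
    # Design a Butterworth band-pass filter and apply it forward and backward
    # (zero-phase filtering) to avoid shifting the signal in time.
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, raw_signal)
```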
In order to find representative features to serve as effective features for generating the emotion recognition model, in an alternative embodiment, the weight of each feature can be calculated through a feature selection algorithm (the ReliefF algorithm); the weight of each feature is then compared with a preset weight threshold, and features whose weights are smaller than the threshold are removed. This has the advantage of reducing the influence of low-weight features on the efficiency of model generation.
Specifically, the weight threshold may be preset, or may be set according to the calculated weight of each feature and the desired number of retained features. For example, if there are 12 features in total and 8 features need to be selected, the weights of the features are arranged from largest to smallest, the 8th value is taken as the weight threshold, and the features whose weights are smaller than this threshold are removed. The specific setting of the weight threshold can be adjusted according to the actual situation, and the present application is not limited in this respect.
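A sketch of the weight-threshold selection described above. The ReliefF weights are assumed to have been computed elsewhere (for instance with a ReliefF implementation such as the one in the skrebate package); the function below only applies the thresholding rule from the 8-of-12 example.

```python
def select_features(feature_names, weights, num_to_keep=None, threshold=None):
    """Keep features whose weight is at least the threshold.

    If num_to_keep is given instead of a preset threshold, the threshold is
    taken as the num_to_keep-th largest weight, as in the example above.
    """
    if threshold is None:
        threshold = sorted(weights, reverse=True)[num_to_keep - 1]
    return [name for name, w in zip(feature_names, weights) if w >= threshold]

# Example: select_features(["len", "speed", "angle"], [0.4, 0.1, 0.3], num_to_keep=2)
# -> ["len", "angle"]
```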
S130, taking the characteristic parameter values of the historical operation data as the input of the standard recognition model, taking the historical emotion categories of the historical screen sliding users as the output of the standard recognition model, and training the standard recognition model to obtain the emotion recognition model.
The standard recognition model can be understood as an original machine learning model. Taking the characteristic parameter values of the historical operation data as the input of the standard recognition model, taking the historical emotion categories of the historical screen sliding users as the output of the standard recognition model, and training the standard recognition model to obtain the emotion recognition model includes: training the original machine learning model through a neural network algorithm or a support vector machine algorithm, with the characteristic parameter values of the historical operation data as the input of the original machine learning model and the historical emotion categories of the historical screen sliding users as the output of the original machine learning model, so as to obtain the emotion recognition model.
Emotion classification can also be established on feature subsets generated by a genetic algorithm using linear prediction coefficients and cepstral coefficients. Alternatively, emotions are classified by combining the ReliefF algorithm with an Artificial Neural Network (ANN) algorithm or a Support Vector Machine (SVM) algorithm, thereby establishing the emotion recognition model.
Owing to the advantages of the ReliefF algorithm in feature selection and of the ANN algorithm in handling irregular problems, the two are combined to classify the emotional touch data of the intelligent mobile terminal. First, the weight of each feature is calculated through the ReliefF algorithm, and features with smaller weights are removed. Then, the ANN algorithm is used to obtain the classification recognition rate. Finally, the parameters are adjusted until the optimal recognition rate is obtained, thereby finally determining the emotion recognition model. Further, quantitative analysis can be performed on the emotion of the screen sliding user over 7 days or 30 days to determine the emotional preference of the screen sliding user.
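A minimal training sketch combining the selected feature values with the historical emotion categories, using scikit-learn's MLP classifier as the ANN and a grid search for the parameter adjustment mentioned above. The library choice, parameter grid and split ratio are assumptions, not details from the patent.

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_emotion_model(X, y):
    """X: feature parameter values of historical operation data (one row per sample).
    y: historical emotion categories of the historical screen sliding users."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    pipeline = make_pipeline(StandardScaler(),
                             MLPClassifier(max_iter=2000, random_state=0))
    # Adjust parameters until the best recognition rate is obtained.
    search = GridSearchCV(
        pipeline,
        param_grid={"mlpclassifier__hidden_layer_sizes": [(32,), (64,), (64, 32)],
                    "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2]},
        cv=5)
    search.fit(X_train, y_train)
    print("held-out recognition rate:", search.score(X_test, y_test))
    return search.best_estimator_
```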
According to the technical scheme of the embodiment, historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user are obtained; extracting the characteristics of the historical operation data to obtain characteristic parameter values of the historical operation data; the characteristic parameter values of historical operation data are used as the input of the standard recognition model, the historical emotion categories of the historical slide screen users are used as the output of the standard recognition model, the standard recognition model is trained to obtain the emotion recognition model, the problem that the existing emotion recognition model is limited in recognition accuracy is solved, and a foundation is laid for accurately recognizing the emotion of the user subsequently.
In order to accurately collect historical screen sliding data of historical screen sliding users and the corresponding emotion categories, in an optional implementation, an emotion induction experiment can be designed to capture emotions commonly produced on the intelligent mobile terminal, with a scientific data collection process. For example, a game may be designed for data collection. In addition to collecting operation data in the screen sliding operation, the social and demographic coverage of the experimenters needs to be ensured, covering the five user identity labels of age, gender, occupation, academic history and income. Combined with the user identity labels, the emotion data of screen sliding users can be analyzed quantitatively. For example, data collection may be performed by randomly selecting 50 users, with equal numbers of males and females.
In order to obtain ideal data, in an alternative embodiment, sounds from the international emotion sound library can be used as background music for a certain game scene (or other application scene) when the game is designed, so that a stronger emotional state is induced in the experimenter during the game. After each round of the experiment, the experimenter is asked to select the emotional state produced by the game scene from calmness, irritability, happiness and anger, so as to label the data with emotions, that is, to obtain the operation data of each screen sliding user in the screen sliding operation and the corresponding emotion category.
In another alternative embodiment, experimenters can watch different emotion-inducing videos, so that different emotions are produced under the stimulation of the videos, and the operation data and corresponding emotion categories of each screen sliding user in the screen sliding operation are then obtained under this emotion-induction condition.
In order to keep the classification from being bound to the approach of any particular algorithm, in an alternative embodiment the user's labels may be treated as multi-label data. The main idea of the transformation method is as follows: the multi-label problem is converted into one or more single-label data sets for classification, so that conventional single-label classification algorithms can still be used on the data samples. Many mature classification algorithms exist for conventional single-label data (such as neural networks, support vector machines, K-means and Bayesian classifiers), and using the problem-transformation method frees the classification from the constraints of any specific algorithm.
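A sketch of the problem-transformation (binary relevance) idea described above, converting one multi-label dataset into one single-label binary dataset per label; the data layout is an assumption for illustration, not part of the patent.

```python
def binary_relevance(samples, all_labels):
    """samples: list of (features, set_of_labels) pairs.

    Returns one binary single-label dataset per label, so that conventional
    single-label classifiers (SVM, neural network, Bayes, ...) can be applied.
    """
    return {
        label: [(features, int(label in labels)) for features, labels in samples]
        for label in all_labels
    }

# Example: binary_relevance([([1.2, 0.3], {"happy"})], ["happy", "angry"])
# -> {"happy": [([1.2, 0.3], 1)], "angry": [([1.2, 0.3], 0)]}
```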
Example two
Fig. 2 is a flowchart of an emotion recognition method according to a second embodiment of the present invention. The method is applicable to recognizing the emotion of a screen sliding user and may be executed by an emotion recognition device, which may be implemented in software and/or hardware and configured in an electronic device such as a mobile phone or a tablet computer.
Specifically, as shown in fig. 2, the method includes:
s210, current operation data of a current screen sliding user in screen sliding operation are obtained.
Wherein the current screen sliding user is a user who generates screen sliding behavior on a display screen of the electronic equipment at the current time. The display screen of the electronic device may be a liquid crystal display screen or an electronic ink display screen, the input device may be a touch layer or a touch pad covered on the display screen, and the electronic device may be a mobile phone or a tablet computer. The current operation data refers to data generated when a current screen sliding user performs screen sliding operation.
Emotion is a person's reaction to stimulation by external things and is a broad and complex physiological and psychological state. A person's emotional states are very diverse and were roughly classified by the psychologist Ekman as happiness, anger, fear, disgust and sadness. In fact, there are many other emotional states, such as shame, embarrassment, disappointment, anxiety and the like.
The current emotion category is the emotion category corresponding to the current screen sliding user, obtained according to a preset emotion classification standard. Specifically, the obtained emotion categories may differ under different classification standards. For example, traditional Chinese culture holds that people have seven emotions, namely joy, anger, worry, thought, sadness, fear and fright; in Buddhist terminology, the emotions are divided into happiness, anger, worry, fear, love, loathing and desire. The present application does not limit the specific emotion classification standard, which can be adjusted according to the actual situation.
Acquiring current operation data of a current screen sliding user in screen sliding operation, wherein the current operation data comprises the following steps: acquiring identity information of a current screen sliding user; and acquiring a screen sliding signal and a physiological signal of the current screen sliding user in the screen sliding operation. Wherein the identity information of the current sliding screen user comprises at least one of the following items: the age, gender, occupation, academic history, and income of the current screen-sliding user.
Optionally, acquiring a screen sliding signal and a physiological signal of a current screen sliding user in a screen sliding operation includes: acquiring the sliding length, the sliding speed, the sliding angle, the sliding pressure and the interval duration of a current screen sliding user in screen sliding operation as screen sliding signals; acquiring electroencephalogram signals, heart rate, blood pressure and reaction data of pupils in eye movement tracks of a current screen sliding user in screen sliding operation as physiological signals.
For example, the physiological signal of the current screen sliding user in the screen sliding operation can be acquired through the wearable device. Specifically, physiological signals such as electroencephalogram signals, heart rate, blood pressure and the like can be acquired by using a physiological signal data acquisition electrode, a physiological signal amplification device and an analog-to-digital conversion device.
In this embodiment, the screen sliding signal of the current screen sliding user during the screen sliding operation may also be obtained as follows:
Firstly, the coordinate values, pressure values, angle values and times of the screen sliding track points of the current screen sliding user in the screen sliding operation are obtained. Specifically, a coordinate system can be established on the screen and the coordinate value of each track point recorded. For each screen sliding operation, the coordinate values, angle values, pressure values and times of the start point, the intermediate points and the end point of the operation are recorded. Specifically, the sliding length refers to the length of the track formed from the start point to the end point in one sliding operation, that is, the sum of the distances between the track points generated from the moment a finger or other touch object touches the screen until it leaves the screen. The sliding speed refers to the ratio of the sliding length of one sliding operation to its sliding duration. The duration of one sliding operation refers to the length of time from the moment the touch object touches the screen to the moment it leaves the screen. The angle value refers to the angle between the touch object and the screen when the touch object slides over each track point in the sliding operation. The pressure value refers to the pressure applied at each track point in the sliding operation. The interval time refers to the time between two adjacent sliding operations.
Secondly, the sliding length of the screen sliding track is obtained from the coordinate values of the screen sliding track points of the current screen sliding user in the screen sliding operation. Because the coordinate value of each track point is recorded in each screen sliding operation, the distances between adjacent track points can be calculated from the recorded coordinate values and corresponding times and summed to obtain the sliding length. To avoid the influence of discontinuous screen sliding on data acquisition, a continuous-sliding threshold time can be set, and the screen sliding track within this preset threshold time is treated as one screen sliding track; the distances between all adjacent pairs of track points from the start point to the end point of that operation are added to obtain a total distance, which is taken as the sliding length of that screen sliding operation. It is understood that the continuous-sliding threshold time is set to a short value, for example 1 s, although the present application is not limited thereto and it can be set according to the actual situation.
The duration of the screen sliding operation is obtained from the times of the screen sliding track points of the current screen sliding user in the screen sliding operation. Specifically, the difference between the end time and the start time of each screen sliding operation can be calculated to obtain the sliding duration of that operation. Thirdly, the sliding speed of the screen sliding operation is obtained from the sliding length and the duration of the screen sliding operation of the current screen sliding user; and finally, the interval time between two adjacent screen sliding operations is obtained from the times of the screen sliding track points of the current screen sliding user in the screen sliding operation.
In another optional implementation manner, after the screen sliding signal of the current screen sliding user in the screen sliding operation is acquired within the preset time or in the specified scene, the method further includes introducing maximum, minimum and average values to avoid the situation in which a small number of erroneous data points affect the overall collected data and make the extracted feature values inaccurate.
S220, inputting the current operation data into the emotion recognition model generated by the method for generating an emotion recognition model according to any embodiment of the present invention.
After current operation data of a current screen sliding user in screen sliding operation are obtained, feature extraction is carried out on the current operation data, and feature parameter values of the current operation data are obtained. Next, the current operation data is input to the emotion recognition model generated by the emotion recognition model generation method according to any embodiment of the present invention, and the emotion type of the current screen sliding user can be obtained.
And S230, obtaining the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
After the emotion recognition model classifies and identifies the current operation data, it outputs the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
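An inference sketch corresponding to S210 to S230, assuming the model was trained as in the sketch of Example one and that the same feature extraction is applied to the current operation data; all names are illustrative assumptions.

```python
def recognize_emotion(model, current_feature_values, feature_order):
    """Predict the emotion category of the current screen sliding user.

    current_feature_values: dict of feature parameter values extracted from the
    current screen sliding operation (same features used during training).
    feature_order: feature names in the order used during training.
    """
    row = [[current_feature_values[name] for name in feature_order]]
    return model.predict(row)[0]

# Usage sketch (extract_features is a hypothetical helper):
# emotion = recognize_emotion(model, extract_features(current_swipe), feature_order)
```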
According to the technical scheme of the embodiment, the current operation data of the current screen sliding user in the screen sliding operation is acquired, the current operation data is input into the emotion recognition model generated by the emotion recognition model generation method in any embodiment of the invention, the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation is acquired, the problem that the traditional emotion recognition method is limited in recognition accuracy is solved, the effect of improving emotion recognition accuracy is achieved, personalized service is provided for the user according to the recognized emotion of the user, user experience is improved, and a new thought is provided for emotion recognition.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an apparatus for generating an emotion recognition model according to a third embodiment of the present invention. The apparatus is suitable for executing the method for generating an emotion recognition model provided by the embodiments of the present invention and can be configured in an electronic device to generate an emotion recognition model. As shown in fig. 3, the apparatus includes a historical data acquisition module 310, a feature extraction module 320 and a model training module 330.
a historical data acquisition module 310, configured to acquire historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user;
the feature extraction module 320 is configured to perform feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data;
and the model training module 330 is configured to train the standard recognition model to obtain the emotion recognition model by taking the characteristic parameter value of the historical operation data as the input of the standard recognition model and taking the historical emotion category of the historical slide screen user as the output of the standard recognition model.
According to the technical scheme of the embodiment, historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user are obtained; extracting the characteristics of the historical operation data to obtain characteristic parameter values of the historical operation data; the characteristic parameter values of historical operation data are used as the input of a standard recognition model, the historical emotion categories of historical screen sliding users are used as the output of the standard recognition model, the standard recognition model is trained to obtain the emotion recognition model, the problem that the existing emotion recognition model is limited in recognition accuracy is solved, a foundation is laid for accurately recognizing the emotion of the user subsequently, and a new thought is provided for emotion recognition.
Preferably, the historical data acquisition module 310 specifically includes an identity information acquisition unit, a screen sliding signal acquisition unit and a physiological signal acquisition unit. The identity information acquisition unit is used for acquiring identity information of a historical screen sliding user; the screen sliding signal acquisition unit is used for acquiring screen sliding signals of historical screen sliding users in screen sliding operation; and the physiological signal acquisition unit is used for acquiring physiological signals of historical screen sliding users in screen sliding operation.
Further, the identity information of the history slide screen user comprises at least one of the following items: age, gender, occupation, academic history, and income of the historical screen-sliding user.
Further, the screen sliding signal acquiring unit is specifically configured to acquire a sliding length, a sliding speed, a sliding angle, a sliding pressure and an interval duration of a historical screen sliding user in a screen sliding operation, and use the sliding length, the sliding speed, the sliding angle, the sliding pressure and the interval duration as screen sliding signals; the physiological signal acquisition unit is specifically used for acquiring electroencephalogram signals, heart rates, blood pressures and reaction data of pupils in eye movement tracks of historical screen sliding users in screen sliding operation as physiological signals.
Further, the screen sliding signal acquiring unit is specifically configured to: obtaining coordinate values, pressure values, angle values and time of track points of a sliding screen during sliding operation of a historical sliding screen user; obtaining the sliding length of the sliding screen track according to the coordinate value of the sliding screen track point of the historical sliding screen user in the sliding screen operation; obtaining the duration time of the screen sliding operation according to the time of the screen sliding track points of the historical screen sliding user in the screen sliding operation; obtaining the sliding speed of the screen sliding operation according to the sliding length and the duration of the screen sliding operation of the historical screen sliding user; and obtaining the interval time of two adjacent screen sliding operations according to the time of screen sliding track points of the historical screen sliding user in the screen sliding operation.
Further, the sliding screen signal obtaining unit is further specifically configured to: obtaining coordinate values, pressure values, angle values and time of track points of a sliding screen of a historical sliding screen user in a preset time or in a specified scene during sliding screen operation; obtaining the average sliding length, the maximum sliding length and the minimum sliding length of the sliding screen track in a preset time or in a specified scene according to the sliding length of the sliding screen track; obtaining an average sliding speed, a maximum sliding speed and a minimum sliding speed of the screen sliding operation within a preset time or in a specified scene according to the sliding speed of the screen sliding operation; obtaining the average interval time, the maximum interval time and the minimum interval time of screen sliding operation in a preset time or in a specified scene according to the interval time of two adjacent screen sliding operations; obtaining an average pressure value, a maximum pressure value and a minimum pressure value according to the pressure value of the sliding track point; and obtaining an average angle value, a maximum angle value and a minimum angle value according to the angle values of the sliding track points.
Further, in the model training module 330, the standard recognition model is an original machine learning model. The model training module 330 is specifically configured to train the original machine learning model to obtain the emotion recognition model by using a neural network algorithm or a support vector machine algorithm, using a characteristic parameter value of the historical operation data as an input of the original machine learning model, and using a historical emotion category of the historical slide screen user as an output of the original machine learning model.
Further, the feature extraction module 320 includes: the device comprises a signal denoising unit and a feature extraction unit. The signal denoising unit is used for denoising the acquired physiological signal in the screen sliding operation; the feature extraction unit is used for extracting features of the denoised physiological signal to obtain a feature parameter value of the physiological signal.
Further, the feature extraction module 320 further includes a weight calculation unit and a feature selection unit. The weight calculation unit is used for calculating the weight of each feature through a feature selection algorithm; and the characteristic selection unit is used for comparing the weight of each characteristic with a preset weight threshold value and removing the characteristic of which the weight is smaller than the weight threshold value.
The emotion recognition model generation device provided by the embodiment of the invention can execute the emotion recognition model generation method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an emotion recognition apparatus according to a fourth embodiment of the present invention. The apparatus is suitable for executing the emotion recognition method provided by the embodiments of the present invention and can be configured in an electronic device to recognize emotion. As shown in fig. 4, the apparatus includes a current data acquisition module 410, an emotion recognition module 420 and an emotion acquisition module 430.
a current data obtaining module 410, configured to obtain current operation data of a current screen sliding user in a screen sliding operation;
an emotion recognition module 420 for inputting current operation data to an emotion recognition model generated by the method for generating an emotion recognition model according to any embodiment of the present invention;
and the emotion acquiring module 430 is configured to acquire an emotion of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
According to the technical scheme of the embodiment, the current operation data of the current screen sliding user in the screen sliding operation is acquired, the current operation data is input into the emotion recognition model generated by the emotion recognition model generation method in any embodiment of the invention, the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation is acquired, the problem that the traditional emotion recognition method is limited in recognition accuracy is solved, the effect of improving emotion recognition accuracy is achieved, personalized service is provided for the user according to the recognized emotion of the user, user experience is improved, and a new thought is provided for emotion recognition.
The emotion recognition device provided by the embodiment of the invention can execute the emotion recognition method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 12 suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 5 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes programs stored in the system memory 28 and thereby performs various functional applications and data processing, such as implementing the emotion recognition model generation method and the emotion recognition method provided by the embodiments of the present invention.
EXAMPLE six
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for generating an emotion recognition model and an emotion recognition method as provided in any of the embodiments of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire historical operation data of a historical screen sliding user in screen sliding operation and the historical emotion category of the historical screen sliding user; perform feature extraction on the historical operation data to obtain characteristic parameter values of the historical operation data; and train the standard recognition model, with the characteristic parameter values of the historical operation data as the input of the standard recognition model and the historical emotion category of the historical screen sliding user as the output, to obtain the emotion recognition model.
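As a minimal illustrative sketch of the training flow just described (not the claimed implementation; the function names, the exact feature set, and the choice of library are assumptions made here for illustration), screen sliding features could be derived from raw track points and used to train a support vector machine as the standard recognition model roughly as follows:

# Illustrative sketch only: derives simple screen sliding features
# (sliding length, duration, sliding speed, interval duration, pressure
# and angle statistics) from historical track points and trains an SVM
# as the "standard recognition model". All names are hypothetical.
import numpy as np
from sklearn.svm import SVC

def extract_features(track_points, prev_end_time=None):
    """track_points: list of (x, y, pressure, angle, timestamp) tuples
    for one screen sliding operation."""
    pts = np.asarray(track_points, dtype=float)
    xy, pressure, angle, t = pts[:, :2], pts[:, 2], pts[:, 3], pts[:, 4]
    # Sliding length: sum of distances between consecutive track points.
    length = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    # Duration of the operation and resulting sliding speed.
    duration = t[-1] - t[0]
    speed = length / duration if duration > 0 else 0.0
    # Interval duration relative to the previous operation, if known.
    interval = t[0] - prev_end_time if prev_end_time is not None else 0.0
    return [length, duration, speed, interval,
            pressure.mean(), pressure.max(), pressure.min(),
            angle.mean(), angle.max(), angle.min()]

def train_emotion_model(historical_operations, historical_emotions):
    """historical_operations: one track-point list per screen sliding
    operation; historical_emotions: the emotion category label for each."""
    X = np.array([extract_features(op) for op in historical_operations])
    y = np.array(historical_emotions)
    model = SVC()          # standard recognition model (SVM variant)
    model.fit(X, y)        # features as input, emotion categories as output
    return model

In a fuller implementation, the denoised physiological signals (electroencephalogram, heart rate, blood pressure and pupil response data) would contribute additional characteristic parameter values to the same feature vector, and a neural network could be substituted for the support vector machine, as contemplated by the embodiments above.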
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire current operation data of a current screen sliding user in screen sliding operation; input the current operation data into the emotion recognition model generated by the emotion recognition model generation method according to any embodiment of the present invention; and acquire the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
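A corresponding sketch of the recognition flow, reusing the hypothetical helpers from the training sketch above, might look like this:

# Illustrative sketch only: applies the trained model to the current
# screen sliding user's operation data to obtain an emotion category.
def recognize_emotion(model, current_track_points, prev_end_time=None):
    features = extract_features(current_track_points, prev_end_time)
    return model.predict([features])[0]  # predicted emotion category

# Hypothetical usage:
# emotion_model = train_emotion_model(historical_ops, historical_labels)
# category = recognize_emotion(emotion_model, current_op_points)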
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (15)

1. A method of generating an emotion recognition model, the method comprising:
acquiring historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user;
performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data;
and taking the characteristic parameter value of the historical operation data as the input of a standard recognition model, taking the historical emotion category of the historical screen sliding user as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model.
2. The method of claim 1, wherein obtaining historical operation data of a historical screen sliding user in screen sliding operation comprises:
acquiring identity information of the historical screen sliding user;
and acquiring a screen sliding signal and a physiological signal of the historical screen sliding user in screen sliding operation.
3. The method of claim 2, wherein the identity information of the historical screen sliding user comprises at least one of: the age, gender, occupation, educational background, and income of the historical screen sliding user;
and acquiring the screen sliding signal and the physiological signal of the historical screen sliding user in the screen sliding operation comprises:
acquiring the sliding length, the sliding speed, the sliding angle, the sliding pressure and the interval duration of the historical screen sliding user in screen sliding operation as the screen sliding signal;
and acquiring electroencephalogram signals, heart rate, blood pressure and pupil response data in eye movement tracking of the historical screen sliding user in the screen sliding operation as the physiological signal.
4. The method of claim 3, wherein the screen sliding signals of the historical screen sliding user in screen sliding operation are further acquired by:
acquiring coordinate values, pressure values, angle values and time of track points of the sliding screen of the historical sliding screen user in the sliding screen operation;
obtaining the sliding length of the sliding screen track according to the coordinate value of the sliding screen track point of the historical sliding screen user in the sliding screen operation;
obtaining the duration time of the screen sliding operation according to the time of the screen sliding track points of the historical screen sliding user in the screen sliding operation;
obtaining the sliding speed of the screen sliding operation according to the sliding length and the duration of the screen sliding operation of the historical screen sliding user;
and obtaining the interval time of two adjacent screen sliding operations according to the time of screen sliding track points of the historical screen sliding user in the screen sliding operation.
5. The method of claim 4, wherein after acquiring the screen sliding signal of the historical screen sliding user in the screen sliding operation, the method further comprises:
obtaining coordinate values, pressure values, angle values and time of screen sliding track points of the historical screen sliding user in a preset time or in a specified scene during screen sliding operation;
obtaining the average sliding length, the maximum sliding length and the minimum sliding length of the sliding screen track in a preset time or in a specified scene according to the sliding length of the sliding screen track;
obtaining an average sliding speed, a maximum sliding speed and a minimum sliding speed of the screen sliding operation within a preset time or in a specified scene according to the sliding speed of the screen sliding operation;
obtaining the average interval time, the maximum interval time and the minimum interval time of screen sliding operation in a preset time or in a specified scene according to the interval time of two adjacent screen sliding operations;
obtaining an average pressure value, a maximum pressure value and a minimum pressure value according to the pressure value of the sliding track point;
and obtaining an average angle value, a maximum angle value and a minimum angle value according to the angle values of the sliding track points.
6. The method of claim 1, wherein the standard recognition model is an original machine learning model;
and taking the characteristic parameter value of the historical operation data as the input of a standard recognition model, taking the historical emotion category of the historical screen sliding user as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model comprises:
taking the characteristic parameter values of the historical operation data as the input of the original machine learning model, taking the historical emotion category of the historical screen sliding user as the output of the original machine learning model, and training the original machine learning model through a neural network algorithm or a support vector machine algorithm to obtain the emotion recognition model.
7. The method of claim 2, wherein performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data comprises:
denoising the acquired physiological signal in the screen sliding operation;
and performing feature extraction on the denoised physiological signal to obtain a feature parameter value of the physiological signal.
8. The method of claim 1, wherein performing feature extraction on the historical operation data to obtain a feature parameter value of the historical operation data comprises:
calculating the weight of each feature through a feature selection algorithm;
and comparing the weight of each feature with a preset weight threshold, and removing the features with the weights smaller than the weight threshold.
9. A method of emotion recognition, comprising:
acquiring current operation data of a current screen sliding user in screen sliding operation;
inputting the current operation data to an emotion recognition model generated by the emotion recognition model generation method according to any one of claims 1 to 8;
and acquiring the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
10. An apparatus for generating a model for emotion recognition, the apparatus comprising:
the historical data acquisition module is used for acquiring historical operation data of a historical screen sliding user in screen sliding operation and historical emotion categories of the historical screen sliding user;
the characteristic extraction module is used for extracting the characteristics of the historical operation data to obtain the characteristic parameter values of the historical operation data;
and the model training module is used for taking the characteristic parameter values of the historical operation data as the input of a standard recognition model, taking the historical emotion categories of the historical screen sliding users as the output of the standard recognition model, and training the standard recognition model to obtain an emotion recognition model.
11. An emotion recognition apparatus, characterized in that the apparatus comprises:
the current data acquisition module is used for acquiring current operation data of a current screen sliding user in screen sliding operation;
an emotion recognition module for inputting the current operation data to an emotion recognition model generated by the method of generating an emotion recognition model according to any one of claims 1 to 8;
and the emotion acquisition module is used for acquiring the emotion category of the current screen sliding user corresponding to the current operation data in the screen sliding operation.
12. An electronic device, characterized in that the device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of generating an emotion recognition model according to any one of claims 1 to 8.
13. An electronic device, characterized in that the device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the emotion recognition method according to claim 9.
14. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the method of generating an emotion recognition model according to any one of claims 1 to 8.
15. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the emotion recognition method as claimed in claim 9.
CN202110335765.0A 2021-03-29 2021-03-29 Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium Pending CN112949575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110335765.0A CN112949575A (en) 2021-03-29 2021-03-29 Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110335765.0A CN112949575A (en) 2021-03-29 2021-03-29 Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium

Publications (1)

Publication Number Publication Date
CN112949575A (en) 2021-06-11

Family

ID=76227240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110335765.0A Pending CN112949575A (en) 2021-03-29 2021-03-29 Emotion recognition model generation method, emotion recognition device, emotion recognition equipment and emotion recognition medium

Country Status (1)

Country Link
CN (1) CN112949575A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105549885A (en) * 2015-12-10 2016-05-04 重庆邮电大学 Method and device for recognizing user emotion during screen sliding operation
CN110313907A (en) * 2018-03-28 2019-10-11 宏碁股份有限公司 Electronic device operating method, electronic device and electronic system
US20200125976A1 (en) * 2018-10-18 2020-04-23 International Business Machines Corporation Machine learning model for predicting an action to be taken by an autistic individual
CN110134316A (en) * 2019-04-17 2019-08-16 华为技术有限公司 Model training method, Emotion identification method and relevant apparatus and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116943226A (en) * 2023-09-20 2023-10-27 小舟科技有限公司 Game difficulty adjusting method, system, equipment and medium based on emotion recognition
CN116943226B (en) * 2023-09-20 2024-01-05 小舟科技有限公司 Game difficulty adjusting method, system, equipment and medium based on emotion recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination