CN109730701B - Emotion data acquisition method and device


Info

Publication number
CN109730701B
CN109730701B (application CN201910005279.5A)
Authority
CN
China
Prior art keywords
data
video
face
physiological
facial
Prior art date
Legal status
Active
Application number
CN201910005279.5A
Other languages
Chinese (zh)
Other versions
CN109730701A (en)
Inventor
邹博超
吕相文
田子
谢海永
Current Assignee
China Academy of Electronic and Information Technology of CETC
Original Assignee
China Academy of Electronic and Information Technology of CETC
Priority date
Filing date
Publication date
Application filed by China Academy of Electronic and Information Technology of CETC
Priority to CN201910005279.5A
Publication of CN109730701A
Application granted
Publication of CN109730701B
Legal status: Active

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an emotion data acquisition method and device. The method comprises: inducing micro-expressions based on video stimulus material while synchronously acquiring physiological data of a subject and recording facial video of the subject's facial expressions, the facial video comprising facial RGB video and depth video, and the physiological data comprising at least electroencephalogram (EEG) data, peripheral physiological-electrical data and eye-movement data; during playback of the facial video and the stimulus video, receiving the apex (peak) frame, onset (start) frame and offset (end) frame of each micro-expression sequence marked by the subject in the facial video, and acquiring the facial video data between the onset frame and the offset frame; and acquiring the physiological data in the time range corresponding to the facial video data, and determining emotion data from the physiological data and the facial video data. The invention constructs complete emotion data that can be used to study human micro-expressions, explores potential correlations between micro-expressions and physiological data, and provides valuable data resources for subsequent scientific research.

Description

Emotion data acquisition method and device
Technical Field
The invention relates to the field of data acquisition, and in particular to a method and a device for acquiring emotion data.
Background
In research on intelligent human-computer interaction, one essential capability is the ability to recognize, analyze, understand and express emotion. The outward expression of human emotion is reflected not only in facial expression and voice but also in brain activity, and it causes detectable physiological changes such as in the electrocardiogram and respiration. Effectively fusing multiple modalities on top of analyzing the visual behavior of individual modalities therefore yields richer emotional information and creates the conditions for higher machine intelligence.
Deep learning is currently advancing rapidly in artificial intelligence, and deep learning methods generally require data support. However, because spontaneous emotional expressions are rare and last a very short time, manually labeling emotion samples is very time-consuming and error-prone. Owing to these difficulties, most current research on human emotion recognition is based on "artificial" emotion samples, i.e., a series of emotional states performed by subjects in front of a camera. There is, however, growing evidence that deliberately "performed" behavior differs from spontaneous behavior occurring in natural settings. Because micro-expressions are hard to induce and their collection and annotation are time- and labor-intensive, micro-expression sample sizes are very small; the micro-expression samples published to date are few, a typical small-sample problem.
Existing micro-expression data sets contain no corresponding physiological data and therefore cannot be used to explore potential correlations between micro-expressions and physiological data.
Disclosure of Invention
The invention provides a method and a device for acquiring emotion data, to solve the following problem in the prior art: existing micro-expression data sets contain no corresponding physiological data and cannot be used to explore potential correlations between micro-expressions and physiological data.
To solve the above technical problem, in one aspect the present invention provides a method for acquiring emotion data, comprising: inducing micro-expressions based on video stimulus material while synchronously acquiring physiological data of a subject and recording facial video of the subject's facial expressions, the facial video comprising facial RGB video and depth video, and the physiological data comprising at least EEG data, peripheral physiological-electrical data and eye-movement data; during playback of the facial video and the video stimulus material, receiving the apex frame, onset frame and offset frame of each micro-expression sequence marked in the facial video by the subject, and acquiring the facial video data between the onset frame and the offset frame; and acquiring the physiological data in the time range corresponding to the facial video data, and determining emotion data from the physiological data and the facial video data.
Optionally, after determining the emotion data from the physiological data and the facial video data, the method further includes: performing predetermined processing on the emotion data to obtain baseline data for an emotion recognition algorithm.
Optionally, performing the predetermined processing on the emotion data to obtain the baseline data for the emotion recognition algorithm includes: removing interference artifacts from the EEG data by independent component analysis (ICA) to obtain baseline EEG data; extracting statistical features from the peripheral physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data; extracting features from the facial video data with a pre-trained neural network model and classifying the facial video data with a preset machine learning classifier to obtain baseline facial video data; and generating the baseline data for the emotion recognition algorithm from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
Optionally, before inducing the micro-expressions based on the video stimulus material and synchronously acquiring the physiological data of the subject and recording the facial video, the method further includes: before the video stimulus material is played, acquiring physiological data of the subject and recording facial video of the subject's expressions.
Optionally, before acquiring the facial video data between the onset frame and the offset frame, the method further includes: labeling facial action units on the face in the facial video.
In another aspect, the invention further provides a device for acquiring emotion data, comprising: an acquisition module, configured to induce micro-expressions based on video stimulus material while synchronously acquiring physiological data of a subject and recording facial video of the subject's facial expressions, the facial video comprising facial RGB video and depth video, and the physiological data comprising at least EEG data, peripheral physiological-electrical data and eye-movement data; a labeling module, configured to receive, during playback of the facial video and the video stimulus material, the apex, onset and offset frames of each micro-expression sequence marked in the facial video by the subject, and to acquire the facial video data between the onset frame and the offset frame; and a determining module, configured to acquire the physiological data in the time range corresponding to the facial video data and to determine emotion data from the physiological data and the facial video data.
Optionally, the device further includes: a processing module, configured to perform predetermined processing on the emotion data to obtain baseline data for an emotion recognition algorithm.
Optionally, the processing module is specifically configured to: remove interference artifacts from the EEG data by independent component analysis to obtain baseline EEG data; extract statistical features from the physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data; extract features from the facial video data with a pre-trained neural network model and classify the facial video data with a preset machine learning classifier to obtain baseline facial video data; and generate the baseline data for the emotion recognition algorithm from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
Optionally, the acquisition module is further configured to acquire physiological data of the subject and record facial video of the subject's expressions before the video stimulus material is played.
Optionally, the labeling module is further configured to label facial action units on the face in the facial video.
In the embodiments of the invention, physiological data of the subject and facial video recording the subject's expressions are acquired while video stimulus material is shown; the subject participates in annotating the emotion changes during playback; and the physiological data corresponding to the annotated facial video data are retrieved, establishing the relationship between physiological data and facial video data. This constructs complete emotion data that can be used to study human micro-expressions and to explore potential correlations between micro-expressions and physiological data, providing valuable data resources for subsequent scientific research.
Drawings
FIG. 1 is a flow diagram of a method of obtaining emotion data in one embodiment of the invention;
FIG. 2 is a schematic diagram of a multi-modal emotion data synchronous collection process in one embodiment of the present invention;
FIG. 3 is a flow chart of an emotional elicitation collection experiment phase in one embodiment of the present invention;
FIG. 4 is a flow chart of a data annotation experiment phase in one embodiment of the present invention;
FIG. 5 is a schematic diagram of the connection between the experimental mainframe and the data acquisition devices according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an emotion data acquisition device in another embodiment of the present invention.
Detailed Description
To solve the following problem in the prior art, namely that existing micro-expression data sets contain no corresponding physiological data and cannot be used to explore potential correlations between micro-expressions and physiological data, the invention provides a method and a device for acquiring emotion data, described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the invention and do not limit it.
An embodiment of the present invention provides a method for acquiring emotion data; its flow, shown in FIG. 1, comprises steps S101 to S103:
S101, inducing micro-expressions based on video stimulus material while synchronously acquiring physiological data of the subject and recording facial video of the subject's expressions, the facial video comprising: facial RGB video and depth video, and the physiological data comprising at least: EEG data, peripheral physiological-electrical data and eye-movement data.
In a specific implementation, the peripheral physiological electrical data may be electrocardiographic data, skin electrical impedance data, respiration data, skin temperature data, and the like, and is not limited herein.
Before the video stimulus material is played, physiological data of the subject and facial video of the subject's expressions may be acquired; the data and video obtained in this period serve as a reference baseline for the subject in a calm state.
S102, receiving, during playback of the facial video and the video stimulus material, the apex frame, onset frame and offset frame of each micro-expression sequence marked in the facial video by the subject, and acquiring the facial video data between the onset frame and the offset frame.
Existing images and videos obtained through stimulus-based elicitation are all RGB or infrared and contain no depth data. Facial expressions are three-dimensional, so introducing depth data can improve the accuracy of expression recognition.
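For illustration only, synchronized RGB-plus-depth recording with one common depth-camera SDK (Intel RealSense's pyrealsense2 here, purely as an assumed example; the patent does not name a specific camera) might look like this minimal sketch:

```python
import pyrealsense2 as rs

# Configure synchronized color (RGB) and depth streams at 30 fps.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    for _ in range(300):                     # roughly 10 s of facial video
        frames = pipeline.wait_for_frames()  # one color frame + one depth frame
        color = frames.get_color_frame()
        depth = frames.get_depth_frame()
        # A real session would write both frames to disk with timestamps
        # so they can later be aligned with the other modalities.
finally:
    pipeline.stop()
```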
Before the facial video data between the onset frame and the offset frame are acquired, facial action units can be labeled on the face in the facial video. Labeling the Action Units (AUs) of the micro-expression samples in the expression data set makes the expression annotation more objective and accurate. For the emotion labels of micro-expressions, the AUs, the characteristics of the video material and the subject's self-report must all be considered together.
S103, acquiring physiological data in the time range corresponding to the facial video data, and determining emotion data from the physiological data and the facial video data.
After the facial video data of an expression are acquired, the physiological data in the corresponding time range can be retrieved; that is, the correspondence between the physiological data and the facial video data can be constructed, and the emotion data corresponding to the micro-expression determined.
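As a minimal sketch of this correspondence (all names and numbers below are illustrative, and a shared time origin between the facial video and each physiological stream is assumed, which the synchronization described later provides), the annotated frame range can be mapped onto physiological sample indices as follows:

```python
import numpy as np

def slice_physiology(signal: np.ndarray, fs: float,
                     onset_frame: int, offset_frame: int,
                     video_fps: float = 30.0) -> np.ndarray:
    """Return the samples of `signal` (sampled at `fs` Hz) that fall inside
    the time window spanned by the annotated facial-video frames."""
    t_start = onset_frame / video_fps        # seconds from recording start
    t_end = (offset_frame + 1) / video_fps   # include the offset frame itself
    return signal[int(round(t_start * fs)):int(round(t_end * fs))]

# Example: extract the EEG segment (1000 Hz channel) for a micro-expression
# annotated from frame 1512 to frame 1527 of a 30 fps facial video.
eeg_channel = np.random.randn(600_000)       # placeholder EEG samples
segment = slice_physiology(eeg_channel, fs=1000.0,
                           onset_frame=1512, offset_frame=1527)
print(segment.shape)                         # (533,), about 0.53 s of EEG
```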
The embodiment of the invention acquires physiological data of the subject and facial video recording the subject's expressions based on video stimulus material, has the subject participate in annotating the emotion changes during playback, and retrieves the physiological data corresponding to the annotated facial video data, establishing the relationship between physiological data and facial video data. This constructs complete emotion data that can be used to study human micro-expressions and to explore potential correlations between micro-expressions and physiological data, providing valuable data resources for subsequent scientific research.
After the emotion data are determined from the physiological data and the facial video data, the emotion data may further undergo predetermined processing to obtain baseline data for an emotion recognition algorithm. In a specific implementation: interference artifacts are removed from the EEG data by independent component analysis to obtain baseline EEG data; statistical features are extracted from the physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data; features are extracted from the facial video data with a pre-trained neural network model and the facial video data are classified with a preset machine learning classifier to obtain baseline facial video data; and the baseline data for the emotion recognition algorithm are generated from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
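Purely as an example of the ICA step, EEG artifact removal could be sketched with the open-source MNE-Python library; the file name here is hypothetical, and automatic ocular-component detection assumes the montage includes an EOG channel:

```python
import mne

# Load the synchronized EEG recording (hypothetical file name).
raw = mne.io.read_raw_fif("subject01_eeg.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)          # band-pass before ICA

# Decompose into independent components and drop ocular artifacts.
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_indices, _ = ica.find_bads_eog(raw)      # correlate components with EOG
ica.exclude = eog_indices
raw_clean = ica.apply(raw.copy())            # baseline EEG data
```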
The above-described process is described below with reference to the drawings and specific examples.
For the emotion data acquisition method, an experimental paradigm based on micro-expression induction and a synchronization process for the multi-source acquisition modules were designed. When micro-expression data are collected, the subject is required to watch strongly emotional stimuli while keeping the face expressionless. The naturally induced micro-expressions overcome, to some extent, the unnaturalness found in some earlier micro-expression databases. Multimodal signals such as EEG, physiological-electrical, eye-movement and depth data are acquired synchronously through several communication channels. This addresses the small sample size of micro-expression databases and the absence of depth and physiological-signal data, provides a basis for subsequent micro-expression recognition algorithms, and supports research on multimodal emotion perception and contactless physiological-signal measurement.
As shown in FIG. 2, the multimodal emotion data synchronous acquisition process comprises the following three parts.
Part 1: the emotion induction experimental program (the emotion induction experimental program module in FIG. 2).
This part comprises two stages: an emotion induction experimental stage and a data annotation experimental stage.
The emotion induction experimental stage, shown in FIG. 3, proceeds as follows:
(1) After the experiment starts, the subject is told to rest; no stimulus material is shown, and the data acquired in this period serve as reference data.
(2) The experiment instructions are displayed, telling the subject to watch the stimulus material without showing any expression and, if an expression does appear, to recover a neutral face as quickly as possible.
(3) The formal experiment starts: synchronization signals are sent to all acquisition peripherals and the subject begins watching the stimulus material. Emotion is induced spontaneously through video to avoid the influence on facial expression that speaking and similar tasks would cause.
(4) After watching, the subject rates the valence (positive-negative) and arousal (excited-calm) of the stimulus material. Steps (3)-(4) are then repeated, once per stimulus video.
(5) After all stimulus material has been shown, the experiment ends.
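The induction loop can be summarized in the following Python sketch. The patent does not specify the host software, so every function here is a deliberately simplified placeholder that only shows the order of operations:

```python
import time

def send_sync_trigger() -> None:
    """Placeholder: mark the same instant on every acquisition device,
    e.g. by pulsing a parallel-port pin (see Part 2 below)."""
    print("SYNC", time.time())

def play_video(path: str) -> None:
    print(f"playing {path}")                 # placeholder stimulus player
    time.sleep(1.0)

def collect_rating(scales: tuple) -> dict:
    return {s: 5 for s in scales}            # placeholder on-screen scales

def run_induction_session(stimulus_videos: list) -> list:
    time.sleep(2.0)                          # (1) rest period: baseline data
    print("Watch without expression; if one appears, recover quickly.")  # (2)
    trials = []
    for video in stimulus_videos:            # (3)-(4), once per stimulus
        send_sync_trigger()
        play_video(video)
        trials.append(collect_rating(("valence", "arousal")))
    return trials                            # (5) experiment ends

run_induction_session(["stim_01.mp4", "stim_02.mp4"])
```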
The experiment then enters the second stage, the data annotation experimental stage shown in FIG. 4:
After the instruction text, the stimulus video from the first stage and the facial video recorded synchronously with it are played back together. The subject independently annotates the expressions produced during the earlier session, marking the apex frame, onset frame and offset frame of each. Playing the stimulus video back helps the subject accurately recall and annotate the expressions. A professional then labels the facial action units.
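For concreteness, one annotated micro-expression could be stored in a record like the sketch below; the field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class MicroExpressionAnnotation:
    subject_id: str
    stimulus_video: str
    onset_frame: int                 # first frame of the sequence
    apex_frame: int                  # frame of maximum intensity
    offset_frame: int                # last frame of the sequence
    action_units: list = field(default_factory=list)  # expert AU labels
    self_report: str = ""            # the subject's own emotion label

ann = MicroExpressionAnnotation(
    subject_id="S01", stimulus_video="stim_03.mp4",
    onset_frame=1512, apex_frame=1519, offset_frame=1527,
    action_units=["AU4", "AU7"], self_report="disgust")
```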
Part 2: synchronous acquisition of physiological signals and facial video (the physiological and expression synchronous acquisition module in FIG. 2).
When constructing a multimodal data set, synchronization of the multimodal signals is critical; without it, correlation analysis is impossible, and good synchronization greatly reduces the workload of subsequent data preprocessing. The synchronization method of the invention covers the following multimodal signals: EEG data, physiological-electrical data, eye-movement data and facial video data.
FIG. 5 shows the connections between the experimental host and each data acquisition device. The host is connected to the experimenter's display and the subject's display through graphics-card interfaces (DVI, DP, HDMI). The depth camera is connected through USB 3.1 (Type-A, Type-C), and the experimental program calls the depth camera SDK (C++, Matlab, Python) to record synchronously with stimulus-video playback. The multi-channel physiological recorder is connected through a crossover network cable and its synchronization module through a parallel port; its data are marked, and thereby synchronized, by driving the parallel-port data pins high and low. The host may provide a native parallel port; if not, parallel-port communication is obtained through a PCI(E) parallel-port adapter after querying the port's I/O address. The EEG device and the eye tracker are connected over a routed network and addressed by IP and port to achieve synchronization.
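For the parallel-port marking specifically, a minimal sketch with the pyparallel package (assuming a port at the default I/O address; the event codes are arbitrary examples, not values from the patent) could be:

```python
import time
import parallel  # pyparallel; needs a native or PCI(E)-adapter parallel port

port = parallel.Parallel()           # opens the first parallel port

def mark_event(code: int, pulse_s: float = 0.005) -> None:
    """Write an 8-bit event code on data pins D0-D7, hold it briefly so
    the physiological recorder samples it, then return the pins to low."""
    port.setData(code)
    time.sleep(pulse_s)
    port.setData(0)

mark_event(0x01)                     # e.g. stimulus-video onset
mark_event(0x02)                     # e.g. stimulus-video offset
```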
Part 3: the emotion recognition data set baseline (the emotion recognition baseline module in FIG. 2).
A baseline algorithm evaluation is provided for the acquired multimodal data set. Each modality is preprocessed separately: independent component analysis is applied to the EEG data to remove eye-movement and other artifacts; statistical features are extracted from the electrocardiogram, skin impedance, respiration and skin temperature signals; and for the facial video, faces are detected (e.g., with an open-source tool such as OpenFace), features are extracted with a pre-trained neural network model (AlexNet, GoogLeNet, etc.), and classification and evaluation are performed with classical machine learning classifiers (e.g., SVM, random forest, naive Bayes, multilayer perceptron), providing a reference for evaluating subsequent emotion recognition algorithms.
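As a sketch of the facial-video baseline only, assuming AlexNet features are mean-pooled into one fixed-length vector per clip and clip-level emotion labels exist (the placeholder X and y below stand in for real data), the classical-classifier evaluation could look like:

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pre-trained AlexNet as a fixed feature extractor (4096-d fc7 output).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1])  # drop the 1000-way layer
preprocess = T.Compose([
    T.ToTensor(), T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

def clip_feature(frames) -> np.ndarray:
    """Mean-pool AlexNet features over the (already face-cropped) frames
    of one clip, yielding a single 4096-d vector."""
    with torch.no_grad():
        feats = [extractor(preprocess(f).unsqueeze(0)) for f in frames]
    return torch.cat(feats).mean(dim=0).numpy()

# X: one pooled feature vector per clip; y: annotated emotion labels.
X = np.random.randn(60, 4096).astype(np.float32)  # placeholder features
y = np.random.randint(0, 3, size=60)              # placeholder labels
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())    # baseline accuracy
```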
The embodiment of the invention designs an experimental procedure for inducing micro-expressions in a natural state; synchronously acquires multimodal data comprising RGB images, depth images, EEG, peripheral physiological-electrical signals and eye movements; labels expressions and micro-expressions with onset frame, apex frame, offset frame and facial action units; and solves the synchronization problem of multimodal acquisition and annotation. It can thus make up for the missing depth and physiological-signal data in existing data sets and expand the micro-expression sample size.
The embodiment of the invention has the following beneficial effects:
In the synchronized emotion data acquisition process provided by the embodiment of the invention, the induction program is easy to extend, so targeted supplementary experiments can be run when particular expressions are under-represented in the data set. Playing the stimulus video and the facial video back synchronously in the annotation stage helps the subject recall the session, and the provided annotation program improves annotation efficiency and the reliability of the annotation results. The multimodal synchronization integrates several communication channels (network, USB and parallel-port communication) to achieve high-precision synchronization of the multimodal data, improving data set quality, supplying objective data from more modalities for affective computing, and greatly advancing the design, verification and application of emotion recognition algorithms.
Another embodiment of the present invention provides an emotion data acquisition device, whose structure is shown in FIG. 6. It comprises:
an obtaining module 10, configured to induce a micro-expression based on a video stimulation material, and synchronously obtain physiological data of a tested object and a facial video recording a facial expression, where the facial video includes: face RGB video and depth video, the physiological data including at least: electroencephalogram data, peripheral physiological electrical data, eye movement data; the marking module 20 is coupled with the obtaining module 10 and is used for receiving a peak frame, a start frame and an end frame of a micro expression sequence in the marked face video of the tested body in the process of playing back the face video and the stimulation material video and obtaining the face video data between the start frame and the end frame; and the determining module 30 is coupled with the labeling module 20 and is used for acquiring the physiological data in the corresponding time range of the face video data and determining the emotion data according to the physiological data and the face video data.
The device may further comprise a processing module, coupled to the determining module and configured to perform predetermined processing on the emotion data to obtain baseline data for an emotion recognition algorithm. The processing module is specifically configured to: remove interference artifacts from the EEG data by independent component analysis to obtain baseline EEG data; extract statistical features from the physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data; extract features from the facial video data with a pre-trained neural network model and classify the facial video data with a preset machine learning classifier to obtain baseline facial video data; and generate the baseline data for the emotion recognition algorithm from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
The acquisition module is further configured to acquire physiological data of the subject and record facial video of the subject's expressions before the video stimulus material is played.
The labeling module is further configured to label facial action units on the face in the facial video.
The embodiment of the invention acquires physiological data of the subject and facial video recording the subject's expressions based on video stimulus material, has the subject participate in annotating the emotion changes during playback, and retrieves the physiological data corresponding to the annotated facial video data, establishing the relationship between physiological data and facial video data. This constructs complete emotion data that can be used to study human micro-expressions and to explore potential correlations between micro-expressions and physiological data, providing valuable data resources for scientific research.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A method for acquiring emotion data, comprising:
inducing micro-expressions based on video stimulus material while synchronously acquiring physiological data of a subject and recording facial video of the subject's facial expressions, the facial video comprising: facial RGB video and depth video, the physiological data comprising at least: electroencephalogram (EEG) data, peripheral physiological-electrical data, eye-movement data;
receiving, during playback of the facial video and the video stimulus material, the apex frame, onset frame and offset frame of a micro-expression sequence marked in the facial video by the subject, and acquiring the facial video data between the onset frame and the offset frame;
acquiring physiological data in the time range corresponding to the facial video data, and determining emotion data from the physiological data and the facial video data;
after determining the emotion data from the physiological data and the facial video data, the method further comprises: performing predetermined processing on the emotion data to obtain baseline data for an emotion recognition algorithm, comprising: removing interference artifacts from the EEG data by independent component analysis to obtain baseline EEG data;
extracting statistical features from the physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data;
extracting features from the facial video data with a pre-trained neural network model, and classifying the facial video data with a preset machine learning classifier to obtain baseline facial video data;
and generating the baseline data for the emotion recognition algorithm from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
2. The method of claim 1, wherein before inducing the micro-expressions based on the video stimulus material and synchronously acquiring the physiological data of the subject and recording the facial video of the facial expressions, the method further comprises: before the video stimulus material is played, acquiring physiological data of the subject and recording facial video of the subject's expressions.
3. The method of any of claims 1-2, wherein before acquiring the facial video data between the onset frame and the offset frame, the method further comprises: labeling facial action units on the face in the facial video.
4. An apparatus for acquiring emotion data, comprising:
an acquisition module, configured to induce micro-expressions based on video stimulus material while synchronously acquiring physiological data of a subject and recording facial video of the subject's facial expressions, the facial video comprising: facial RGB video and depth video, the physiological data comprising at least: electroencephalogram (EEG) data, peripheral physiological-electrical data, eye-movement data;
a labeling module, configured to receive, during playback of the facial video and the video stimulus material, the apex frame, onset frame and offset frame of a micro-expression sequence marked in the facial video by the subject, and to acquire the facial video data between the onset frame and the offset frame;
a determining module, configured to acquire physiological data in the time range corresponding to the facial video data and to determine emotion data from the physiological data and the facial video data;
a processing module, configured to perform predetermined processing on the emotion data to obtain baseline data for an emotion recognition algorithm;
the processing module being specifically configured to: remove interference artifacts from the EEG data by independent component analysis to obtain baseline EEG data;
extract statistical features from the physiological-electrical data and the eye-movement data respectively to obtain baseline physiological-electrical data and baseline eye-movement data;
extract features from the facial video data with a pre-trained neural network model, and classify the facial video data with a preset machine learning classifier to obtain baseline facial video data;
and generate the baseline data for the emotion recognition algorithm from the baseline EEG data, the baseline physiological-electrical data, the baseline eye-movement data and the baseline facial video data.
5. The apparatus of claim 4,
the acquisition module is further configured to acquire physiological data of the subject and record facial video of the subject's expressions before the video stimulus material is played.
6. The apparatus of any of claims 4 to 5, wherein the labeling module is further configured to label facial action units on the face in the facial video.
CN201910005279.5A, priority date 2019-01-03, filing date 2019-01-03, Emotion data acquisition method and device, Active, granted as CN109730701B

Priority Applications (1)

Application Number: CN201910005279.5A (granted as CN109730701B); Priority Date: 2019-01-03; Filing Date: 2019-01-03; Title: Emotion data acquisition method and device


Publications (2)

Publication Number Publication Date
CN109730701A 2019-05-10
CN109730701B 2022-07-26

Family

ID=66363259

Family Applications (1)

Application Number: CN201910005279.5A (Active, granted as CN109730701B); Priority Date: 2019-01-03; Filing Date: 2019-01-03; Title: Emotion data acquisition method and device

Country Status (1)

Country Link
CN (1) CN109730701B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110037693A (en) * 2019-04-24 2019-07-23 中央民族大学 A kind of mood classification method based on facial expression and EEG
CN110215218A (en) * 2019-06-11 2019-09-10 北京大学深圳医院 A kind of wisdom wearable device and its mood identification method based on big data mood identification model
CN110264097A (en) * 2019-06-26 2019-09-20 哈尔滨理工大学 More scientific workflows based on cloud environment concurrently execute dispatching method
CN110464366A (en) * 2019-07-01 2019-11-19 华南师范大学 A kind of Emotion identification method, system and storage medium
CN110765849B (en) * 2019-09-09 2024-04-09 中国平安财产保险股份有限公司 Identity information acquisition method and device based on micro-expressions and computer equipment
CN111222464B (en) * 2020-01-07 2023-11-07 中国医学科学院生物医学工程研究所 Emotion analysis method and system
WO2021159230A1 (en) * 2020-02-10 2021-08-19 中国科学院深圳先进技术研究院 Brain-computer interface system and method based on sensory transmission
CN111297379A (en) * 2020-02-10 2020-06-19 中国科学院深圳先进技术研究院 Brain-computer combination system and method based on sensory transmission
CN111611860B (en) * 2020-04-22 2022-06-28 西南大学 Micro-expression occurrence detection method and detection system
CN111950381B (en) * 2020-07-20 2022-09-13 武汉美和易思数字科技有限公司 Mental health on-line monitoring system
WO2022067524A1 (en) * 2020-09-29 2022-04-07 香港教育大学 Automatic emotion recognition method and system, computing device and computer readable storage medium
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112597938B (en) * 2020-12-29 2023-06-02 杭州海康威视系统技术有限公司 Expression detection method and device, electronic equipment and storage medium
CN112716494A (en) * 2021-01-18 2021-04-30 上海对外经贸大学 Mental health condition analysis algorithm based on micro-expression and brain wave analysis algorithm
CN113420591B (en) * 2021-05-13 2023-08-22 华东师范大学 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
CN113705621B (en) * 2021-08-05 2023-12-08 沃民高新科技(北京)股份有限公司 Non-contact image evaluation method based on human heart recognition model
CN113827240B (en) * 2021-09-22 2024-03-22 北京百度网讯科技有限公司 Emotion classification method, training device and training equipment for emotion classification model
CN114366102B (en) * 2022-01-05 2024-03-01 广东电网有限责任公司 Multi-mode tension emotion recognition method, device, equipment and storage medium
CN115715680A (en) * 2022-12-01 2023-02-28 杭州市第七人民医院 Anxiety discrimination method and device based on connective tissue potential
CN117131099A (en) * 2022-12-14 2023-11-28 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117954100A (en) * 2024-03-26 2024-04-30 天津市品茗科技有限公司 Cognitive ability testing and training method and system based on user behaviors

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104055529A (en) * 2014-06-19 2014-09-24 西南大学 Method for calculating emotional electrocardiosignal scaling exponent
CN108056774A (en) * 2017-12-29 2018-05-22 中国人民解放军战略支援部队信息工程大学 Experimental paradigm mood analysis implementation method and its device based on visual transmission material
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013537435A (en) * 2010-06-07 2013-10-03 アフェクティヴァ,インコーポレイテッド Psychological state analysis using web services


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Survey of 3D Facial Expression Acquisition and Reconstruction Techniques; Wang Shan et al.; Journal of System Simulation; 2018-07-08 (Issue 07); full text *
An Automatic Micro-Expression Recognition System Based on Spatio-Temporal Features; Wang Ziyan; Informatization Research; 2016-02-20 (Issue 01); pp. 44-47 *

Also Published As

Publication number Publication date
CN109730701A 2019-05-10

Similar Documents

Publication Publication Date Title
CN109730701B (en) Emotion data acquisition method and device
CN110313923B (en) Autism early-stage screening system based on joint attention ability test and audio-video behavior analysis
Bulagang et al. A review of recent approaches for emotion classification using electrocardiography and electrodermography signals
CN110169770B (en) Fine-grained visualization system and method for emotion electroencephalogram
Matlovic et al. Emotions detection using facial expressions recognition and EEG
Zhai et al. Stress detection in computer users based on digital signal processing of noninvasive physiological variables
Danner et al. Quantitative analysis of multimodal speech data
CN114209324B (en) Psychological assessment data acquisition method based on image visual cognition and VR system
CN105105771B (en) The cognition index analysis method of latent energy value test
KR101724939B1 (en) System and method for predicting intention of user using brain wave
CN107992199A (en) A kind of Emotion identification method, system and electronic equipment for electronic equipment
JP7336755B2 (en) DATA GENERATION DEVICE, BIOLOGICAL DATA MEASUREMENT SYSTEM, CLASSIFIER GENERATION DEVICE, DATA GENERATION METHOD, CLASSIFIER GENERATION METHOD, AND PROGRAM
US20180296107A1 (en) Detection of the heartbeat in cranial accelerometer data using independent component analysis
CN114640699B (en) Emotion induction monitoring system based on VR role playing game interaction
CN111222464B (en) Emotion analysis method and system
CN107822628B (en) Epileptic brain focus area automatic positioning device and system
Landowska Emotion monitor-concept, construction and lessons learned
CN113764099A (en) Psychological state analysis method, device, equipment and medium based on artificial intelligence
Jaswal et al. Empirical analysis of multiple modalities for emotion recognition using convolutional neural network
CN116473556A (en) Emotion calculation method and system based on multi-site skin physiological response
CN115813343A (en) Child behavior abnormity evaluation method and system
US20200037911A1 (en) Intention decoding apparatus and intention conveyance assist apparatus
Blache et al. The Badalona Corpus An Audio, Video and Neuro-Physiological Conversational Dataset
Deenadayalan et al. EEG based learner’s learning style and preference prediction for E-learning
Knierim et al. Exploring the recognition of facial activities through around-the-ear electrode arrays (ceegrids)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant