CN109730701A - Emotion data acquisition method and device - Google Patents

Emotion data acquisition method and device

Info

Publication number
CN109730701A
CN109730701A
Authority
CN
China
Prior art keywords
data
video
facial
expression
benchmark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910005279.5A
Other languages
Chinese (zh)
Other versions
CN109730701B (en)
Inventor
邹博超
吕相文
田子
谢海永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronics Technology Group Corp CETC
Original Assignee
China Electronics Technology Group Corp CETC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronics Technology Group Corp CETC filed Critical China Electronics Technology Group Corp CETC
Priority to CN201910005279.5A priority Critical patent/CN109730701B/en
Publication of CN109730701A publication Critical patent/CN109730701A/en
Application granted granted Critical
Publication of CN109730701B publication Critical patent/CN109730701B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method and device for acquiring emotion data. The method comprises: inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data; during playback of the facial video and the stimulus video, receiving the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and obtaining the physiological data in the time range corresponding to the facial video data, and determining the emotion data from the physiological data and the facial video data. The invention constructs complete emotion data that can be used in research on human micro-expressions to explore the latent correlation between micro-expressions and physiological data, and provides a valuable data resource for subsequent scientific research.

Description

Emotion data acquisition method and device
Technical field
The present invention relates to the field of data acquisition, and in particular to a method and device for acquiring emotion data.
Background technique
In research on intelligent human-computer interaction, the ability to recognize, analyze, understand, and express emotion is an indispensable part of intelligence. Besides expressions and voice, the external manifestations of human emotion also appear in brain activity and cause observable physiological changes in the ECG, respiration, and so on. Therefore, on the basis of analyzing the observable behavior of the different modalities, effectively fusing multiple modalities yields richer affective information and creates conditions for realizing more advanced machine intelligence.
At present, deep learning is developing rapidly in applications of artificial intelligence, and deep-learning methods generally require the support of data. However, spontaneous emotions are rare and very short-lived, and manually labeling emotion samples is very time-consuming and error-prone work. Because of these difficulties, most existing research on human emotion recognition is based on "acted" emotion samples, i.e., subjects performing a series of affective states in front of a camera. Yet more and more evidence indicates that deliberately "performed" behavior differs from the spontaneous behavior generated in a natural state. Because the elicitation, acquisition, and annotation of micro-expressions are all very time-consuming and laborious, the number of micro-expression samples is very small; the micro-expression datasets published to date are tiny, a typical small-sample problem.
Existing micro-expression datasets contain no corresponding physiological data and therefore cannot be used to explore the latent correlation between micro-expressions and physiological data.
Summary of the invention
The present invention provides a method and device for acquiring emotion data, to solve the following problem of the prior art: existing micro-expression datasets contain no corresponding physiological data and cannot be used to explore the latent correlation between micro-expressions and physiological data.
To solve the above technical problem, in one aspect the present invention provides a method for acquiring emotion data, comprising: inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data; during playback of the facial video and the video stimulus material, receiving the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and obtaining the physiological data in the time range corresponding to the facial video data, and determining the emotion data from the physiological data and the facial video data.
Optionally, after the emotion data is determined from the physiological data and the facial video data, the method further comprises: performing predetermined processing on the emotion data to obtain reference data for emotion recognition algorithms.
Optionally, performing predetermined processing on the emotion data to obtain the reference data for emotion recognition algorithms comprises: removing interference artifacts from the EEG data by independent component analysis to obtain benchmark EEG data; extracting statistical features of the physiological electrical data and of the eye-movement data, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data; performing feature extraction on the facial video data with a pre-trained neural network model and classifying the facial video data with a predetermined machine-learning classifier to obtain benchmark facial video data; and generating the reference data for emotion recognition algorithms from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
Optionally, before inducing micro-expressions with video stimulus material while synchronously acquiring the subject's physiological data and recording the facial video of the subject's facial expressions, the method further comprises: before the video stimulus material is played, acquiring the subject's physiological data and recording a facial video of the subject's facial expressions.
Optionally, before the facial video data between the onset frame and the offset frame is obtained, the method further comprises: performing facial action unit annotation on the face in the facial video.
In another aspect, the present invention also provides a device for acquiring emotion data, comprising: an acquisition module, for inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data; an annotation module, for receiving, during playback of the facial video and the video stimulus material, the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and a determining module, for obtaining the physiological data in the time range corresponding to the facial video data and determining the emotion data from the physiological data and the facial video data.
Optionally, the device further comprises: a processing module, for performing predetermined processing on the emotion data to obtain reference data for emotion recognition algorithms.
Optionally, the processing module is specifically used to: remove interference artifacts from the EEG data by independent component analysis to obtain benchmark EEG data; extract statistical features of the physiological electrical data and of the eye-movement data, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data; perform feature extraction on the facial video data with a pre-trained neural network model and classify the facial video data with a predetermined machine-learning classifier to obtain benchmark facial video data; and generate the reference data for emotion recognition algorithms from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
Optionally, the acquisition module is also used to acquire the subject's physiological data and record a facial video of the subject's facial expressions before the video stimulus material is played.
Optionally, the annotation module is also used to perform facial action unit annotation on the face in the facial video.
In the embodiments of the present invention, the subject's physiological data is acquired and a facial video of the subject's facial expressions is recorded while video stimulus material is played, the subject participates in annotating the emotional changes during playback, and when the annotated facial video data is obtained, the corresponding physiological data is obtained as well. The relationship between the physiological data and the facial video data is thereby established and complete emotion data is constructed. This emotion data can be used in research on human micro-expressions to explore the latent correlation between micro-expressions and physiological data, and provides a valuable data resource for subsequent scientific research.
Detailed description of the invention
Fig. 1 is a flowchart of the emotion data acquisition method in one embodiment of the present invention;
Fig. 2 is a schematic diagram of the multimodal emotion data synchronous acquisition process in one embodiment of the present invention;
Fig. 3 is a flowchart of the emotion-induction experimental stage in one embodiment of the present invention;
Fig. 4 is a flowchart of the data-annotation experimental stage in one embodiment of the present invention;
Fig. 5 is a schematic diagram of the connections between the experiment host and each data acquisition device in one embodiment of the present invention;
Fig. 6 is a schematic structural diagram of the emotion data acquisition device in another embodiment of the present invention.
Detailed description of the embodiments
In order to solve the following problem of the prior art, namely that existing micro-expression datasets contain no corresponding physiological data and cannot be used to explore the latent correlation between micro-expressions and physiological data, the present invention provides a method and device for acquiring emotion data. The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
One embodiment of the present invention provides a method for acquiring emotion data. The flow of the method is shown in Fig. 1 and comprises steps S101 to S103:
S101: induce micro-expressions with video stimulus material while synchronously acquiring the subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video includes a facial RGB video and a depth video, and the physiological data includes at least EEG data, peripheral physiological electrical data, and eye-movement data.
In a specific implementation, the peripheral physiological electrical data may be ECG, galvanic skin impedance, respiration, skin temperature, or similar data, which is not limited here.
Before the video stimulus material is played, the subject's physiological data may also be acquired and a facial video of the subject's facial expressions recorded; the data and video obtained at this stage can serve as the subject's resting-state baseline. A sketch of using such a baseline follows.
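The patent only states that this segment serves as a resting-state reference; one common way to use such a reference, sketched minimally below under that assumption, is to normalize the later recordings against it. All arrays and values here are placeholders, not data from the patent.

```python
import numpy as np

def baseline_normalize(signal: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Z-score a physiological signal against a resting-state baseline segment.

    signal, baseline: 1-D sample arrays from the same channel.
    """
    mu = baseline.mean()
    sigma = baseline.std()
    if sigma == 0:
        return signal - mu  # degenerate baseline: only remove the offset
    return (signal - mu) / sigma

# Example: normalize a galvanic-skin trace recorded during stimulation
rest = np.random.default_rng(0).normal(5.0, 0.2, 3000)   # resting segment (placeholder)
task = np.random.default_rng(1).normal(5.6, 0.3, 9000)   # stimulation segment (placeholder)
print(baseline_normalize(task, rest).mean())
```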
S102: during playback of the facial video and the video stimulus material, receive the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtain the facial video data between the onset frame and the offset frame.
The images and videos obtained by existing stimulation paradigms are RGB or infrared and contain no depth data. Facial expressions are three-dimensional, so introducing depth data will improve the accuracy of expression recognition.
Before the facial video data between the onset frame and the offset frame is obtained, facial action unit annotation may also be performed on the face in the facial video. Annotating the action units (AUs) of the micro-expression samples in the expression dataset helps to label expressions more objectively and accurately. Labeling the emotional characteristics of a micro-expression requires jointly considering the AUs, the audio-visual material, and the subject's self-report. A sketch of one annotation record follows.
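As an illustration of what one annotated micro-expression sequence might look like in code, a minimal sketch follows; the field names and values are assumptions for this sketch, not a format defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MicroExpressionAnnotation:
    """One subject-marked micro-expression sequence in the facial video."""
    subject_id: str
    stimulus_id: str
    onset_frame: int      # first frame of the sequence ("start frame")
    apex_frame: int       # frame of maximal intensity ("peak frame")
    offset_frame: int     # last frame of the sequence ("end frame")
    action_units: List[str] = field(default_factory=list)  # e.g. ["AU4", "AU7"], added by an expert
    self_report: str = ""  # subject's reported emotion for this sequence

ann = MicroExpressionAnnotation("S01", "clip_03", onset_frame=1520,
                                apex_frame=1534, offset_frame=1561,
                                action_units=["AU4", "AU7"], self_report="disgust")
assert ann.onset_frame <= ann.apex_frame <= ann.offset_frame
```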
S103: obtain the physiological data in the time range corresponding to the facial video data, and determine the emotion data from the physiological data and the facial video data.
Once the facial video data carrying an emotion has been obtained, the physiological data in the corresponding time range can be obtained, that is, a correspondence between the physiological data and the facial video data can be constructed, and the emotion data corresponding to that micro-expression determined. A sketch of this alignment follows.
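A minimal sketch of this time-range alignment, assuming the facial video and the physiological stream already share a synchronized clock; the frame rate, sampling rate, and function names are illustrative, not values from the patent.

```python
import numpy as np

def frames_to_samples(frame: int, fps: float, sample_rate: float) -> int:
    """Convert a video frame index into a sample index of a physiological stream."""
    return int(round(frame / fps * sample_rate))

def extract_aligned_segment(phys: np.ndarray, onset_frame: int, offset_frame: int,
                            fps: float = 30.0, sample_rate: float = 1000.0) -> np.ndarray:
    """Cut out the physiological samples covering [onset_frame, offset_frame]."""
    start = frames_to_samples(onset_frame, fps, sample_rate)
    stop = frames_to_samples(offset_frame, fps, sample_rate)
    return phys[start:stop + 1]

eeg_channel = np.zeros(120_000)  # placeholder: 120 s of one EEG channel at 1 kHz
segment = extract_aligned_segment(eeg_channel, onset_frame=1520, offset_frame=1561)
print(segment.shape)  # samples spanning the marked micro-expression
```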
In the embodiments of the present invention, the subject's physiological data is acquired and a facial video of the subject's facial expressions is recorded while video stimulus material is played, the subject participates in annotating the emotional changes during playback, and when the annotated facial video data is obtained, the corresponding physiological data is obtained as well. The relationship between the physiological data and the facial video data is thereby established and complete emotion data is constructed. This emotion data can be used in research on human micro-expressions to explore the latent correlation between micro-expressions and physiological data, and provides a valuable data resource for subsequent scientific research.
After the emotion data is determined from the physiological data and the facial video data, predetermined processing may also be performed on the emotion data to obtain reference data for emotion recognition algorithms. In a specific implementation, interference artifacts are removed from the EEG data by independent component analysis to obtain benchmark EEG data; statistical features of the physiological electrical data and of the eye-movement data are extracted, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data; feature extraction is performed on the facial video data with a pre-trained neural network model and the facial video data is classified with a predetermined machine-learning classifier to obtain benchmark facial video data; and the reference data for emotion recognition algorithms is generated from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
The above process is described below with reference to the drawings and specific examples.
The emotion data acquisition method of the embodiment of the present invention designs an experimental paradigm for micro-expression induction and a synchronization process for the multi-source acquisition modules. Acquiring micro-expression data requires subjects to watch different strong emotional stimuli while keeping a neutral face. Inducing natural micro-expressions overcomes, to a certain extent, the unnaturalness of earlier micro-expression databases. Multimodal signals such as EEG, physiological electrical signals, eye movements, and depth data are acquired synchronously through multiple communication modes. This addresses the small sample size of micro-expression databases and the absence of depth data and physiological signal data, provides a theoretical foundation for subsequent micro-expression recognition algorithms, and supports research on multimodal emotion perception and non-contact physiological signal measurement.
As shown in Fig. 2, the multimodal emotion data synchronous acquisition process includes the following three parts:
Part one: the emotion-induction experimental program (the emotion-induction experimental program module in Fig. 2).
This part includes two stages: the emotion-induction experimental stage and the data-annotation experimental stage.
The emotion-induction experimental stage, as shown in Fig. 3:
(1) after experiment starts, subject rest is informed, at this time without any stimulus material, acquired data are as reference data; (2) it shows experimental instruction, informs that subject is maintained in amimia situation and watch stimulus material, as espressiove need to restore as early as possible; (3) formal experiment starts, and sends synchronization signal to all acquisition peripheral hardwares, subject starts to watch stimulus material, in order to avoid speaking Etc. reasons facial expression is affected, induce in such a way that video induces to being tested spontaneous mood;(4) right after watching Seen stimulus material carries out validity (actively --- passive), arousal (excited --- quiet) evaluation;Then this process repeats, weight Again number is equal to stimulation number of videos;(5) after all stimulus materials are shown, experiment terminates.
The experiment then enters the second stage, the data-annotation experimental stage, as shown in Fig. 4:
After the experimental instructions, the stimulus videos from stage one are played back simultaneously and in synchrony with the recorded facial video, and the subject marks the apex frame, onset frame, and offset frame of each expression produced during the earlier procedure. Replaying the stimulus video helps the subject accurately recall and annotate the expressions. Afterwards, a professional annotator performs the facial action unit annotation.
Part two: synchronized acquisition of the physiological signals and the facial video (the physiology and expression synchronous acquisition module in Fig. 2).
When building a multimodal dataset, synchronization of the multimodal signals is essential; without it, correlation analysis is impossible, and good synchronization greatly reduces the subsequent data preprocessing workload. The synchronization method in the present invention can synchronize the following multimodal signals: EEG data, physiological electrical data, eye-movement data, and facial video data.
Fig. 5 is a schematic diagram of the connections between the experiment host and each data acquisition device. The experiment host connects two displays, one for the experimenter and one for the subject, through graphics card interfaces (DVI, DP, HDMI). The depth camera is connected via USB 3.1 (Type A, Type C), and the depth camera SDK (C++, Matlab, Python) is called to record in synchrony with the stimulus video playback in the experimental program. The multi-channel physiograph is connected by a crossover network cable, and its synchronization module is connected through the parallel port; the physiograph data are marked by driving the parallel-port pins high and low, which realizes synchronization. The experiment host may have a built-in parallel port; if not, a PCI(E) parallel-port adapter card is used and the port I/O address is queried to realize parallel-port communication. The EEG device and the eye tracker are connected over the network through a router and addressed by IP and port to realize synchronization. A sketch of emitting such markers follows.
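A minimal sketch of emitting one marker over both paths described above. It assumes the pyparallel package for the parallel port and plain UDP sockets for the network-addressed devices; the addresses, ports, and trigger codes are illustrative assumptions, not values from the patent.

```python
import socket
import time

import parallel  # pyparallel; requires a parallel port and its driver

pport = parallel.Parallel()            # opens the default parallel port
EEG_ADDR = ("192.168.1.10", 5000)      # assumed IP/port of the EEG device
EYE_ADDR = ("192.168.1.11", 5001)      # assumed IP/port of the eye tracker
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_marker(code: int, pulse_s: float = 0.005) -> None:
    """Emit one synchronization marker on every channel at (nearly) the same time."""
    payload = code.to_bytes(1, "big")
    udp.sendto(payload, EEG_ADDR)      # network-addressed devices
    udp.sendto(payload, EYE_ADDR)
    pport.setData(code)                # raise parallel-port data pins for the physiograph
    time.sleep(pulse_s)                # hold the pulse briefly
    pport.setData(0)                   # return pins to low

send_marker(11)  # e.g. trial onset
```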
Part three: the emotion recognition dataset benchmark (the emotion recognition benchmark module in Fig. 2).
This part provides a benchmark algorithm evaluation for the collected multimodal dataset. Each collected modality is preprocessed separately: eye-movement and other artifacts are removed from the EEG data by independent component analysis; statistical features are extracted from the ECG, galvanic skin impedance, respiration, and skin temperature signals; faces are detected in the facial video (with open-source tools such as OpenFace), features are extracted with a pre-trained neural network model (AlexNet, GoogleNet, etc.), and classification is assessed with classical machine-learning classifiers (e.g., SVM, random forest, naive Bayes, multi-layer perceptron). This provides a baseline for evaluating subsequent emotion recognition algorithms. A sketch of such a baseline pipeline follows.
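A minimal sketch of such a baseline pipeline, assuming MNE-Python for the ICA step, SciPy for the statistical features, and torchvision plus scikit-learn for the video features and classifier; all data, channel counts, and labels below are placeholders, not the patent's configuration.

```python
import numpy as np
import mne
from scipy import stats
import torch
import torchvision.models as models
from sklearn.svm import SVC

# --- EEG: remove artifacts with ICA to obtain the benchmark EEG data ---
rng = np.random.default_rng(0)
info = mne.create_info(ch_names=[f"EEG{i}" for i in range(16)], sfreq=250.0, ch_types="eeg")
raw = mne.io.RawArray(rng.standard_normal((16, 250 * 60)), info)  # placeholder recording
raw.filter(1.0, 40.0)                       # band-pass before ICA
ica = mne.preprocessing.ICA(n_components=15, random_state=0)
ica.fit(raw)
ica.exclude = [0]                           # components judged to be ocular artifacts
benchmark_eeg = ica.apply(raw.copy())

# --- Peripheral signals and eye movements: statistical features ---
def stat_features(x: np.ndarray) -> np.ndarray:
    return np.array([x.mean(), x.std(), stats.skew(x), stats.kurtosis(x), x.min(), x.max()])

gsr = rng.standard_normal(15000)            # placeholder galvanic-skin trace
benchmark_gsr = stat_features(gsr)

# --- Facial video: pre-trained CNN features + classical classifier ---
alexnet = models.alexnet(weights="DEFAULT")  # torchvision >= 0.13 weights API
alexnet.eval()
frames = torch.randn(8, 3, 224, 224)        # placeholder cropped face frames
with torch.no_grad():
    feats = alexnet.features(frames).flatten(1).numpy()
labels = np.array([0, 1] * 4)               # placeholder emotion labels
clf = SVC(kernel="rbf").fit(feats, labels)
print(clf.score(feats, labels))             # training accuracy of the toy baseline
```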
The embodiment of the present invention designs an experimental flow for inducing micro-expressions under natural conditions, synchronously acquires RGB images, depth images, and multimodal EEG, physiological electrical, and eye-movement data, and annotates the expressions and micro-expressions with onset frames, apex frames, offset frames, and facial action units. It solves the synchronization problem of multimodal acquisition and annotation, can make up for the absence of depth data and physiological signal data in existing datasets, and enlarges the micro-expression sample size.
The embodiments of the present invention bring the following beneficial effects:
In the synchronous emotion dataset acquisition process provided by the embodiments of the present invention, the induction program is easy to extend, and targeted supplementary experiments can be run when a certain class of expression is under-represented in the dataset. In the annotation stage, the stimulus video and the facial video are played simultaneously, which aids the subject's recall, and the annotation program improves annotation efficiency and the reliability of the annotation results. The multimodal information is synchronously fused over multiple communication modes such as network, USB, and parallel-port communication, realizing highly accurate synchronization of the multimodal data, improving the quality of the dataset, and providing richer multimodal objective data for affective computing, which will greatly facilitate work such as the design and verification of emotion recognition algorithms.
Another embodiment of the present invention provides a device for acquiring emotion data, whose structure is schematically shown in Fig. 6, comprising:
an acquisition module 10, for inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data; an annotation module 20, coupled with the acquisition module 10, for receiving, during playback of the facial video and the stimulus video, the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and a determining module 30, coupled with the annotation module 20, for obtaining the physiological data in the time range corresponding to the facial video data and determining the emotion data from the physiological data and the facial video data.
The above device may also include a processing module, coupled with the determining module, for performing predetermined processing on the emotion data to obtain reference data for emotion recognition algorithms. The processing module is specifically used to: remove interference artifacts from the EEG data by independent component analysis to obtain benchmark EEG data; extract statistical features of the physiological electrical data and of the eye-movement data, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data; perform feature extraction on the facial video data with a pre-trained neural network model and classify the facial video data with a predetermined machine-learning classifier to obtain benchmark facial video data; and generate the reference data for emotion recognition algorithms from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
The above acquisition module is also used to acquire the subject's physiological data and record a facial video of the subject's facial expressions before the video stimulus material is played.
The above annotation module is also used to perform facial action unit annotation on the face in the facial video.
In the embodiments of the present invention, the subject's physiological data is acquired and a facial video of the subject's facial expressions is recorded while video stimulus material is played, the subject participates in annotating the emotional changes during playback, and when the annotated facial video data is obtained, the corresponding physiological data is obtained as well. The relationship between the physiological data and the facial video data is thereby established and complete emotion data is constructed. This emotion data can be used in research on human micro-expressions to explore the latent correlation between micro-expressions and physiological data, and provides a valuable data resource for research work.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can devise many further forms without departing from the purpose of the invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

1. A method for acquiring emotion data, characterized by comprising:
inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data;
during playback of the facial video and the video stimulus material, receiving the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and
obtaining the physiological data in the time range corresponding to the facial video data, and determining the emotion data from the physiological data and the facial video data.
2. The method according to claim 1, characterized in that, after the emotion data is determined from the physiological data and the facial video data, the method further comprises:
performing predetermined processing on the emotion data to obtain reference data for emotion recognition algorithms.
3. The method according to claim 2, characterized in that performing predetermined processing on the emotion data to obtain the reference data for emotion recognition algorithms comprises:
removing interference artifacts from the EEG data by independent component analysis to obtain benchmark EEG data;
extracting statistical features of the physiological electrical data and of the eye-movement data, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data;
performing feature extraction on the facial video data with a pre-trained neural network model and classifying the facial video data with a predetermined machine-learning classifier to obtain benchmark facial video data; and
generating the reference data for emotion recognition algorithms from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
4. The method according to claim 1, characterized in that, before inducing micro-expressions with video stimulus material while synchronously acquiring the subject's physiological data and recording the facial video of the subject's facial expressions, the method further comprises:
before the video stimulus material is played, acquiring the subject's physiological data and recording a facial video of the subject's facial expressions.
5. The method according to any one of claims 1 to 4, characterized in that, before the facial video data between the onset frame and the offset frame is obtained, the method further comprises:
performing facial action unit annotation on the face in the facial video.
6. A device for acquiring emotion data, characterized by comprising:
an acquisition module, for inducing micro-expressions with video stimulus material while synchronously acquiring a subject's physiological data and recording a facial video of the subject's facial expressions, wherein the facial video comprises a facial RGB video and a depth video, and the physiological data comprises at least EEG data, peripheral physiological electrical data, and eye-movement data;
an annotation module, for receiving, during playback of the facial video and the video stimulus material, the apex frame, onset frame, and offset frame with which the subject marks each micro-expression sequence in the facial video, and obtaining the facial video data between the onset frame and the offset frame; and
a determining module, for obtaining the physiological data in the time range corresponding to the facial video data and determining the emotion data from the physiological data and the facial video data.
7. The device according to claim 6, characterized by further comprising:
a processing module, for performing predetermined processing on the emotion data to obtain reference data for emotion recognition algorithms.
8. The device according to claim 7, characterized in that the processing module is specifically used to:
remove interference artifacts from the EEG data by independent component analysis to obtain benchmark EEG data;
extract statistical features of the physiological electrical data and of the eye-movement data, respectively, to obtain benchmark physiological electrical data and benchmark eye-movement data;
perform feature extraction on the facial video data with a pre-trained neural network model and classify the facial video data with a predetermined machine-learning classifier to obtain benchmark facial video data; and
generate the reference data for emotion recognition algorithms from the benchmark EEG data, the benchmark physiological electrical data, the benchmark eye-movement data, and the benchmark facial video data.
9. The device according to claim 6, characterized in that
the acquisition module is also used to acquire the subject's physiological data and record a facial video of the subject's facial expressions before the video stimulus material is played.
10. The device according to any one of claims 6 to 9, characterized in that
the annotation module is also used to perform facial action unit annotation on the face in the facial video.
CN201910005279.5A 2019-01-03 2019-01-03 Emotion data acquisition method and device Active CN109730701B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910005279.5A CN109730701B (en) 2019-01-03 2019-01-03 Emotion data acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910005279.5A CN109730701B (en) 2019-01-03 2019-01-03 Emotion data acquisition method and device

Publications (2)

Publication Number Publication Date
CN109730701A true CN109730701A (en) 2019-05-10
CN109730701B CN109730701B (en) 2022-07-26

Family

ID=66363259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910005279.5A Active CN109730701B (en) 2019-01-03 2019-01-03 Emotion data acquisition method and device

Country Status (1)

Country Link
CN (1) CN109730701B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110037693A (en) * 2019-04-24 2019-07-23 中央民族大学 A kind of mood classification method based on facial expression and EEG
CN110215218A (en) * 2019-06-11 2019-09-10 北京大学深圳医院 A kind of wisdom wearable device and its mood identification method based on big data mood identification model
CN110264097A (en) * 2019-06-26 2019-09-20 哈尔滨理工大学 More scientific workflows based on cloud environment concurrently execute dispatching method
CN110464366A (en) * 2019-07-01 2019-11-19 华南师范大学 A kind of Emotion identification method, system and storage medium
CN110765849A (en) * 2019-09-09 2020-02-07 中国平安财产保险股份有限公司 Identity information acquisition method and device based on micro expression and computer equipment
CN111222464A (en) * 2020-01-07 2020-06-02 中国医学科学院生物医学工程研究所 Emotion analysis method and system
CN111297379A (en) * 2020-02-10 2020-06-19 中国科学院深圳先进技术研究院 Brain-computer combination system and method based on sensory transmission
CN111611860A (en) * 2020-04-22 2020-09-01 西南大学 Micro-expression occurrence detection method and detection system
CN111950381A (en) * 2020-07-20 2020-11-17 湖北美和易思教育科技有限公司 Mental health on-line monitoring system
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112597938A (en) * 2020-12-29 2021-04-02 杭州海康威视系统技术有限公司 Expression detection method and device, electronic equipment and storage medium
CN112716494A (en) * 2021-01-18 2021-04-30 上海对外经贸大学 Mental health condition analysis algorithm based on micro-expression and brain wave analysis algorithm
WO2021159230A1 (en) * 2020-02-10 2021-08-19 中国科学院深圳先进技术研究院 Brain-computer interface system and method based on sensory transmission
CN113420591A (en) * 2021-05-13 2021-09-21 华东师范大学 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
CN113705621A (en) * 2021-08-05 2021-11-26 沃民高新科技(北京)股份有限公司 Non-contact image evaluation method based on human heart recognition model
CN113827240A (en) * 2021-09-22 2021-12-24 北京百度网讯科技有限公司 Emotion classification method and emotion classification model training method, device and equipment
WO2022067524A1 (en) * 2020-09-29 2022-04-07 香港教育大学 Automatic emotion recognition method and system, computing device and computer readable storage medium
CN114366102A (en) * 2022-01-05 2022-04-19 广东电网有限责任公司 Multi-mode nervous emotion recognition method, device, equipment and storage medium
CN115715680A (en) * 2022-12-01 2023-02-28 杭州市第七人民医院 Anxiety discrimination method and device based on connective tissue potential
CN117131099A (en) * 2022-12-14 2023-11-28 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117954100A (en) * 2024-03-26 2024-04-30 天津市品茗科技有限公司 Cognitive ability testing and training method and system based on user behaviors

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110301433A1 (en) * 2010-06-07 2011-12-08 Richard Scott Sadowsky Mental state analysis using web services
CN104055529A (en) * 2014-06-19 2014-09-24 西南大学 Method for calculating emotional electrocardiosignal scaling exponent
CN108056774A (en) * 2017-12-29 2018-05-22 中国人民解放军战略支援部队信息工程大学 Experimental paradigm mood analysis implementation method and its device based on visual transmission material
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110301433A1 (en) * 2010-06-07 2011-12-08 Richard Scott Sadowsky Mental state analysis using web services
CN104055529A (en) * 2014-06-19 2014-09-24 西南大学 Method for calculating emotional electrocardiosignal scaling exponent
CN108056774A (en) * 2017-12-29 2018-05-22 中国人民解放军战略支援部队信息工程大学 Experimental paradigm mood analysis implementation method and its device based on visual transmission material
CN108216254A (en) * 2018-01-10 2018-06-29 山东大学 The road anger Emotion identification method merged based on face-image with pulse information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王子彦: "Automatic micro-expression recognition system based on spatio-temporal features", 《信息化研究》 (Informatization Research) *
王珊等: "A survey of 3D facial expression acquisition and reconstruction techniques", 《系统仿真学报》 (Journal of System Simulation) *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110037693A (en) * 2019-04-24 2019-07-23 中央民族大学 A kind of mood classification method based on facial expression and EEG
CN110215218A (en) * 2019-06-11 2019-09-10 北京大学深圳医院 A kind of wisdom wearable device and its mood identification method based on big data mood identification model
CN110264097A (en) * 2019-06-26 2019-09-20 哈尔滨理工大学 More scientific workflows based on cloud environment concurrently execute dispatching method
CN110464366A (en) * 2019-07-01 2019-11-19 华南师范大学 A kind of Emotion identification method, system and storage medium
CN110765849B (en) * 2019-09-09 2024-04-09 中国平安财产保险股份有限公司 Identity information acquisition method and device based on micro-expressions and computer equipment
CN110765849A (en) * 2019-09-09 2020-02-07 中国平安财产保险股份有限公司 Identity information acquisition method and device based on micro expression and computer equipment
CN111222464A (en) * 2020-01-07 2020-06-02 中国医学科学院生物医学工程研究所 Emotion analysis method and system
CN111222464B (en) * 2020-01-07 2023-11-07 中国医学科学院生物医学工程研究所 Emotion analysis method and system
WO2021159230A1 (en) * 2020-02-10 2021-08-19 中国科学院深圳先进技术研究院 Brain-computer interface system and method based on sensory transmission
CN111297379A (en) * 2020-02-10 2020-06-19 中国科学院深圳先进技术研究院 Brain-computer combination system and method based on sensory transmission
CN111611860B (en) * 2020-04-22 2022-06-28 西南大学 Micro-expression occurrence detection method and detection system
CN111611860A (en) * 2020-04-22 2020-09-01 西南大学 Micro-expression occurrence detection method and detection system
CN111950381A (en) * 2020-07-20 2020-11-17 湖北美和易思教育科技有限公司 Mental health on-line monitoring system
CN111950381B (en) * 2020-07-20 2022-09-13 武汉美和易思数字科技有限公司 Mental health on-line monitoring system
WO2022067524A1 (en) * 2020-09-29 2022-04-07 香港教育大学 Automatic emotion recognition method and system, computing device and computer readable storage medium
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112597938A (en) * 2020-12-29 2021-04-02 杭州海康威视系统技术有限公司 Expression detection method and device, electronic equipment and storage medium
CN112716494A (en) * 2021-01-18 2021-04-30 上海对外经贸大学 Mental health condition analysis algorithm based on micro-expression and brain wave analysis algorithm
CN113420591B (en) * 2021-05-13 2023-08-22 华东师范大学 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
CN113420591A (en) * 2021-05-13 2021-09-21 华东师范大学 Emotion-based OCC-PAD-OCEAN federal cognitive modeling method
CN113705621A (en) * 2021-08-05 2021-11-26 沃民高新科技(北京)股份有限公司 Non-contact image evaluation method based on human heart recognition model
CN113705621B (en) * 2021-08-05 2023-12-08 沃民高新科技(北京)股份有限公司 Non-contact image evaluation method based on human heart recognition model
CN113827240A (en) * 2021-09-22 2021-12-24 北京百度网讯科技有限公司 Emotion classification method and emotion classification model training method, device and equipment
CN113827240B (en) * 2021-09-22 2024-03-22 北京百度网讯科技有限公司 Emotion classification method, training device and training equipment for emotion classification model
CN114366102A (en) * 2022-01-05 2022-04-19 广东电网有限责任公司 Multi-mode nervous emotion recognition method, device, equipment and storage medium
CN114366102B (en) * 2022-01-05 2024-03-01 广东电网有限责任公司 Multi-mode tension emotion recognition method, device, equipment and storage medium
CN115715680A (en) * 2022-12-01 2023-02-28 杭州市第七人民医院 Anxiety discrimination method and device based on connective tissue potential
CN117131099A (en) * 2022-12-14 2023-11-28 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117131099B (en) * 2022-12-14 2024-08-02 广州数化智甄科技有限公司 Emotion data analysis method and device in product evaluation and product evaluation method
CN117954100A (en) * 2024-03-26 2024-04-30 天津市品茗科技有限公司 Cognitive ability testing and training method and system based on user behaviors

Also Published As

Publication number Publication date
CN109730701B (en) 2022-07-26

Similar Documents

Publication Publication Date Title
CN109730701A (en) A kind of acquisition methods and device of mood data
CN110353673B (en) Electroencephalogram channel selection method based on standard mutual information
CN103126655B (en) Non-binding goal non-contact pulse wave acquisition system and sampling method
Danner et al. Quantitative analysis of multimodal speech data
CN108056774A (en) Experimental paradigm mood analysis implementation method and its device based on visual transmission material
US10765332B2 (en) Detection of the heartbeat in cranial accelerometer data using independent component analysis
US10085684B2 (en) State identification in data with a temporal dimension
Conneau et al. Assessment of new spectral features for eeg-based emotion recognition
CN105512609A (en) Multi-mode fusion video emotion identification method based on kernel-based over-limit learning machine
CN104367306A (en) Physiological and psychological career evaluation system and implementation method
CN102389306A (en) Automatic identification method of electroencephalogram artifact and automatic identification electroencephalograph using same
CN111222464B (en) Emotion analysis method and system
Chen et al. Design and implementation of human-computer interaction systems based on transfer support vector machine and EEG signal for depression patients’ emotion recognition
CN106096544B (en) Non-contact blink and heart rate joint detection system and method based on second-order blind identification
CN107822628B (en) Epileptic brain focus area automatic positioning device and system
CN113723206A (en) Brain wave identification method based on quantum neural network algorithm
CN104793743A (en) Virtual social contact system and control method thereof
Gajewski et al. Human gaze control in real world search
KR102608633B1 (en) Electronic device and control method thereof
CN115813343A (en) Child behavior abnormity evaluation method and system
CN206472279U (en) Wearable audiovisual ganged test device
JP2020146206A (en) Information processing device, information processing method, program, and biological signal measurement system
CN113288099B (en) Face attraction identification method based on electrocardiosignals and photoplethysmography
CN104765712A (en) Multifunctional electronic product integrator and terminal
Chin et al. An affective interaction system using virtual reality and brain-computer interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant