CN113208635B - Emotion electroencephalogram signal induction method based on conversation

Emotion electroencephalogram signal induction method based on conversation

Info

Publication number
CN113208635B
Authority
CN
China
Prior art keywords
emotion
voice
experiment
electroencephalogram
conversation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110471023.0A
Other languages
Chinese (zh)
Other versions
CN113208635A (en)
Inventor
畅江
王莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi University
Original Assignee
Shanxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi University
Priority to CN202110471023.0A
Publication of CN113208635A
Application granted
Publication of CN113208635B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Developmental Disabilities (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a conversation-based method for inducing emotional electroencephalogram (EEG) signals. It addresses the main technical shortcoming of existing induction methods: EEG signals evoked by isolated emotional speech do not match real emotional experience. The technical scheme comprises the following steps: 1) collection of emotion-inducing material; 2) experiment design, in which a 'WeChat'-style voice chat is used so that participants, through role playing opposite a computer-played character, simulate a chat in a realistic scene, their genuine emotions are induced, and EEG signals consistent with the labels of the emotion-inducing material are obtained; 3) experimental induction and acquisition, which uses a conversation combining speech and text: before the experiment begins, the participant sits in front of a computer wearing an EEG acquisition cap, and each time the participant hears an utterance from the computer-played character, the participant rates the heard emotional speech on the emotion dimensions.

Description

Emotion electroencephalogram signal induction method based on conversation
Technical Field
The invention belongs to the technical fields of brain science and affective computing, and in particular relates to a conversation-based method for inducing emotional electroencephalogram (EEG) signals.
Background
Emotional EEG signals can be induced in many ways, and as a result they differ greatly in emotion types, labeling schemes, and other respects. Induction is currently based mainly on visual, auditory, or combined audio-visual stimulation. Speech induction differs from visual presentation in that the stimulus must unfold over time: the brain can only understand what an audio signal expresses after the audio has been played in full. In emotional speech materials, the acoustic parameters of different utterances (fundamental frequency, intensity, duration, and so on) are generally difficult to unify, and because different studies select different inducing materials, experimental results are hard to compare, so cognitive analyses of specific emotions in speech remain inconsistent. In addition, studies differ in how many and which emotion categories they select. Many researchers adopt discrete emotion models, for example: two emotions (happy, sad; or positive, negative), three emotions (happy, angry, sad), four emotions (angry, disgusted, fearful, happy), six emotions (angry, disgusted, fearful, happy, sad, surprised), and so on. Although simple and easy to understand, discrete emotion models are severely limited in their capacity to describe emotion. Human emotion is so complex that the number of categories in a discrete model is hard to fix: however many categories are used, they cannot cover all emotional states. Moreover, owing to regional and cultural differences, the same emotion word may be translated differently, for example as 'Glad' in one rendering and 'Happy' in another. Continuous dimensional theories of emotion avoid this problem and can describe the relations among different emotion categories from multiple angles; for example, the widely used DEAP database labels emotional EEG signals on four continuous dimensions (the three VAD dimensions plus liking). Continuous dimensions, however, are hard to grasp during manual labeling: in the three-dimensional continuous VAD model (V for valence, A for arousal, D for dominance), experimental participants have difficulty understanding from the dimension names alone what emotional meaning each dimension represents.
Finally, most speech-based induction of EEG signals uses isolated single sentences or single words. Even when participants are induced with the same emotional speech, the lack of context means there is no transfer or transition of emotional information, so different participants do not receive exactly the same information. Because the emotional utterances are presented in isolation, participants receive them passively and lack any sense of involvement during the experiment. This differs from how emotional speech is processed in real scenes, making it difficult to obtain the desired EEG signals. In fact, emotional speech occurs in conversation between two or more people in a particular scene; in real life, inducing EEG signals with isolated emotional speech does not match real emotional experience.
To address this problem and obtain more realistic and effective EEG signals, the present method induces participants through Chinese-language 'conversation'. The experimental procedure resembles a 'WeChat' voice conversation: scene information for the conversation is added as background for the experiment, and the participant plays a specific role in the conversation, actively taking part in the emotional exchange so that their own emotions are induced. At the same time, the emotion-inducing material is labeled during data acquisition, yielding emotional EEG data consistent with the labeled emotions. These data can be used to analyze and recognize emotional EEG signals and to explore the brain's cognitive processing of emotional speech in real conversational scenes.
Disclosure of Invention
The invention aims to solve the technical problem that, in existing methods for inducing emotional EEG signals, signals induced by isolated emotional speech do not match real emotional experience, and provides a conversation-based method for inducing emotional EEG signals.
To solve this technical problem, the invention adopts the following technical scheme:
An emotion electroencephalogram signal induction method based on conversation comprises the following specific steps:
1) Collection of emotion-inducing materials
Emotion-inducing materials are collected as short spoken dialogues with strong emotional color taken from online videos or audio novels. A number of short dialogues are gathered according to the number of scenes, the number of exchanges, and the speech duration; they are screened for content and speech quality, and the screened utterances are coded one by one. Emotional labeling of the speech is completed during EEG acquisition, yielding emotional speech data with both continuous and discrete labels;
2) Design of experiments
The experiment follows a 'WeChat'-style voice chat and is designed separately for a male group and a female group according to participant gender. Through role playing opposite a character played by the computer, participants simulate a chat in a realistic scene; their genuine emotions are induced, and EEG signals consistent with the labels of the emotion-inducing material are obtained;
3) Experimental induction and collection
Experimental induction and collection use a conversation combining speech and text: the role assumed by the participant is set to text input, while the computer-played role interacting with the participant is set to voice input. Before the experiment begins, the participant sits in front of a computer wearing an EEG acquisition cap. During the experiment, each time the participant hears an utterance from the computer-played role, the computer screen automatically presents an emotion dimension scoring table and the participant rates the heard emotional speech on the emotion dimensions; this repeats until the experiment ends.
Further, screening by speech content and quality considers evaluation factors and rating levels: dialogue groups whose emotional expression intensity, situational awareness, clarity, naturalness, fluency, and background cleanliness are all scored above 3 are selected as emotion-inducing material, so as to guarantee the quality of the emotional speech induction database and prepare for collecting valid emotional EEG data. The rating levels are excellent, good, medium, poor, and bad, scored 5, 4, 3, 2, and 1 respectively.
Further, the emotional labeling is based on the Self-Assessment Manikin (SAM): a VAD emotion dimension scoring table with text explanations is used to label the emotion in the speech both continuously and discretely, with the text explanations serving as the discrete labels; both kinds of labeling are performed during EEG acquisition.
Further, scores in the VAD emotion dimension scoring table range from 1 to 9.
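To make the data these steps produce concrete, the following Python sketch shows one possible record for a single screened, coded utterance together with its labels. All field names are illustrative assumptions: the method specifies what information is collected (scene and sentence coding, quality scores, VAD ratings, a discrete label) but not any storage format.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class EmotionUtterance:
    """One coded sentence from a short emotional voice dialogue (illustrative)."""
    scene_id: int                    # which dialogue scene the sentence belongs to
    sentence_id: int                 # position within the original dialogue
    audio_path: str                  # the screened and coded voice file
    duration_s: float                # each sentence lasts at most 3 s
    quality_scores: Dict[str, int]   # six evaluation factors, each rated 1-5
    valence: Optional[int] = None    # V, 1-9, collected during EEG acquisition
    arousal: Optional[int] = None    # A, 1-9
    dominance: Optional[int] = None  # D, 1-9
    discrete_label: Optional[str] = None  # taken from the scale's text explanation
```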
Compared with the prior art, the invention has the following beneficial effects:
1. The 'WeChat'-style voice conversation solves the insufficient emotion induction that the earlier passive presentation of isolated emotional stimuli produced during EEG acquisition;
2. Role playing quickly induces the participant's emotional state, so that emotional expression is more genuine and natural;
3. Labeling the emotion-inducing material during acquisition yields multimodal emotion-induced EEG signals consistent with the material's labels, providing better data support for affective computing and research in related fields.
Drawings
FIG. 1 is a flow chart of the experimental design of the present invention;
in the figure: (a) the participant is female; (b) the participant is male.
Detailed Description
The present invention will be further described with reference to the following examples.
In this embodiment, an emotion electroencephalogram signal induction method based on conversation comprises the following specific steps:
1) Collection of emotion-inducing materials
In real life, emotional speech typically occurs in conversation between two or more people in a particular scene, such as a quarrel, a debate, a video conference, or a WeChat voice chat. To enrich the emotional speech content, short spoken dialogues with strong emotional color are collected from online videos or audio novels to build Chinese conversation-based emotional speech induction data. Short dialogues are selected from no fewer than 20 different scenes, with at most 50 sentences per scene and each sentence lasting no more than 3 s. The short dialogues and each emotional utterance within each scene are numbered, and during EEG acquisition the emotion in the speech is labeled both continuously and discretely using a VAD emotion dimension scoring table with text explanations, yielding emotional speech data with continuous and discrete labels. In addition, before the experiment begins, the emotional dialogue content is screened according to evaluation factors and rating levels: dialogue groups whose emotional expression intensity, situational awareness, clarity, naturalness, fluency, and background cleanliness are all scored above 3 are selected as emotion-inducing material. The rating levels are excellent (5), good (4), medium (3), poor (2), and bad (1), which guarantees the quality of the emotional speech database and prepares for the collection of valid emotional EEG data.
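A minimal sketch of this screening rule, assuming each candidate dialogue carries a dictionary of the six factor scores (the factor keys, dictionary shape, and function names below are hypothetical):

```python
QUALITY_FACTORS = [
    "emotional_intensity", "situational_awareness", "clarity",
    "naturalness", "fluency", "background_cleanliness",
]

# Rating levels: excellent = 5, good = 4, medium = 3, poor = 2, bad = 1.
def passes_screening(scores: dict) -> bool:
    """Keep a dialogue group only when every evaluation factor scores above 3."""
    return all(scores.get(factor, 0) > 3 for factor in QUALITY_FACTORS)

def screen_corpus(dialogues: list) -> list:
    """Apply the quality rule plus the corpus-level constraints from the text:
    at least 20 scenes, at most 50 sentences per scene, each sentence <= 3 s."""
    kept = [
        d for d in dialogues
        if passes_screening(d["scores"])
        and len(d["sentences"]) <= 50
        and all(s["duration_s"] <= 3.0 for s in d["sentences"])
    ]
    assert len({d["scene_id"] for d in kept}) >= 20, "need dialogues from >= 20 scenes"
    return kept
```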
2) Design of experiments
Participants' emotions are induced through emotional voice conversation in a 'WeChat'-style chat. The collected voice dialogues are segmented into single sentences, and the segmented sentences are embedded in stimulus-presentation software in the original conversational order, as shown in FIG. 1. To collect emotional EEG signals close to those of a real environment, a 'chat' background introduction is added to each short dialogue before the experiment begins. At the same time, two different experiments are designed according to participant gender, one for the male group and one for the female group. For example, when the participant is male, he may play the husband while the computer plays the wife, and the experimental task is to simulate a quarrel between the couple. Through this role-playing task the participant's genuine emotions are induced, and EEG signals consistent with the labels of the emotion-inducing material are finally obtained.
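As an illustration of how the segmented dialogue might be scheduled, the sketch below interleaves the computer's spoken lines, the rating prompt that follows each of them, and the participant's typed lines while preserving the original conversational order. The patent does not name its stimulus-presentation software, so every name and the assumed sentence-record shape are hypothetical:

```python
def assign_roles(participant_gender: str):
    """Gender-based role pairing, following the husband/wife example in the
    description; other dialogues would define their own (computer, participant)
    role pairs."""
    return ("wife", "husband") if participant_gender == "male" else ("husband", "wife")

def build_trial_sequence(dialogue, participant_gender: str):
    """Turn one dialogue (a list of sentence records with .speaker, .audio_path
    and .text attributes - an assumed shape) into an ordered list of trials."""
    computer_role, _participant_role = assign_roles(participant_gender)
    trials = []
    for sentence in dialogue:  # the original conversational order is preserved
        if sentence.speaker == computer_role:
            trials.append({"type": "play_audio", "stimulus": sentence.audio_path})
            trials.append({"type": "vad_rating"})  # a scale follows every heard utterance
        else:
            trials.append({"type": "text_reply", "prompt": sentence.text})
    return trials
```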
3) Experimental induction and collection
The experiment uses a conversation combining speech and text, with the participant's role set to text input and the computer-played role interacting with the participant set to voice input. Before the experiment begins, the participant sits in front of a computer wearing an EEG acquisition cap. During the experiment, each time the participant hears an utterance from the computer-played role, the computer screen automatically presents the VAD emotion dimension scoring table and the participant rates the heard speech on the emotion dimensions; this repeats until the experiment ends.
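Under the same assumptions, the induction-and-collection loop itself could look like the following sketch, where present_audio, show_vad_scale, and collect_text stand in for whatever stimulus-presentation and EEG-recording software is actually used:

```python
def run_session(trials, present_audio, show_vad_scale, collect_text):
    """Drive one session: play each computer utterance, then automatically
    present the VAD scoring table; the participant answers their own role's
    lines in text. EEG is assumed to be recorded continuously in parallel."""
    ratings = []
    for trial in trials:
        if trial["type"] == "play_audio":
            present_audio(trial["stimulus"])
        elif trial["type"] == "vad_rating":
            ratings.append(show_vad_scale())  # returns (V, A, D), each scored 1-9
        elif trial["type"] == "text_reply":
            collect_text(trial["prompt"])     # participant types the role's reply
    return ratings                            # one rating per heard utterance
```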
Table 1. VAD emotion dimension scoring table
[Table 1 is reproduced only as an image in the original publication (Figure BDA0003045337880000051); it presents the SAM-based 1-9 scales for valence, arousal, and dominance together with their text explanations.]
Emotional speech is scored on the continuous VAD dimensions using the Self-Assessment Manikin (SAM), with scores from 1 to 9. To help participants better understand the meaning of the VAD dimensions, the method adds text explanations to the SAM scale, and these explanations double as discrete emotion labels, as shown in Table 1. Participants label and score each utterance on three dimensions: valence (V), arousal (A), and dominance (D). For example, valence (V) measures how pleasant the speech makes you feel: the more pleasant, the closer the score is to 9; the less pleasant, the closer to 1. Scores of 2, 4, 6, and 8 let the participant make finer judgments between adjacent states. With this scale, emotion-inducing material and emotional EEG signals with both continuous and discrete labels can be collected.
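Because Table 1 survives only as an image, its exact text explanations cannot be reproduced here, but the valence scoring rule just described can be sketched as follows; the anchor words are placeholders, not the patent's wording:

```python
def interpret_valence(v: int) -> str:
    """Read a 1-9 valence score; odd scores 1/3/5/7/9 carry the text anchors,
    while even scores 2/4/6/8 mark finer judgments between adjacent anchors."""
    if not 1 <= v <= 9:
        raise ValueError("VAD scores run from 1 to 9")
    if v <= 3:
        return "unpleasant"   # placeholder anchor, closer to 1
    if v <= 6:
        return "neutral"      # placeholder anchor, mid-scale
    return "pleasant"         # placeholder anchor, closer to 9
```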

Claims (4)

1. An emotion electroencephalogram signal induction method based on conversation, characterized by comprising the following specific steps:
1) Collection of emotion-inducing materials
Emotion-inducing materials are collected as short spoken dialogues with strong emotional color taken from online videos or audio novels. A number of short dialogues are gathered according to the number of scenes, the number of exchanges, and the speech duration; they are screened for content and speech quality, and the screened utterances are coded one by one. Emotional labeling of the speech is completed during EEG acquisition, yielding emotional speech data with both continuous and discrete labels;
2) Design of experiments
The experiment follows a 'WeChat'-style voice chat and is designed separately for a male group and a female group according to participant gender. Through role playing opposite a character played by the computer, participants simulate a chat in a realistic scene; their genuine emotions are induced, and EEG signals consistent with the labels of the emotion-inducing material are obtained. During labeling, a VAD continuous emotion scale with text explanations is used so that participants better understand the meaning of the VAD dimensions; the text explanations also serve as discrete emotion labels, and emotion-inducing material and emotional EEG signals with both continuous and discrete labels are collected;
3) Experimental induction and collection
Experimental induction and collection use a conversation combining speech and text: the role assumed by the participant is set to text input, while the computer-played role interacting with the participant is set to voice input. Before the experiment begins, the participant sits in front of a computer wearing an EEG acquisition cap. During the experiment, each time the participant hears an utterance from the computer-played role, the computer screen automatically presents an emotion dimension scoring table and the participant rates the heard emotional speech on the emotion dimensions; this repeats until the experiment ends.
2. The emotion electroencephalogram signal induction method based on conversation as claimed in claim 1, wherein: screening by speech content and quality considers evaluation factors and rating levels; dialogue groups whose emotional expression intensity, situational awareness, clarity, naturalness, fluency, and background cleanliness are all scored above 3 are selected as emotion-inducing material, so as to guarantee the quality of the emotional speech induction database and prepare for collecting valid emotional EEG data; the rating levels are excellent, good, medium, poor, and bad, scored 5, 4, 3, 2, and 1 respectively.
3. The emotion electroencephalogram signal induction method based on conversation as claimed in claim 1, wherein: the emotional labeling is based on the Self-Assessment Manikin; a VAD emotion dimension scoring table with text explanations is used to label the emotion in the speech both continuously and discretely, the text explanations serving as the discrete labels, and both kinds of labeling are performed during EEG acquisition.
4. The emotion electroencephalogram signal induction method based on conversation as claimed in claim 3, wherein: scores in the VAD emotion dimension scoring table range from 1 to 9.
CN202110471023.0A 2021-04-29 2021-04-29 Emotion electroencephalogram signal induction method based on conversation Active CN113208635B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110471023.0A CN113208635B (en) 2021-04-29 2021-04-29 Emotion electroencephalogram signal induction method based on conversation


Publications (2)

Publication Number Publication Date
CN113208635A (en) 2021-08-06
CN113208635B (en) 2022-05-20

Family

ID=77089916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110471023.0A Active CN113208635B (en) 2021-04-29 2021-04-29 Emotion electroencephalogram signal induction method based on conversation

Country Status (1)

Country Link
CN (1) CN113208635B (en)


Also Published As

Publication number Publication date
CN113208635A (en) 2021-08-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant