KR20190130808A - Emotion Classification Device and Method using Convergence of Features of EEG and Face - Google Patents


Info

Publication number
KR20190130808A
Authority
KR
South Korea
Prior art keywords
eeg
module
image
features
face image
Prior art date
Application number
KR1020180055398A
Other languages
Korean (ko)
Inventor
김신덕
권예훈
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단 filed Critical 연세대학교 산학협력단
Priority to KR1020180055398A priority Critical patent/KR20190130808A/en
Publication of KR20190130808A publication Critical patent/KR20190130808A/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/0476
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]

Abstract

Disclosed is an emotion classification device using feature fusion of an electroencephalogram (EEG) and a face image. The device includes: an EEG conversion module for converting an acquired EEG image into a spectrogram image; a face image acquisition module for acquiring a face image; a CNN network module for fusing feature information of the EEG conversion module's image and of the face image to output fused feature information; and an LSTM module for outputting changes in emotion features over time and classifying emotions using the output information of the CNN network module, wherein the CNN network module and the LSTM module are set by learning. According to the present invention, accurate emotion recognition is possible through feature fusion that takes the synchronization between the EEG features and the face-image features into account.

Description

Emotion Classification Device and Method using Convergence of Features of EEG and Face

The present invention relates to an apparatus and method for classifying emotion using feature fusion of an EEG and a face image.

Various kinds of data are used for emotion recognition, including images and biometric signals such as brain waves (EEG), human facial expressions, and heart rate. However, existing emotion recognition methods extract and combine features in completely independent modules, without considering the synchronization and correlation of the different biometric data and images.

Consequently, existing emotion recognition combines the individual feature information without considering its temporal relationships, which makes accurate emotion recognition difficult.

The present invention provides an emotion classification apparatus and method capable of more accurate emotion recognition through feature fusion that takes the synchronization between EEG features and face-image features into account.

To achieve the above object, according to an embodiment of the present invention, an emotion classification apparatus using feature fusion of an EEG and a face image is provided, comprising: an EEG conversion module for converting an acquired EEG image into a spectrogram image; a face image acquisition module for acquiring a face image; a CNN network module for fusing feature information of the EEG conversion module's image and of the face image to output fused feature information; and an LSTM module for outputting changes in emotion features over time and classifying emotions using the output information of the CNN network module, wherein the CNN network module and the LSTM module are set by learning.

According to the present invention, more accurate emotion recognition is possible through feature fusion that considers the synchronization between the EEG features and the face-image features.

Fig. 1 is a diagram showing an EEG image converted into spectrogram form according to an embodiment of the present invention.
Fig. 2 is a diagram showing an example of face images extracted at 1 fps.
Fig. 3 is a diagram showing a CNN network for feature fusion according to an embodiment of the present invention.
Fig. 4 is a diagram showing LSTM classification of fused feature data according to an embodiment of the present invention.

As the invention allows for various changes and can have numerous embodiments, particular embodiments are illustrated in the drawings and described in detail in the written description. However, this is not intended to limit the invention to specific embodiments; it should be understood to include all modifications, equivalents, and substitutes falling within the spirit and technical scope of the invention. Similar reference numerals are used for similar elements throughout the drawings.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.

The method proposed in the present invention includes two learning modules: a CNN module that extracts fused features from a face image and an EEG, and an LSTM module that learns how the extracted features change over time.

For feature fusion, the present invention converts the EEG signal into a spectrogram image via the Short-Time Fourier Transform (STFT).

Fig. 1 is a diagram showing an EEG image converted into spectrogram form according to an embodiment of the present invention.
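The STFT-based imaging step above can be sketched as follows. This is an illustrative example, not the patent's implementation: the sampling rate, window length, and synthetic signal are assumptions.

```python
import numpy as np
from scipy.signal import stft

# Convert a single-channel EEG trace into a spectrogram image via the
# Short-Time Fourier Transform (STFT). All parameters are assumed.
fs = 128                                   # assumed EEG sampling rate in Hz
t = np.arange(0, 10, 1 / fs)               # 10 seconds of synthetic signal
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz tone + noise

f, seg_t, Z = stft(eeg, fs=fs, nperseg=64)  # Z: complex STFT coefficients
spectrogram = np.abs(Z)                     # magnitude forms the image

print(spectrogram.shape)  # (frequency bins, time frames)
```

The magnitude array can then be rendered or saved as the spectrogram image that feeds the CNN branch.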

In addition, face images are acquired and extracted at 1 fps.

Fig. 2 is a diagram showing an example of face images extracted at 1 fps.
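Extracting at 1 fps amounts to keeping one frame per second of video. A minimal sketch, with a hypothetical helper name and an assumed native frame rate:

```python
# Hypothetical helper: indices of the frames to keep when sampling a video
# at one frame per second, given its native frame rate.
def frame_indices_at_1fps(video_fps, duration_s):
    """Return one frame index per elapsed second of video."""
    return [int(round(sec * video_fps)) for sec in range(int(duration_s))]

print(frame_indices_at_1fps(30, 5))  # -> [0, 30, 60, 90, 120]
```

With a video library such as OpenCV, these indices would select which decoded frames are passed on to face cropping.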

The feature data of the acquired face image and the feature data of the EEG are fused, and this is done through a feature fusion neural network.

Fig. 3 is a diagram showing a CNN network for feature fusion according to an embodiment of the present invention.

Referring to Fig. 3, the CNN network according to an embodiment of the present invention performs convolution filtering and max pooling independently on the face-image and EEG features, and then fuses the two features at a specific stage.

After this fusion, further max pooling and convolution filtering produce the final fused features.
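A minimal numpy sketch of the two-branch, fuse-by-concatenation idea in Fig. 3; the filter values, image sizes, and single-filter branches are assumptions, not the patent's actual network:

```python
import numpy as np
from scipy.signal import convolve2d

def conv_relu(img, kernel):
    """Single-filter 'valid' convolution followed by ReLU."""
    return np.maximum(convolve2d(img, kernel, mode="valid"), 0.0)

def max_pool2(x):
    """2x2 max pooling (odd trailing rows/columns are dropped)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
face = rng.standard_normal((33, 33))       # toy face image
spec = rng.standard_normal((33, 33))       # toy EEG spectrogram
k_face = rng.standard_normal((3, 3))       # stand-ins for learned filters
k_spec = rng.standard_normal((3, 3))

# Independent branches: convolution filtering + max pooling per modality.
f_face = max_pool2(conv_relu(face, k_face))   # -> (15, 15)
f_spec = max_pool2(conv_relu(spec, k_spec))   # -> (15, 15)

# Fusion: stack the two feature maps into one multi-channel tensor, which
# later convolution/pooling stages would then process jointly.
fused = np.stack([f_face, f_spec])
print(fused.shape)  # -> (2, 15, 15)
```

In a trained network each branch would hold many filters per layer, but the fusion point, channel-wise concatenation of the two modalities' feature maps, is the same.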

The CNN network shown in Fig. 3 is trained by comparison against reference data, which determines the convolution filters applied at each stage.

Once feature fusion is achieved through the CNN network, an LSTM is used to learn the changes in the emotion features over time and to derive the classification result.

Fig. 4 is a diagram showing LSTM classification of fused feature data according to an embodiment of the present invention.
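The temporal stage can be illustrated with a single numpy LSTM cell stepped over a sequence of fused feature vectors; the sizes and random weights are assumptions (in practice they would be learned), and a final classifier layer is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gates packed as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_hid, steps = 8, 4, 5               # assumed feature/hidden sizes
W = rng.standard_normal((4 * d_hid, d_in)) * 0.1
U = rng.standard_normal((4 * d_hid, d_hid)) * 0.1
b = np.zeros(4 * d_hid)

h, c = np.zeros(d_hid), np.zeros(d_hid)
for _ in range(steps):                     # iterate over fused features in time
    x = rng.standard_normal(d_in)          # one fused feature vector per step
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)  # final hidden state summarizing the temporal change -> (4,)
```

The final hidden state (or the per-step states) would feed a softmax layer that outputs the emotion class.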

While the present invention has been described above with reference to specific details, such as particular components, and to limited embodiments and drawings, these are provided only to aid a more general understanding of the invention, and the invention is not limited to the above embodiments; those of ordinary skill in the art to which the invention pertains may make various modifications and variations from this description. Therefore, the spirit of the present invention should not be limited to the described embodiments, and not only the claims set out below but also all equivalents and equivalent modifications thereof fall within the scope of the invention.

Claims (1)

An emotion classification apparatus using feature fusion of an EEG and a face image, comprising:
an EEG conversion module for converting an acquired EEG image into a spectrogram image;
a face image acquisition module for acquiring a face image;
a CNN network module for fusing feature information of the EEG conversion module's image and of the face image to output fused feature information; and
an LSTM module for outputting changes in emotion features over time and classifying emotions using the output information of the CNN network module,
wherein the CNN network module and the LSTM module are set by learning.




KR1020180055398A 2018-05-15 2018-05-15 Emotion Classification Device and Method using Convergence of Features of EEG and Face KR20190130808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020180055398A KR20190130808A (en) 2018-05-15 2018-05-15 Emotion Classification Device and Method using Convergence of Features of EEG and Face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020180055398A KR20190130808A (en) 2018-05-15 2018-05-15 Emotion Classification Device and Method using Convergence of Features of EEG and Face

Publications (1)

Publication Number Publication Date
KR20190130808A true KR20190130808A (en) 2019-11-25

Family

ID=68730753

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020180055398A KR20190130808A (en) 2018-05-15 2018-05-15 Emotion Classification Device and Method using Convergence of Features of EEG and Face

Country Status (1)

Country Link
KR (1) KR20190130808A (en)


Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200071807A (en) * 2018-11-30 2020-06-22 인하대학교 산학협력단 Human emotion state recognition method and system using fusion of image and eeg signals
CN111012336A (en) * 2019-12-06 2020-04-17 重庆邮电大学 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN111134666A (en) * 2020-01-09 2020-05-12 中国科学院软件研究所 Emotion recognition method of multi-channel electroencephalogram data and electronic device
CN111291614A (en) * 2020-01-12 2020-06-16 杭州电子科技大学 Child epilepsy syndrome classification method based on transfer learning multi-model decision fusion
CN111291614B (en) * 2020-01-12 2023-11-21 杭州电子科技大学 Child epileptic syndrome classification method based on migration learning multimode decision fusion
CN111184511A (en) * 2020-02-04 2020-05-22 西安交通大学 Electroencephalogram signal classification method based on attention mechanism and convolutional neural network
CN113312942A (en) * 2020-02-27 2021-08-27 阿里巴巴集团控股有限公司 Data processing method and equipment and converged network architecture
CN113312942B (en) * 2020-02-27 2024-05-17 阿里巴巴集团控股有限公司 Data processing method and device and converged network architecture system
CN111368686A (en) * 2020-02-27 2020-07-03 西安交通大学 Electroencephalogram emotion classification method based on deep learning
CN111950455A (en) * 2020-08-12 2020-11-17 重庆邮电大学 Motion imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model
CN111950455B (en) * 2020-08-12 2022-03-22 重庆邮电大学 Motion imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model
CN112022153A (en) * 2020-09-27 2020-12-04 西安电子科技大学 Electroencephalogram signal detection method based on convolutional neural network
CN112022153B (en) * 2020-09-27 2021-07-06 西安电子科技大学 Electroencephalogram signal detection method based on convolutional neural network
CN113128353A (en) * 2021-03-26 2021-07-16 安徽大学 Emotion sensing method and system for natural human-computer interaction
CN113128353B (en) * 2021-03-26 2023-10-24 安徽大学 Emotion perception method and system oriented to natural man-machine interaction
CN113297981A (en) * 2021-05-27 2021-08-24 西北工业大学 End-to-end electroencephalogram emotion recognition method based on attention mechanism
CN113297981B (en) * 2021-05-27 2023-04-07 西北工业大学 End-to-end electroencephalogram emotion recognition method based on attention mechanism
CN113361631A (en) * 2021-06-25 2021-09-07 海南电网有限责任公司电力科学研究院 Insulator aging spectrum classification method based on transfer learning
CN113598774B (en) * 2021-07-16 2022-07-15 中国科学院软件研究所 Active emotion multi-label classification method and device based on multi-channel electroencephalogram data
CN113598774A (en) * 2021-07-16 2021-11-05 中国科学院软件研究所 Active emotion multi-label classification method and device based on multi-channel electroencephalogram data
CN113768518A (en) * 2021-09-03 2021-12-10 中国地质大学(武汉) Electroencephalogram emotion recognition method and system based on multi-scale dispersion entropy analysis
CN113951883B (en) * 2021-11-12 2022-08-12 吕宝粮 Gender difference detection method based on electroencephalogram signal emotion recognition
CN113951883A (en) * 2021-11-12 2022-01-21 上海交通大学 Gender difference detection method based on electroencephalogram signal emotion recognition
CN114129175A (en) * 2021-11-19 2022-03-04 江苏科技大学 LSTM and BP based motor imagery electroencephalogram signal classification method
CN114461069A (en) * 2022-02-07 2022-05-10 上海图灵智算量子科技有限公司 Quantum CNN-LSTM-based emotion recognition method
CN117137488A (en) * 2023-10-27 2023-12-01 吉林大学 Auxiliary identification method for depression symptoms based on electroencephalogram data and facial expression images
CN117137488B (en) * 2023-10-27 2024-01-26 吉林大学 Auxiliary identification method for depression symptoms based on electroencephalogram data and facial expression images

Similar Documents

Publication Publication Date Title
KR20190130808A (en) Emotion Classification Device and Method using Convergence of Features of EEG and Face
Anina et al. Ouluvs2: A multi-view audiovisual database for non-rigid mouth motion analysis
Chou et al. NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus
Riaz et al. Inter comparison of classification techniques for vowel speech imagery using EEG sensors
Ilyas et al. AVFakeNet: A unified end-to-end Dense Swin Transformer deep learning model for audio–visual​ deepfakes detection
US11869524B2 (en) Audio processing method and apparatus, computer device, and storage medium
Dhuheir et al. Emotion recognition for healthcare surveillance systems using neural networks: A survey
CN109871831B (en) Emotion recognition method and system
Cid et al. A novel multimodal emotion recognition approach for affective human robot interaction
Urmee et al. Real-time bangla sign language detection using xception model with augmented dataset
JP2019200671A (en) Learning device, learning method, program, data generation method, and identification device
Freitas et al. An introduction to silent speech interfaces
US11328533B1 (en) System, method and apparatus for detecting facial expression for motion capture
KR101714708B1 (en) Brain-computer interface apparatus using movement-related cortical potential and method thereof
Islam et al. Cognitive state estimation by effective feature extraction and proper channel selection of EEG signal
Roshdy et al. Machine Empathy: Digitizing Human Emotions
Freitas et al. Multimodal corpora for silent speech interaction
CN108831472B (en) Artificial intelligent sounding system and sounding method based on lip language recognition
Wang et al. Simulation experiment of bci based on imagined speech eeg decoding
Shandiz et al. Improving neural silent speech interface models by adversarial training
Hernandez-Galvan et al. A prototypical network for few-shot recognition of speech imagery data
Alashkar et al. AI-vision towards an improved social inclusion
Guodong et al. Multi feature fusion EEG emotion recognition
CN114492579A (en) Emotion recognition method, camera device, emotion recognition device and storage device
Haq et al. Using lip reading recognition to predict daily Mandarin conversation