CN116898439A - Emotion recognition method and system for analyzing brain waves by deep learning model - Google Patents


Info

Publication number
CN116898439A
Authority
CN
China
Prior art keywords
electroencephalogram
gradient
deep learning
learning model
emotion recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310826965.5A
Other languages
Chinese (zh)
Inventor
黄辰
张丽
王时绘
张龑
唐博
黄明
宋林
宋建华
吴伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Integrated Traditional Chinese and Western Medicine Hospital (Hubei Occupational Disease Hospital)
Hubei University
Original Assignee
Hubei Integrated Traditional Chinese and Western Medicine Hospital (Hubei Occupational Disease Hospital)
Hubei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Integrated Traditional Chinese and Western Medicine Hospital (Hubei Occupational Disease Hospital) and Hubei University
Priority to CN202310826965.5A
Publication of CN116898439A
Legal status: Pending

Classifications

    • A61B 5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/372: Analysis of electroencephalograms
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V 10/764: Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/82: Recognition or understanding using neural networks
    • G06V 20/695: Microscopic objects, e.g. biological cells: preprocessing, e.g. image segmentation
    • G06V 20/698: Microscopic objects, e.g. biological cells: matching; classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychology (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Psychiatry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Social Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for recognizing emotion by analyzing brain waves with a deep learning model. The method first collects electroencephalogram (EEG) signals and maps the features of the EEG signals onto an electroencephalogram topographic map. Gradient features are then extracted from the topographic map and finally input into a preset deep learning model to obtain an emotion recognition result. Visualizing the EEG signals as images and extracting gradient features from them avoids the problems of single features and redundant data in raw EEG signals. Converting the EEG emotion signals into electroencephalogram topographic maps used as the data for classification and recognition makes the classification features of the topographic maps more diverse, improving both the accuracy and the efficiency of emotion recognition.

Description

Emotion recognition method and system for analyzing brain waves by deep learning model
Technical Field
The invention relates to the technical field of emotion recognition, in particular to an emotion recognition method and system for analyzing brain waves by using a deep learning model.
Background
In recent years, brain research has advanced greatly, and various fields related to the human brain have attracted wide attention. High-level human behaviors such as emotion, thinking, learning, perception, and language are inseparable from the brain. In the fields of artificial intelligence and brain-computer interfaces, research on how the human brain regulates these high-level behaviors has very important scientific value and significance.
Therefore, a method for recognizing emotion based on an electroencephalogram signal is required.
Disclosure of Invention
The invention provides a method and a system for recognizing emotion by analyzing brain waves with a deep learning model, which can recognize emotion based on electroencephalogram signals.
The invention provides an emotion recognition method for analyzing brain waves by using a deep learning model, which comprises the following steps:
collecting brain electrical signals;
mapping the characteristics of the electroencephalogram signals to an electroencephalogram map;
extracting gradient features from the electroencephalogram map;
and inputting the gradient characteristics into a preset deep learning model to obtain emotion recognition results.
Specifically, the mapping the characteristics of the electroencephalogram to an electroencephalogram map includes:
and mapping the characteristics of the electroencephalogram signals to the space position of the guide electrode of the brain-computer interface equipment to generate an electroencephalogram topographic map.
Specifically, after mapping the characteristics of the electroencephalogram signal onto an electroencephalogram map, the method further includes:
and if the coverage of the generated electroencephalogram with a single color is less than 3/5 of that of the picture, performing a subsequent process.
Specifically, the extracting gradient features from the electroencephalogram map includes:
by the formula H (x, y) =h (x, y) gamma Carrying out normalization processing of color and gamma space on the electroencephalogram topographic map; wherein H (x, y) represents a pixel value of a pixel point in the electroencephalogram at (x, y), gamma is a correction value, and H (x, y) represents a pixel value of a processed pixel point at (x, y);
by the formulaCalculating to obtain gradient value G of pixel point (x, y) in the electroencephalogram topographic map in horizontal direction x Gradient value G of (x, y) and pixel point (x, y) in vertical direction y (x,y);
Dividing the electroencephalogram into a plurality of units of cells, and constructing a gradient direction histogram of each cell unit according to the gradient value of the pixel points (x, y) in the cells in the horizontal direction and the gradient value in the vertical direction;
and combining the gradient direction histograms of the cell units to obtain an overall gradient direction histogram.
Specifically, after the electroencephalogram signal is acquired, the method further comprises:
and denoising, smoothing and filtering the electroencephalogram signals and extracting frequency domain features.
The invention also provides an emotion recognition system for analyzing brain waves by using the deep learning model, which comprises the following steps:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals;
the electroencephalogram generating module is used for mapping the characteristics of the electroencephalogram signals to an electroencephalogram;
the gradient feature extraction module is used for extracting gradient features from the electroencephalogram;
and the emotion recognition module is used for inputting the gradient characteristics into a preset deep learning model to obtain an emotion recognition result.
Specifically, the electroencephalogram generating module is specifically configured to map the characteristics of the electroencephalogram to a spatial position of a lead of a brain-computer interface device to generate an electroencephalogram.
Specifically, the system further comprises:
And the data detection module is used for performing the subsequent process if the single-color coverage of the generated electroencephalogram topographic map is less than 3/5 of the picture.
Specifically, the gradient feature extraction module includes:
the normalization unit, which is used for carrying out normalization of color and gamma space on the electroencephalogram topographic map by the formula H(x, y) = h(x, y)^γ; wherein h(x, y) represents the pixel value of the pixel point at (x, y) in the electroencephalogram topographic map, γ is a correction value, and H(x, y) represents the pixel value of the processed pixel point at (x, y);
the gradient value calculating unit, which is used for calculating, by the formulas G_x(x, y) = H(x+1, y) - H(x-1, y) and G_y(x, y) = H(x, y+1) - H(x, y-1), the gradient value G_x(x, y) of the pixel point (x, y) in the electroencephalogram topographic map in the horizontal direction and the gradient value G_y(x, y) of the pixel point (x, y) in the vertical direction;
the gradient direction histogram construction unit, which is used for dividing the electroencephalogram topographic map into a plurality of cell units and constructing a gradient direction histogram of each cell unit according to the gradient values of the pixel points (x, y) within each cell in the horizontal direction and the vertical direction;
and the gradient direction histogram merging unit is used for merging the gradient direction histograms of the cell units to obtain an overall gradient direction histogram.
Specifically, the system further comprises:
and the data preprocessing module is used for denoising, smoothing filtering and frequency domain feature extraction processing of the electroencephalogram signals.
One or more technical solutions provided by the invention have at least the following technical effects or advantages:
the method comprises the steps of firstly collecting the brain electrical signals and then mapping the characteristics of the brain electrical signals to an electroencephalogram. And extracting gradient features from the electroencephalogram topographic map, and finally inputting the gradient features into a preset deep learning model to obtain an emotion recognition result. The method for visualizing and extracting the gradient characteristics of the images through the electroencephalogram signals is single in characteristics and redundant in data. The electroencephalogram emotion signals are converted into the electroencephalogram topographic maps to be used as data for classification and identification, so that the classification characteristics of the electroencephalogram topographic maps are more various, and the accuracy and the identification efficiency of emotion identification are improved.
In addition, the invention has the following advantages:
1. Selection of the electroencephalogram frequency band: the topographic maps are drawn from the gamma-band data of the electroencephalogram emotion data set, because after the differential entropy features of the gamma band are extracted, the discrimination between frequency bands is most obvious; selecting the gamma-band data from the whole segment of electroencephalogram data as the data for drawing the electroencephalogram topographic map further improves the emotion recognition accuracy.
2. Drawing the electroencephalogram topographic map by sampling points: after the original electroencephalogram signals are processed, electroencephalogram data of higher quality are obtained; then, for each record in the electroencephalogram emotion data set, the data of different emotions and different time periods are drawn into topographic maps according to the sampling-point rule of the drawing tool and the spatial positions of the device's lead electrodes. The spatial position information of the lead electrodes produces different colors, textures, and curves in the topographic maps for different emotions, which are used for classification and recognition and further improve the emotion recognition accuracy.
Drawings
Fig. 1 is a flowchart of a method for emotion recognition by analyzing brain waves using a deep learning model according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an emotion recognition method for analyzing brain waves by using a deep learning model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of model training and emotion recognition in an emotion recognition method for analyzing brain waves using a deep learning model according to an embodiment of the present invention;
fig. 4 is a block diagram of an emotion recognition system for analyzing brain waves using a deep learning model according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method and a system for recognizing emotion by analyzing brain waves with a deep learning model, which can recognize emotion based on electroencephalogram signals.
The technical solution in the embodiment of the invention aims to achieve these technical effects; the overall idea is as follows:
Preprocessing and feature extraction are carried out on the electroencephalogram emotion data set, mainly comprising preprocessing, down-sampling, smoothing, feature extraction, and normalization of the original electroencephalogram signal data; the electroencephalogram signal features are then mapped to the spatial positions of the lead electrodes of the brain-computer interface to generate an electroencephalogram topographic map, so that the topographic map carries multiple emotion recognition features. In particular, each brain functional region theoretically maps to a certain class of emotion, so the leads of the brain-computer interface device also carry partial emotion recognition features between different brain regions. The topographic maps of different emotions differ in color depth, texture, and curve: the color coverage is widest in the neutral state, relatively smaller for sadness and fear, and smallest for happiness, while the coverage range and texture of sadness and fear are similar. This avoids the problems of redundancy and single features in the electroencephalogram data. Gradient features of the electroencephalogram topographic map are then extracted as the classification features of a support vector machine, and finally the gradient features are input into the support vector machine for training and testing to obtain the classification result of emotion recognition.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
Referring to fig. 1 and 2, the emotion recognition method for analyzing brain waves by using a deep learning model according to an embodiment of the present invention includes:
step S110: collecting brain electrical signals;
Since the original electroencephalogram data may contain artifacts, interference, noise, and other factors, eye-movement and eye-muscle/electro-oculographic interference is removed from the electroencephalogram by independent component analysis, ensuring good correlation between the electroencephalogram data and the person's emotion. After the electroencephalogram signals are acquired, the method therefore further comprises:
and denoising, smoothing and filtering the electroencephalogram signals and extracting frequency domain features.
Specifically, a linear dynamical system algorithm is used to smooth the data, converting irregular waveforms into regular waveforms, and a differential entropy algorithm is used to extract the frequency-domain features of the electroencephalogram signals.
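As a minimal illustration of this feature extraction step, the sketch below computes band-limited differential entropy under the usual Gaussian assumption, DE = 0.5 * ln(2 * pi * e * variance). The sampling rate, band edges, channel count, and window length are assumptions for the sketch, not values fixed by the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    # Band-pass filter one channel; cutoffs are normalized to Nyquist.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def differential_entropy(x):
    # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * sigma^2).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

fs = 200                                    # assumed sampling rate
eeg = np.random.randn(62, fs * 4)           # placeholder: 62 channels, 4 s
gamma_band = np.stack([bandpass(ch, 30.0, 50.0, fs) for ch in eeg])
de = np.array([differential_entropy(ch) for ch in gamma_band])
print(de.shape)                             # one DE value per channel
```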
In order to accelerate the gradient descent speed during computation and make it easier to find the global optimal solution, the electroencephalogram signals are normalized.
In order to ensure that the collected emotion features are most prominent, all electroencephalogram emotion data are down-sampled and limited to the 1-75 Hz frequency range.
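A hedged sketch of this step with the MNE library follows; the original sampling rate, the 200 Hz resampling target, and the channel names are illustrative assumptions, while the 1-75 Hz band comes from the text.

```python
import numpy as np
import mne

fs_orig = 1000.0
data = np.random.randn(62, int(fs_orig * 60))          # placeholder minute of EEG
info = mne.create_info([f"EEG{i:03d}" for i in range(62)],
                       sfreq=fs_orig, ch_types="eeg")
raw = mne.io.RawArray(data, info)
raw.resample(200.0)                  # down-sample the recording
raw.filter(l_freq=1.0, h_freq=75.0)  # keep only the 1-75 Hz band
```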
Step S120: mapping the characteristics of the electroencephalogram signals to an electroencephalogram topographic map;
Specifically, mapping the features of the electroencephalogram signals onto the electroencephalogram topographic map comprises:
And mapping the features of the electroencephalogram signals to the spatial positions of the lead electrodes of the brain-computer interface device to generate an electroencephalogram topographic map.
Specifically, the leads at the scalp positions are set according to the spatial positions of the leads of the brain-computer interface device; with this arrangement, the emotion features of the brain functional regions can be mapped onto the electroencephalogram topographic map, and corresponding channel information is set for different data sets. Based on the spatial position information of the device's lead electrodes, each segment of electroencephalogram emotion data is mapped using a batch-processing method, and the data of different time periods and different emotions are output and saved into corresponding folders.
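As an illustration of drawing one such map from per-channel features, the sketch below uses MNE with a standard 10-20 montage; the channel list, montage, and file name are assumptions standing in for the actual lead layout of the brain-computer interface device.

```python
import numpy as np
import matplotlib.pyplot as plt
import mne

ch_names = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2"]
info = mne.create_info(ch_names, sfreq=200.0, ch_types="eeg")
info.set_montage(mne.channels.make_standard_montage("standard_1020"))

features = np.random.rand(len(ch_names))   # placeholder per-channel DE values
fig, ax = plt.subplots()
mne.viz.plot_topomap(features, info, axes=ax, show=False)
fig.savefig("topomap_sample.png")          # image later used for HOG extraction
```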
In order to check whether the electroencephalogram topographic map contains abnormal data, after mapping the features of the electroencephalogram signals onto the electroencephalogram topographic map, the method further comprises:
If the single-color coverage of the generated electroencephalogram topographic map is less than 3/5 of the picture, the data are normal, and the subsequent process is performed.
Step S130: extracting gradient characteristics from an electroencephalogram map;
Specifically, extracting the gradient features from the electroencephalogram topographic map comprises:
By the formula H(x, y) = h(x, y)^γ, carrying out normalization of color and gamma space on the electroencephalogram topographic map, which effectively reduces the negative effects caused by shadows and color changes in the image; wherein h(x, y) represents the pixel value of the pixel point at (x, y) in the electroencephalogram topographic map, γ is a correction value, here set to 0.5, and H(x, y) represents the pixel value of the pixel point at (x, y) after processing;
By the formulas G_x(x, y) = H(x+1, y) - H(x-1, y) and G_y(x, y) = H(x, y+1) - H(x, y-1), calculating the gradient value G_x(x, y) of the pixel point (x, y) in the electroencephalogram topographic map in the horizontal direction and the gradient value G_y(x, y) of the pixel point (x, y) in the vertical direction;
Dividing an electroencephalogram into cells of a plurality of units, and constructing a gradient direction histogram of each cell unit according to gradient values of pixel points (x, y) in the cells in the horizontal direction and gradient values in the vertical direction; cells refer to a collection of pixels, e.g., each cell is divided into 6 x 6 pixels. Using the gradient value G in the horizontal and vertical directions x (x,y)、G y The (x, y) constructed gradient direction histogram represents the relationship between the cells, thereby realizing numbering of the local graphic range and ensuring the accuracy of the shape and appearance of the objects in the graphic.
And combining the gradient direction histograms of the cell units to obtain an overall gradient direction histogram and generating the gradient feature file corresponding to the electroencephalogram topographic map. By combining small cell units into larger blocks, the HOG descriptor becomes a vector composed of the direction histograms of the cell units in each interval; the intervals can overlap, so each cell unit contributes to the final descriptor several times. Forming large intervals in this way reduces the influence of elements such as color, edge, and shadow, improving the quality of the electroencephalogram topographic map data.
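For completeness, the binning quantities implied by the histogram construction step (standard HOG definitions, not written out explicitly in the source) are:

```latex
G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2}   % gradient magnitude
\qquad
\theta(x, y) = \arctan\frac{G_y(x, y)}{G_x(x, y)}   % gradient orientation
```

Each pixel votes for the orientation bin containing θ(x, y), weighted by its magnitude G(x, y). A compact sketch of the whole extraction with scikit-image follows; the 6 × 6 cell size and γ = 0.5 come from the text, while the orientation count, block size, normalization scheme, and file names are illustrative assumptions.

```python
import numpy as np
from skimage import io, color
from skimage.feature import hog

rgb = io.imread("topomap_sample.png")[..., :3]   # drop alpha channel if present
gray = color.rgb2gray(rgb)
gray = gray ** 0.5                               # H(x, y) = h(x, y)^0.5

features = hog(gray,
               orientations=9,                   # assumed bin count
               pixels_per_cell=(6, 6),           # 6 x 6 cell from the text
               cells_per_block=(2, 2),           # assumed block size
               block_norm="L2-Hys")
np.save("topomap_hog.npy", features)             # gradient feature file
```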
Step S140: and inputting the gradient characteristics into a preset deep learning model to obtain emotion recognition results.
Referring to fig. 3, before this step is performed, the deep learning model needs to be trained on the electroencephalogram data set; the specific training process comprises:
1) And classifying the electroencephalogram topographic maps in the training set into corresponding training folders according to different categories.
2) And respectively setting the four classification labels of the electroencephalogram topographic maps in the training set (neutral, sad, fear, and happy) to corresponding classification serial numbers, and inputting the classification serial numbers and the corresponding emotion labels into the support vector machine.
3) And after calculating the gradient values of the electroencephalogram topographic maps in the training set and constructing the direction histograms, extracting the gradient features, generating the corresponding gradient feature files, labeling the feature files with the label information, creating the corresponding data folders, and storing them in the system.
4) Inputting the gradient features of the electroencephalogram topographic maps in the training set into the support vector machine for training yields the gradient classification features of the four emotions; the label information is then combined with the support vector machine classifier for training, so that a complete deep learning classification model is constructed and stored in the system. A training sketch follows this list.
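A minimal sketch of this training step with scikit-learn is given below; the feature/label file names, label encoding, kernel choice, and scaling step are assumptions, not details fixed by the patent.

```python
import numpy as np
import joblib
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_train = np.load("train_hog_features.npy")  # one HOG vector per topographic map
y_train = np.load("train_labels.npy")        # 0 neutral, 1 sad, 2 fear, 3 happy

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
joblib.dump(clf, "emotion_svm.joblib")       # store the trained model in the system
```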
The embodiment of the invention uses electroencephalogram data collected from the same subject on different dates in the electroencephalogram emotion data set, which prevents quality problems in any single acquisition from affecting the overall recognition accuracy. The training set is set to twice the size of the test set so that the data set can be trained effectively; building the classification recognition model from the gradient features of the electroencephalogram topographic maps during training makes the classification features more prominent and the classification results more reasonable and accurate.
Next, an electroencephalogram dataset is tested and a classification result is output, specifically including:
1) And classifying the electroencephalogram map in the test set into corresponding test folders according to different categories.
2) And respectively setting the four classification labels of the electroencephalogram topographic maps in the test set (neutral, sad, fear, and happy) to corresponding classification serial numbers, and inputting the classification serial numbers and the corresponding emotion labels into the support vector machine.
3) And after calculating the gradient values of the electroencephalogram topographic maps in the test set and constructing the direction histograms, extracting the gradient features, generating the corresponding gradient feature files, labeling the feature files with the label information, creating the corresponding data folders, and storing them in the system.
4) After the classification labels are set on the different electroencephalogram topographic map data in the test set, the trained support vector machine deep learning classification model outputs the recognition prediction results, computes the recognition accuracy, and outputs each specific index, as in the evaluation sketch after this list.
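The evaluation could look like the following scikit-learn sketch; file names and the label order are carried over from the hypothetical training snippet above.

```python
import numpy as np
import joblib
from sklearn.metrics import accuracy_score, classification_report

clf = joblib.load("emotion_svm.joblib")
X_test = np.load("test_hog_features.npy")
y_test = np.load("test_labels.npy")

y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred,
                            target_names=["neutral", "sad", "fear", "happy"]))
```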
It should be noted that the embodiment of the invention classifies by extracting the gradient features of the electroencephalogram topographic map, which is simple and efficient for classifying and recognizing the test set; the classification task can be completed by analyzing the gradient features of the topographic map without heavy computation by the classification model and without analyzing the electroencephalogram data of every frequency band one by one, which effectively improves the classification efficiency and accuracy.
Referring to fig. 4, an emotion recognition system for analyzing brain waves using a deep learning model according to an embodiment of the present invention includes:
an electroencephalogram signal acquisition module 100 for acquiring an electroencephalogram signal;
Since the original electroencephalogram data may contain artifacts, interference, noise, and other factors, eye-movement and eye-muscle/electro-oculographic interference is removed from the electroencephalogram by independent component analysis, ensuring good correlation between the electroencephalogram data and the person's emotion; the system therefore further comprises:
and the data preprocessing module is used for denoising, smoothing filtering and frequency domain feature extraction processing on the electroencephalogram signals.
Specifically, the data preprocessing module uses a linear dynamical system algorithm to smooth and filter the data, converting irregular waveforms into regular waveforms, and uses a differential entropy algorithm to extract the frequency-domain features of the electroencephalogram signals.
In order to accelerate the gradient descent speed during computation and make it easier to find the global optimal solution, the system further comprises:
and the normalization module is used for carrying out normalization processing on the electroencephalogram signals.
In order to ensure that the collected emotion features are most prominent, all electroencephalogram emotion data are down-sampled and limited to the 1-75 Hz frequency range.
An electroencephalogram generating module 200, configured to map features of an electroencephalogram signal onto an electroencephalogram;
specifically, the electroencephalogram generating module 200 is specifically configured to map features of an electroencephalogram signal to a lead space position of a brain-computer interface device to generate an electroencephalogram map.
Specifically, the leads at the scalp positions are set according to the spatial positions of the leads of the brain-computer interface device; with this arrangement, the emotion features of the brain functional regions can be mapped onto the electroencephalogram topographic map, and corresponding channel information is set for different data sets. Based on the spatial position information of the device's lead electrodes, each segment of electroencephalogram emotion data is mapped using a batch-processing method, and the data of different time periods and different emotions are output and saved into corresponding folders.
In order to check whether the electroencephalogram topographic map contains abnormal data, the system further comprises:
And the data detection module is used for indicating that the data are normal and performing the subsequent process if the single-color coverage of the generated electroencephalogram topographic map is less than 3/5 of the picture.
The gradient feature extraction module 300 is used for extracting gradient features from the electroencephalogram;
specifically, the gradient feature extraction module 300 includes:
the normalization unit, which is used for carrying out normalization of color and gamma space on the electroencephalogram topographic map by the formula H(x, y) = h(x, y)^γ, which effectively reduces the negative effects caused by shadows and color changes in the image; wherein h(x, y) represents the pixel value of the pixel point at (x, y) in the electroencephalogram topographic map, γ is a correction value, here set to 0.5, and H(x, y) represents the pixel value of the pixel point at (x, y) after processing;
the gradient value calculating unit, which is used for calculating, by the formulas G_x(x, y) = H(x+1, y) - H(x-1, y) and G_y(x, y) = H(x, y+1) - H(x, y-1), the gradient value G_x(x, y) of the pixel point (x, y) in the electroencephalogram topographic map in the horizontal direction and the gradient value G_y(x, y) of the pixel point (x, y) in the vertical direction;
the gradient direction histogram construction unit, which is used for dividing the electroencephalogram topographic map into a plurality of cell units and constructing a gradient direction histogram of each cell unit according to the gradient values of the pixel points (x, y) within each cell in the horizontal direction and the vertical direction. A cell is a set of pixels; for example, each cell may cover 6 × 6 pixels. The gradient direction histogram constructed from the horizontal and vertical gradient values G_x(x, y) and G_y(x, y) represents the relationship between the cells, thereby encoding the local image range and preserving the shape and appearance of the objects in the image.
And the gradient direction histogram merging unit, which is used for combining the gradient direction histograms of the cell units to obtain an overall gradient direction histogram and generating the gradient feature file corresponding to the electroencephalogram topographic map. By combining small cell units into larger blocks, the HOG descriptor becomes a vector composed of the direction histograms of the cell units in each interval; the intervals can overlap, so each cell unit contributes to the final descriptor several times. Forming large intervals in this way reduces the influence of elements such as color, edge, and shadow, improving the quality of the electroencephalogram topographic map data.
The emotion recognition module 400 is configured to input the gradient features into a preset deep learning model, and obtain an emotion recognition result.
Specifically, before the deep learning model is used, it needs to be trained on the electroencephalogram data set; the specific training process comprises:
1) And classifying the electroencephalogram topographic maps in the training set into corresponding training folders according to different categories.
2) And respectively setting the four classification labels of the electroencephalogram topographic maps in the training set (neutral, sad, fear, and happy) to corresponding classification serial numbers, and inputting the classification serial numbers and the corresponding emotion labels into the support vector machine.
3) And after calculating the gradient values of the electroencephalogram topographic maps in the training set and constructing the direction histograms, extracting the gradient features, generating the corresponding gradient feature files, labeling the feature files with the label information, creating the corresponding data folders, and storing them in the system.
4) Inputting the gradient features of the electroencephalogram topographic maps in the training set into the support vector machine for training yields the gradient classification features of the four emotions; the label information is then combined with the support vector machine classifier for training, so that a complete deep learning classification model is constructed and stored in the system.
The embodiment of the invention uses electroencephalogram data collected from the same subject on different dates in the electroencephalogram emotion data set, which prevents quality problems in any single acquisition from affecting the overall recognition accuracy. The training set is set to twice the size of the test set so that the data set can be trained effectively; building the classification recognition model from the gradient features of the electroencephalogram topographic maps during training makes the classification features more prominent and the classification results more reasonable and accurate.
Next, an electroencephalogram dataset is tested and a classification result is output, specifically including:
1) And classifying the electroencephalogram map in the test set into corresponding test folders according to different categories.
2) And respectively setting the four classification labels of the electroencephalogram topographic maps in the test set (neutral, sad, fear, and happy) to corresponding classification serial numbers, and inputting the classification serial numbers and the corresponding emotion labels into the support vector machine.
3) And after calculating the gradient values of the electroencephalogram topographic maps in the test set and constructing the direction histograms, extracting the gradient features, generating the corresponding gradient feature files, labeling the feature files with the label information, creating the corresponding data folders, and storing them in the system.
4) After the classification labels are set on the different electroencephalogram topographic map data in the test set, the trained support vector machine deep learning classification model outputs the recognition prediction results, computes the recognition accuracy, and outputs each specific index.
It should be noted that the embodiment of the invention classifies by extracting the gradient features of the electroencephalogram topographic map, which is simple and efficient for classifying and recognizing the test set; the classification task can be completed by analyzing the gradient features of the topographic map without heavy computation by the classification model and without analyzing the electroencephalogram data of every frequency band one by one, which effectively improves the classification efficiency and accuracy.
The embodiment of the invention provides a method and a system for recognizing emotion based on the histogram of oriented gradients and a support vector machine. The acquired electroencephalogram signals are processed into electroencephalogram topographic maps carrying emotion features, avoiding the redundancy and single features of raw electroencephalogram signal data; the histogram of oriented gradients is used to extract the gradient features of the topographic maps, machine learning technology is combined to classify and train the topographic maps of different emotions, and the trained model is applied in the system to classify and recognize emotions efficiently and correctly.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Details not described in the embodiments of the present invention are well known to those skilled in the art. Finally, it is noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting it; although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made without departing from the spirit and scope of the technical solution of the present invention, all of which are intended to be covered by the scope of the claims of the present invention.

Claims (10)

1. An emotion recognition method for analyzing brain waves by using a deep learning model, comprising:
collecting brain electrical signals;
mapping the characteristics of the electroencephalogram signals to an electroencephalogram map;
extracting gradient features from the electroencephalogram map;
and inputting the gradient characteristics into a preset deep learning model to obtain emotion recognition results.
2. The emotion recognition method for analyzing brain waves using a deep learning model according to claim 1, wherein the mapping the features of the brain waves onto an electroencephalogram map includes:
and mapping the characteristics of the electroencephalogram signals to the space position of the guide electrode of the brain-computer interface equipment to generate an electroencephalogram topographic map.
3. The emotion recognition method for analyzing brain waves using a deep learning model according to claim 1 or 2, further comprising, after said mapping of the features of the brain electrical signals onto an electroencephalogram map:
and if the coverage of the generated electroencephalogram with a single color is less than 3/5 of that of the picture, performing a subsequent process.
4. The emotion recognition method for analyzing brain waves using a deep learning model according to claim 1, wherein the extracting gradient features from the electroencephalogram comprises:
By the formula H(x, y) = h(x, y)^γ, carrying out normalization of color and gamma space on the electroencephalogram topographic map; wherein h(x, y) represents the pixel value of the pixel point at (x, y) in the electroencephalogram topographic map, γ is a correction value, and H(x, y) represents the pixel value of the processed pixel point at (x, y);
By the formulas G_x(x, y) = H(x+1, y) - H(x-1, y) and G_y(x, y) = H(x, y+1) - H(x, y-1), calculating the gradient value G_x(x, y) of the pixel point (x, y) in the electroencephalogram topographic map in the horizontal direction and the gradient value G_y(x, y) of the pixel point (x, y) in the vertical direction;
Dividing the electroencephalogram topographic map into a plurality of cell units, and constructing a gradient direction histogram of each cell unit according to the gradient values of the pixel points (x, y) within each cell in the horizontal direction and the vertical direction;
and combining the gradient direction histograms of the cell units to obtain an overall gradient direction histogram.
5. The emotion recognition method for analyzing brain waves using a deep learning model according to claim 1, further comprising, after the acquisition of the brain waves:
and denoising, smoothing and filtering the electroencephalogram signals and extracting frequency domain features.
6. An emotion recognition system for analyzing brain waves using a deep learning model, comprising:
the electroencephalogram signal acquisition module is used for acquiring electroencephalogram signals;
the electroencephalogram generating module is used for mapping the characteristics of the electroencephalogram signals to an electroencephalogram;
the gradient feature extraction module is used for extracting gradient features from the electroencephalogram;
and the emotion recognition module is used for inputting the gradient characteristics into a preset deep learning model to obtain an emotion recognition result.
7. The emotion recognition system for analyzing brain waves using a deep learning model of claim 6, wherein the electroencephalogram generation module is specifically configured to map features of the brain electrical signals onto a lead spatial location of a brain-computer interface device to generate an electroencephalogram.
8. The emotion recognition system for analyzing brain waves using a deep learning model according to claim 6 or 7, further comprising:
and the data detection module is used for carrying out the follow-up process if the coverage of the generated electroencephalogram map on a single color is smaller than 3/5 of that of the picture.
9. The emotion recognition system for analyzing brain waves using a deep learning model of claim 6, wherein the gradient feature extraction module comprises:
the normalization unit, which is used for carrying out normalization of color and gamma space on the electroencephalogram topographic map by the formula H(x, y) = h(x, y)^γ; wherein h(x, y) represents the pixel value of the pixel point at (x, y) in the electroencephalogram topographic map, γ is a correction value, and H(x, y) represents the pixel value of the processed pixel point at (x, y);
the gradient value calculating unit, which is used for calculating, by the formulas G_x(x, y) = H(x+1, y) - H(x-1, y) and G_y(x, y) = H(x, y+1) - H(x, y-1), the gradient value G_x(x, y) of the pixel point (x, y) in the electroencephalogram topographic map in the horizontal direction and the gradient value G_y(x, y) of the pixel point (x, y) in the vertical direction;
the gradient direction histogram construction unit, which is used for dividing the electroencephalogram topographic map into a plurality of cell units and constructing a gradient direction histogram of each cell unit according to the gradient values of the pixel points (x, y) within each cell in the horizontal direction and the vertical direction;
and the gradient direction histogram merging unit is used for merging the gradient direction histograms of the cell units to obtain an overall gradient direction histogram.
10. The emotion recognition system for analyzing brain waves using a deep learning model of claim 6, further comprising:
and the data preprocessing module is used for denoising, smoothing filtering and frequency domain feature extraction processing of the electroencephalogram signals.
CN202310826965.5A 2023-07-07 2023-07-07 Emotion recognition method and system for analyzing brain waves by deep learning model Pending CN116898439A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310826965.5A CN116898439A (en) 2023-07-07 2023-07-07 Emotion recognition method and system for analyzing brain waves by deep learning model


Publications (1)

Publication Number Publication Date
CN116898439A 2023-10-20

Family

ID=88355648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310826965.5A Pending CN116898439A (en) 2023-07-07 2023-07-07 Emotion recognition method and system for analyzing brain waves by deep learning model

Country Status (1)

Country Link
CN (1) CN116898439A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100145218A1 (en) * 2008-04-04 2010-06-10 Shinobu Adachi Adjustment device, method, and computer program for a brainwave identification system
CN104083163A (en) * 2014-07-16 2014-10-08 南京大学 Method for obtaining nonlinearity parameter electroencephalogram mapping
KR101792579B1 (en) * 2016-08-08 2017-11-02 정진수 Method for control the apparatus using brain wave
AU2021103884A4 (en) * 2021-07-06 2022-04-07 Mishra, Satyasis DR Epileptic Seizure Detection and Classification Using HOG feature based MSCA-ELM Model and Embedded Prototype Development
CN114970599A (en) * 2022-04-01 2022-08-30 中国科学院深圳先进技术研究院 Identification method and identification device for attention defect associated electroencephalogram signals and storage medium
CN115444420A (en) * 2022-09-12 2022-12-09 昆明理工大学 CCNN and stacked-BilSTM-based network emotion recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YUNYUAN GAO, et al.: "Single-trial EEG Emotion Recognition using Granger Causality/Transfer Entropy Analysis", Journal of Neuroscience Methods, vol. 346, pages 1-9 *

Similar Documents

Publication Publication Date Title
CN107657279B (en) Remote sensing target detection method based on small amount of samples
CN110659582A (en) Image conversion model training method, heterogeneous face recognition method, device and equipment
Rani et al. Efficient 3D AlexNet architecture for object recognition using syntactic patterns from medical images
CN108491077A (en) A kind of surface electromyogram signal gesture identification method for convolutional neural networks of being divided and ruled based on multithread
CN109034099B (en) Expression recognition method and device
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN112580617B (en) Expression recognition method and device in natural scene
CN105046197A (en) Multi-template pedestrian detection method based on cluster
CN108776774A (en) A kind of human facial expression recognition method based on complexity categorization of perception algorithm
CN110991406A (en) RSVP electroencephalogram characteristic-based small target detection method and system
CN105117708A (en) Facial expression recognition method and apparatus
CN102842033A (en) Human expression emotion semantic recognizing method based on face recognition
CN113486752B (en) Emotion recognition method and system based on electrocardiosignal
CN111723662B (en) Human body posture recognition method based on convolutional neural network
CN113112498B (en) Grape leaf spot identification method based on fine-grained countermeasure generation network
HN et al. Human Facial Expression Recognition from static images using shape and appearance feature
Mohedano et al. Object segmentation in images using EEG signals
CN111126280A (en) Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method
CN113743389B (en) Facial expression recognition method and device and electronic equipment
CN113627391B (en) Cross-mode electroencephalogram signal identification method considering individual difference
CN107729863B (en) Human finger vein recognition method
CN112861881A (en) Honeycomb lung recognition method based on improved MobileNet model
CN111914796A (en) Human body behavior identification method based on depth map and skeleton points
CN109359543B (en) Portrait retrieval method and device based on skeletonization
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination