CN111427450A - Method, system and device for emotion recognition and readable storage medium - Google Patents

Method, system and device for emotion recognition and readable storage medium

Info

Publication number
CN111427450A
Authority
CN
China
Prior art keywords
channel
filtering
signal
determining
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010203114.1A
Other languages
Chinese (zh)
Inventor
谢小峰
阮浩
唐荣年
邹孝坤
李子波
宁雨珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan University
Original Assignee
Hainan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan University filed Critical Hainan University
Priority to CN202010203114.1A priority Critical patent/CN111427450A/en
Publication of CN111427450A publication Critical patent/CN111427450A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G06F 2218/04 Denoising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method for emotion recognition, which comprises the following steps: performing down-sampling on an original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency; filtering the electroencephalogram signal with the preset frequency by using a full-band-pass filter to obtain a filtered signal; performing local channel selection on the filtered signal, and performing spatial filtering according to the local channel selection result to obtain the characteristic of each channel of the filtered signal; and determining the optimal channel characteristics according to the characteristics of each channel, and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics. Filtering with the full-band-pass filter widens the range over which effective information can be obtained from the electroencephalogram signal, and local channel selection on the filtered signal improves the distinguishability of the channel characteristics, thereby improving the recognition accuracy of the brain-computer-interface-based emotion recognition method. The application also provides a system, a device and a readable storage medium for emotion recognition, which have the same beneficial effects.

Description

Method, system and device for emotion recognition and readable storage medium
Technical Field
The present application relates to the field of brain-computer interfaces, and in particular, to a method, system, device, and readable storage medium for emotion recognition.
Background
Emotion is the attitudinal experience that arises from whether objective things or situations (external or internal stimuli) satisfy a person's desires or needs; it is a psychological and physiological state produced by the combined reaction of a person's feelings, thoughts and behaviors. As a high-level function of the human brain, emotion is an important part of human intelligence and influences learning, memory and decision-making to varying degrees. The detection, recognition and regulation of emotion have long been hot topics of scientific research. Emotion recognition is an important component of affective computing; it combines computer science, psychology and cognitive science, and is an important basis for studying the emotional characteristics of human-computer interaction and for improving the harmony between people and computers.
A brain-computer interface (BCI) is a communication system that transfers information between the brain and peripheral devices without relying on the brain's normal output pathway, which is composed of peripheral nerves and muscles. With the rapid development of electronic information technology, the applications of brain-computer interface technology have expanded greatly, and it shows promising prospects in fields such as neural rehabilitation, entertainment, disease diagnosis and detection, artificial intelligence and machine learning. However, emotional fluctuations during brain control may affect the stability and reliability of brain-computer interface performance, so there is an urgent need to monitor and identify the emotional state during brain control and to give the user feedback in real time.
However, in the prior art, the electroencephalogram signals acquired by brain-computer-interface-based emotion recognition methods are unbalanced, the range over which effective information is acquired is too narrow, and the features are poorly distinguishable, so the recognition accuracy of such emotion recognition methods is extremely low.
Therefore, how to improve the recognition accuracy of the emotion recognition method based on the brain-computer interface is a technical problem to be solved by those skilled in the art at present.
Disclosure of Invention
The application aims to provide a method, a system, equipment and a readable storage medium for emotion recognition, which are used for improving the recognition accuracy of the emotion recognition method based on a brain-computer interface.
In order to solve the above technical problem, the present application provides a method for emotion recognition, including:
acquiring an original electroencephalogram signal, and performing down-sampling on the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
filtering the electroencephalogram signals with the preset frequency by using a full-band-pass filter to obtain filtered signals;
local channel selection is carried out on the filtering signal, and spatial filtering is carried out according to a local channel selection result to obtain the characteristic of each channel of the filtering signal;
and determining the optimal channel characteristics according to the characteristics of each channel, and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics.
Optionally, the local channel selection is performed on the filtered signal, and spatial filtering is performed according to a local channel selection result to obtain a feature of each channel of the filtered signal, including:
sequentially taking each channel of the filtering signal as a main channel, and taking a preset number of channels around the main channel as local channels;
and performing spatial filtering on the main channel and each local channel by using the common spatial pattern (CSP) method, and taking an obtained filtering result as the characteristic of the main channel to obtain the characteristic of each channel of the filtering signal.
Optionally, determining an optimal channel characteristic according to the characteristic of each channel includes:
performing group sparse selection on each channel according to the characteristics of each channel to determine a redundant channel;
and eliminating the redundant channels, and combining the characteristics of the residual channels to obtain the optimal channel characteristics.
Optionally, determining an emotion category of the original electroencephalogram signal according to the optimal channel feature includes:
and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a support vector machine classifier.
Optionally, determining an emotion category of the original electroencephalogram signal according to the optimal channel feature includes:
and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a linear discriminant analysis algorithm.
The present application also provides a system for emotion recognition, the system comprising:
the down-sampling module is used for acquiring an original electroencephalogram signal and down-sampling the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
the band-pass filtering module is used for filtering the electroencephalogram signals with the preset frequency by using a full-band-pass filter to obtain filtered signals;
the spatial filtering module is used for carrying out local channel selection on the filtering signal and carrying out spatial filtering according to a local channel selection result to obtain the characteristic of each channel of the filtering signal;
and the emotion classification module is used for determining the optimal channel characteristics according to the characteristics of each channel and determining the emotion types of the original electroencephalogram signals according to the optimal channel characteristics.
Optionally, the spatial filtering module includes:
the determining submodule is used for sequentially taking each channel of the filtering signal as a main channel and taking a preset number of channels around the main channel as local channels;
and the spatial filtering submodule is used for performing spatial filtering on the main channel and each local channel by using the common spatial pattern method, and taking an obtained filtering result as the characteristic of the main channel to obtain the characteristic of each channel of the filtering signal.
Optionally, the emotion classification module includes:
the selection submodule is used for carrying out group sparse selection on each channel according to the characteristics of each channel and determining a redundant channel;
and the combination sub-module is used for eliminating the redundant channels and combining the characteristics of the rest channels to obtain the optimal channel characteristics.
The present application also provides an emotion recognition apparatus, including:
a memory for storing a computer program;
a processor for implementing the steps of the method of emotion recognition as defined in any of the above when said computer program is executed.
The present application also provides a readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of emotion recognition as set forth in any of the above.
The application provides a method for emotion recognition, which comprises the following steps: acquiring an original electroencephalogram signal, and performing down-sampling on the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency; filtering the electroencephalogram signal with preset frequency by using a full-band-pass filter to obtain a filtered signal; local channel selection is carried out on the filtering signal, and spatial filtering is carried out according to the local channel selection result to obtain the characteristic of each channel of the filtering signal; and determining the optimal channel characteristics according to the characteristics of each channel, and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics.
According to the technical solution provided by the application, the original electroencephalogram signal is down-sampled, which avoids signal imbalance; the electroencephalogram signal with the preset frequency is filtered by a full-band-pass filter, which widens the range over which effective information can be obtained from the electroencephalogram signal; and local channel selection is performed on the filtering signal with spatial filtering applied according to the local channel selection result, which improves the distinguishability of the channel characteristics and thereby improves the recognition accuracy of the brain-computer-interface-based emotion recognition method. The application also provides a system, a device and a readable storage medium for emotion recognition, which have the above beneficial effects and are not repeated here.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a method for emotion recognition provided in an embodiment of the present application;
FIG. 2 is a flowchart of a practical implementation of S103 in the method of emotion recognition provided in FIG. 1;
fig. 3 is a block diagram of a system for emotion recognition provided in an embodiment of the present application;
FIG. 4 is a block diagram of another emotion recognition system provided in an embodiment of the present application;
fig. 5 is a block diagram of an emotion recognition apparatus according to an embodiment of the present application.
Detailed Description
The core of the application is to provide a method, a system, equipment and a readable storage medium for emotion recognition, which are used for improving the recognition accuracy of the emotion recognition method based on a brain-computer interface.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of a method for emotion recognition according to an embodiment of the present application.
The method specifically comprises the following steps:
s101: acquiring an original electroencephalogram signal, and performing down-sampling on the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
emotional fluctuations during brain control can affect the stability and reliability of brain-computer interface performance; in addition, the electroencephalogram signals acquired by prior-art brain-computer-interface-based emotion recognition methods are unbalanced, the acquisition range of effective information is too narrow, and the features are poorly distinguishable, so the accuracy of such methods is extremely low; the present application therefore provides a method for emotion recognition to solve the above problems;
the original electroencephalogram signal mentioned here may be acquired by an electroencephalogram signal acquisition device, or may be downloaded from a specified location to which the system is connected, which is not specifically limited in this application;
the original electroencephalogram signal is down-sampled to obtain an electroencephalogram signal with a preset frequency, so that electroencephalogram signals with different sampling frequencies are unified to the same frequency and signal imbalance is avoided; in a specific embodiment, the preset frequency may be 128 Hz.
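As an illustration only, the down-sampling step can be sketched in Python roughly as follows; the array layout (channels × samples), the helper name downsample_to and the use of polyphase resampling are assumptions made for this sketch, not the patented implementation.

import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def downsample_to(eeg: np.ndarray, fs_in: float, fs_out: float = 128.0) -> np.ndarray:
    """Resample a (n_channels, n_samples) EEG array from fs_in Hz to fs_out Hz."""
    frac = Fraction(int(fs_out), int(fs_in)).limit_denominator()
    # Polyphase resampling applies an anti-aliasing filter internally.
    return resample_poly(eeg, frac.numerator, frac.denominator, axis=1)

# Example: unify a 512 Hz recording to the preset 128 Hz.
raw = np.random.randn(32, 512 * 60)        # 32 channels, 60 s at 512 Hz
eeg_128 = downsample_to(raw, fs_in=512)    # -> shape (32, 128 * 60)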
S102: filtering the electroencephalogram signal with preset frequency by using a full-band-pass filter to obtain a filtered signal;
the electroencephalogram signal is a multi-channel time-series signal, and each sample point is a matrix; in the prior art, a 5-40 Hz band-pass filter is usually used to digitally filter the time series of each channel so as to remove noise and obtain the required information, which makes the information acquisition range too narrow, reduces the number of useful features that can be extracted, leaves the useful information with insufficient redundancy and makes the influence of noise more prominent; therefore, after the electroencephalogram signal with the preset frequency is obtained, it is filtered with a full-band (4-45 Hz) band-pass filter, whose wider pass band widens the range over which effective information can be obtained from the electroencephalogram signal, increases the number of useful features extracted, improves the redundancy of the useful information and thereby reduces the influence of noise compared with the prior art.
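For illustration, the full-band (4-45 Hz) filtering step might look as follows; the Butterworth filter family, the filter order and the zero-phase (forward-backward) filtering are assumed choices, since the patent does not specify them.

from scipy.signal import butter, sosfiltfilt

def full_band_filter(eeg, fs: float = 128.0, lo: float = 4.0, hi: float = 45.0, order: int = 4):
    """Band-pass each channel of eeg (n_channels, n_samples) between lo and hi Hz."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=1)   # zero-phase filtering along the time axis

filtered = full_band_filter(eeg_128)       # eeg_128 from the down-sampling sketch above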
S103: local channel selection is carried out on the filtering signal, and spatial filtering is carried out according to the local channel selection result to obtain the characteristic of each channel of the filtering signal;
in this step, the purpose of performing local channel selection on the filtering signal is to construct a rich base of spatial-domain feature information, increase the number of useful features extracted and avoid neglecting local information, thereby further improving the accuracy of emotion classification;
optionally, in order to improve the efficiency of the spatial filtering, the number of channels of the filtering signal may be reduced; that is, performing local channel selection on the filtering signal and performing spatial filtering according to the local channel selection result to obtain the characteristic of each channel of the filtering signal may specifically be:
selecting a local channel from the filtering signal to carry out spatial filtering, and taking a spatial filtering result of the local channel as the characteristic of each channel of the filtering signal;
preferably, to improve the accuracy of the spatial filtering, the local channel selection on the filtered signal and the spatial filtering according to the local channel selection result, which yield the characteristic of each channel of the filtered signal, may also be implemented by the steps shown in fig. 2, described below. Fig. 2 is a flowchart of a practical implementation of S103 in the emotion recognition method provided in fig. 1, and specifically includes the following steps:
S201: taking each channel of the filtering signal in turn as a main channel, and taking a preset number of channels around the main channel as local channels;
S202: performing spatial filtering on the main channel and each local channel by using the common spatial pattern method, and taking the obtained filtering result as the characteristic of the main channel, so as to obtain the characteristic of each channel of the filtering signal.
The embodiment of the present application takes each channel of the filtering signal in turn as a main channel and a preset number of channels around the main channel as local channels; spatial filtering is then performed on the main channel and each local channel by using the common spatial pattern method, and the obtained filtering result is taken as the characteristic of the main channel, so that the characteristic of each channel of the filtering signal is obtained. This greatly increases the number of useful features extracted and avoids neglecting local information, thereby greatly improving the accuracy of emotion classification.
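Purely as an illustration, the local-channel spatial filtering might be sketched as follows for a two-class training set; the neighbour table, the number of filter pairs and every function name are hypothetical choices rather than the patent's exact scheme.

import numpy as np
from scipy.linalg import eigh

def csp_features(trials_a, trials_b, n_pairs=1):
    """trials_*: lists of (n_channels, n_samples) arrays, one list per emotion class.
    Returns log-variance CSP features for every trial (class a first, then class b)."""
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                           # generalized eigenvalue problem
    order = np.argsort(vals)
    W = vecs[:, np.r_[order[:n_pairs], order[-n_pairs:]]]    # most discriminative filters
    feats = []
    for t in trials_a + trials_b:
        var = np.var(W.T @ t, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.asarray(feats)

def local_channel_features(trials_a, trials_b, neighbours, n_pairs=1):
    """For each main channel c, run CSP on c plus its surrounding channels and use the
    result as the feature of channel c. neighbours[c] lists the local channel indices."""
    blocks = []
    for c, near in neighbours.items():
        idx = [c] + list(near)
        blocks.append(csp_features([t[idx] for t in trials_a],
                                   [t[idx] for t in trials_b], n_pairs=n_pairs))
    # One (n_trials, 2 * n_pairs) feature block per main channel; concatenating the
    # blocks gives the per-channel features passed to the channel-selection step.
    return blocks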
S104: and determining the optimal channel characteristics according to the characteristics of each channel, and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics.
After the characteristics of each channel of the filtering signal are obtained, the optimal channel characteristics are determined, the emotion category of the original electroencephalogram signal is determined according to the optimal channel characteristics, and emotion recognition of the original electroencephalogram signal is completed;
optionally, determining the optimal channel characteristics according to the characteristics of each channel, as mentioned here, may specifically be selecting the best characteristics from the characteristics of each channel as the optimal channel characteristics; further, in order to prevent redundant channels from affecting the emotion classification, the method may specifically include the following (a sketch is given after these two steps):
performing group sparse selection on each channel according to the characteristics of each channel to determine a redundant channel;
and eliminating the redundant channels, and combining the characteristics of the residual channels to obtain the optimal channel characteristics.
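As a hedged illustration of the group-sparse selection, the sketch below treats the features of each channel as one group and fits a group lasso by proximal gradient descent; channels whose group weights shrink to zero are treated as redundant. The least-squares loss, the regularization strength and the zero threshold are assumptions, not the patent's exact algorithm.

import numpy as np

def group_lasso_select(X, y, groups, lam=0.1, n_iter=500):
    """X: (n_trials, n_features); y: labels coded as -1/+1 floats; groups: one index
    array per channel. Returns the indices of channels kept as non-redundant."""
    n, p = X.shape
    w = np.zeros(p)
    lr = n / (np.linalg.norm(X, 2) ** 2)             # 1 / Lipschitz constant of the loss
    for _ in range(n_iter):
        w -= lr * (X.T @ (X @ w - y) / n)            # gradient step on the squared loss
        for g in groups:                             # block soft-thresholding per channel
            norm = np.linalg.norm(w[g])
            w[g] *= max(0.0, 1.0 - lr * lam * np.sqrt(len(g)) / (norm + 1e-12))
    return [c for c, g in enumerate(groups) if np.linalg.norm(w[g]) > 1e-8]

# Example: drop redundant channels, then combine the remaining channels' features
# into the optimal channel features fed to the classifier (X_train, y_train assumed).
# kept = group_lasso_select(X_train, y_train, groups)
# X_opt = np.hstack([X_train[:, groups[c]] for c in kept])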
Optionally, on this basis, the determining of the emotion category of the original electroencephalogram signal according to the optimal channel feature mentioned in the step may specifically be:
determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by using a support vector machine classifier;
optionally, on this basis, the determining of the emotion category of the original electroencephalogram signal according to the optimal channel feature mentioned in the step may specifically be:
and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a linear discriminant analysis algorithm.
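A minimal sketch of this classification step, assuming scikit-learn, is given below; the choice between a linear-kernel support vector machine and linear discriminant analysis mirrors the two options above, and the hyperparameters are illustrative defaults.

from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_emotion(X_train, y_train, X_test, use_svm=True):
    """X_*: optimal channel features per trial; y_train: emotion category labels."""
    clf = SVC(kernel="linear", C=1.0) if use_svm else LinearDiscriminantAnalysis()
    clf.fit(X_train, y_train)
    return clf.predict(X_test)          # predicted emotion category for each trial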
Based on the technical scheme, the emotion recognition method provided by the application avoids the occurrence of signal imbalance by performing down-sampling processing on the original electroencephalogram signal; the full-band-pass filter is used for filtering the electroencephalogram signals with preset frequencies, so that the acquisition range of effective information in the electroencephalogram signals is widened; by carrying out local channel selection on the filtering signal and carrying out spatial filtering processing according to the local channel selection result, the distinguishability of the channel characteristics is improved, and the recognition precision of the emotion recognition method based on the brain-computer interface is further improved.
Referring to fig. 3, fig. 3 is a block diagram of a system for emotion recognition according to an embodiment of the present application.
The system may include:
the down-sampling module 100 is configured to acquire an original electroencephalogram signal, and down-sample the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
the band-pass filtering module 200 is used for filtering the electroencephalogram signal with the preset frequency by using a full-band-pass filter to obtain a filtered signal;
the spatial filtering module 300 is configured to perform local channel selection on the filtered signal, and perform spatial filtering according to a local channel selection result to obtain a feature of each channel of the filtered signal;
and the emotion classification module 400 is used for determining the optimal channel characteristics according to the characteristics of each channel and determining the emotion types of the original electroencephalogram signals according to the optimal channel characteristics.
Referring to fig. 4, fig. 4 is a block diagram of another emotion recognition system provided in an embodiment of the present application.
The spatial filtering module 300 may include:
the determining submodule is used for sequentially taking each channel of the filtering signal as a main channel and taking a preset number of channels around the main channel as local channels;
and the spatial filtering submodule is used for carrying out spatial filtering on the main channel and each local channel by using the common spatial pattern method, and taking an obtained filtering result as the characteristic of the main channel to obtain the characteristic of each channel of the filtering signal.
The emotion classification module 400 may include:
the selection submodule is used for carrying out group sparse selection on each channel according to the characteristics of each channel and determining a redundant channel;
and the combination submodule is used for eliminating the redundant channels and combining the characteristics of the rest channels to obtain the optimal channel characteristics.
The emotion classification module 400 may include:
and the first classification submodule is used for determining the emotion category of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a support vector machine classifier.
The emotion classification module 400 may include:
and the second classification submodule is used for determining the emotion category of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a linear discriminant analysis algorithm.
Since the embodiment of the system part corresponds to the embodiment of the method part, the embodiment of the system part is described with reference to the embodiment of the method part, and is not repeated here.
Referring to fig. 5, fig. 5 is a structural diagram of an emotion recognition apparatus according to an embodiment of the present application.
The emotion recognition device 500 may vary significantly due to configuration or performance, and may include one or more processors (CPUs) 522 (e.g., one or more processors) and memory 532, one or more storage media 530 (e.g., one or more mass storage devices) storing applications 542 or data 544. Memory 532 and storage media 530 may be, among other things, transient storage or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a sequence of instruction operations for the device. Still further, processor 522 may be configured to communicate with storage medium 530 to execute a series of instruction operations in storage medium 530 on emotion recognition device 500.
The emotion recognition device 500 may also include one or more power supplies 525, one or more wired or wireless network interfaces 550, one or more input-output interfaces 558, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps in the method of emotion recognition described in fig. 1 to 2 above are implemented by an emotion recognition device based on the structure shown in fig. 5.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a function calling device, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
A method, system, device and readable storage medium for emotion recognition provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method of emotion recognition, comprising:
acquiring an original electroencephalogram signal, and performing down-sampling on the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
filtering the electroencephalogram signals with the preset frequency by using a full-band-pass filter to obtain filtered signals;
local channel selection is carried out on the filtering signal, and spatial filtering is carried out according to a local channel selection result to obtain the characteristic of each channel of the filtering signal;
and determining the optimal channel characteristics according to the characteristics of each channel, and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics.
2. The method of claim 1, wherein performing local channel selection on the filtered signal and performing spatial filtering according to the local channel selection result to obtain the feature of each channel of the filtered signal comprises:
sequentially taking each channel of the filtering signal as a main channel, and taking a preset number of channels around the main channel as local channels;
and performing spatial filtering on the main channel and each local channel by using the common spatial pattern method, and taking an obtained filtering result as the characteristic of the main channel to obtain the characteristic of each channel of the filtering signal.
3. The method of claim 1, wherein determining optimal channel characteristics from characteristics of each of the channels comprises:
performing group sparse selection on each channel according to the characteristics of each channel to determine a redundant channel;
and eliminating the redundant channels, and combining the characteristics of the residual channels to obtain the optimal channel characteristics.
4. The method of claim 3, wherein determining the emotion classification of the raw brain electrical signal from the optimal channel characteristics comprises:
and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a support vector machine classifier.
5. The method of claim 3, wherein determining the emotion classification of the raw brain electrical signal from the optimal channel characteristics comprises:
and determining the emotion type of the original electroencephalogram signal according to the optimal channel characteristics by utilizing a linear discriminant analysis algorithm.
6. A system for emotion recognition, comprising:
the down-sampling module is used for acquiring an original electroencephalogram signal and down-sampling the original electroencephalogram signal to obtain an electroencephalogram signal with a preset frequency;
the band-pass filtering module is used for filtering the electroencephalogram signals with the preset frequency by using a full-band-pass filter to obtain filtered signals;
the spatial filtering module is used for carrying out local channel selection on the filtering signal and carrying out spatial filtering according to a local channel selection result to obtain the characteristic of each channel of the filtering signal;
and the emotion classification module is used for determining the optimal channel characteristics according to the characteristics of each channel and determining the emotion types of the original electroencephalogram signals according to the optimal channel characteristics.
7. The system of claim 6, wherein the spatial filtering module comprises:
the determining submodule is used for sequentially taking each channel of the filtering signal as a main channel and taking a preset number of channels around the main channel as local channels;
and the spatial filtering submodule is used for performing spatial filtering on the main channel and each local channel by using the common spatial pattern method, and taking an obtained filtering result as the characteristic of the main channel to obtain the characteristic of each channel of the filtering signal.
8. The system of claim 6, wherein the emotion classification module comprises:
the selection submodule is used for carrying out group sparse selection on each channel according to the characteristics of each channel and determining a redundant channel;
and the combination sub-module is used for eliminating the redundant channels and combining the characteristics of the rest channels to obtain the optimal channel characteristics.
9. An emotion recognition device, characterized by comprising:
a memory for storing a computer program;
a processor for implementing the steps of the method of emotion recognition as claimed in any of claims 1 to 5 when said computer program is executed.
10. A readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method of emotion recognition as recited in any of claims 1 to 5.
CN202010203114.1A 2020-03-20 2020-03-20 Method, system and device for emotion recognition and readable storage medium Pending CN111427450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203114.1A CN111427450A (en) 2020-03-20 2020-03-20 Method, system and device for emotion recognition and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010203114.1A CN111427450A (en) 2020-03-20 2020-03-20 Method, system and device for emotion recognition and readable storage medium

Publications (1)

Publication Number Publication Date
CN111427450A true CN111427450A (en) 2020-07-17

Family

ID=71548310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203114.1A Pending CN111427450A (en) 2020-03-20 2020-03-20 Method, system and device for emotion recognition and readable storage medium

Country Status (1)

Country Link
CN (1) CN111427450A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818786A (en) * 2022-04-06 2022-07-29 五邑大学 Channel screening method, emotion recognition method, system and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102715911A (en) * 2012-06-15 2012-10-10 天津大学 Brain electric features based emotional state recognition method
CN108937968A (en) * 2018-06-04 2018-12-07 安徽大学 lead selection method of emotion electroencephalogram signal based on independent component analysis
US20190096279A1 (en) * 2017-09-26 2019-03-28 Cerekinetic, Inc. Decision-making system using emotion and cognition inputs
CN110353673A (en) * 2019-07-16 2019-10-22 西安邮电大学 A kind of brain electric channel selection method based on standard mutual information
CN110881975A (en) * 2019-12-24 2020-03-17 山东中科先进技术研究院有限公司 Emotion recognition method and system based on electroencephalogram signals
CN111091074A (en) * 2019-12-02 2020-05-01 杭州电子科技大学 Motor imagery electroencephalogram signal classification method based on optimal region common space mode

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102715911A (en) * 2012-06-15 2012-10-10 天津大学 Brain electric features based emotional state recognition method
US20190096279A1 (en) * 2017-09-26 2019-03-28 Cerekinetic, Inc. Decision-making system using emotion and cognition inputs
CN108937968A (en) * 2018-06-04 2018-12-07 安徽大学 lead selection method of emotion electroencephalogram signal based on independent component analysis
CN110353673A (en) * 2019-07-16 2019-10-22 西安邮电大学 A kind of brain electric channel selection method based on standard mutual information
CN111091074A (en) * 2019-12-02 2020-05-01 杭州电子科技大学 Motor imagery electroencephalogram signal classification method based on optimal region common space mode
CN110881975A (en) * 2019-12-24 2020-03-17 山东中科先进技术研究院有限公司 Emotion recognition method and system based on electroencephalogram signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
汲继跃 et al.: "Motor imagery EEG signal classification method based on the optimal region common spatial pattern", Chinese Journal of Sensors and Actuators (《传感技术学报》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818786A (en) * 2022-04-06 2022-07-29 五邑大学 Channel screening method, emotion recognition method, system and storage medium
CN114818786B (en) * 2022-04-06 2024-03-01 五邑大学 Channel screening method, emotion recognition system and storage medium

Similar Documents

Publication Publication Date Title
Tao et al. EEG-based emotion recognition via channel-wise attention and self attention
Chakladar et al. EEG based emotion classification using “correlation based subset selection”
Chen et al. Emotion recognition of EEG signals based on the ensemble learning method: AdaBoost
Athif et al. WaveCSP: a robust motor imagery classifier for consumer EEG devices
Lopez et al. Hypercomplex multimodal emotion recognition from EEG and peripheral physiological signals
CN113842152B (en) Electroencephalogram signal classification network training method, classification method, equipment and storage medium
CN111427450A (en) Method, system and device for emotion recognition and readable storage medium
Jiao et al. Effective connectivity analysis of fMRI data based on network motifs
CN114947886A (en) Symbol digital conversion testing method and system based on asynchronous brain-computer interface
CN111338483B (en) Method and system for controlling equipment, control equipment and readable storage medium
CN116687409B (en) Emotion recognition method and system based on digital twin and deep learning
Ren et al. Extracting and supplementing method for EEG signal in manufacturing workshop based on deep learning of time–frequency correlation
CN117055726A (en) Micro-motion control method for brain-computer interaction
CN116541751A (en) Electroencephalogram signal classification method based on brain function connection network characteristics
CN115775565A (en) Multi-mode-based emotion recognition method and related equipment
CN114358086A (en) Clustering-based multi-task emotion electroencephalogram feature extraction and identification method
Hong et al. AI-based Bayesian inference scheme to recognize electroencephalogram signals for smart healthcare
CN113925517A (en) Cognitive disorder recognition method, device and medium based on electroencephalogram signals
Xu et al. Emotion Recognition from Multi-channel EEG via an Attention-Based CNN Model
Wang et al. EEG-based emotion recognition using convolutional neural network with functional connections
Wang et al. Channel selection method based on CNNSE for EEG emotion recognition
Li et al. Motor imagery electroencephalogram classification based on sparse spatiotemporal decomposition and channel attention
Yang et al. Greedy-mrmr: An emotion recognition algorithm based on eeg using greedy algorithm
CN108108763B (en) Electroencephalogram classification model generation method and device and electronic equipment
CN118626923A (en) Identity recognition method, identity recognition device, computer equipment, readable storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200717