CN116369950A - Target detection method based on electroencephalogram tracing and multi-feature extraction


Info

Publication number
CN116369950A
CN116369950A (application CN202310595880.0A)
Authority
CN
China
Prior art keywords
electroencephalogram
signals
signal
matrix
tracing
Prior art date
Legal status
Granted
Application number
CN202310595880.0A
Other languages
Chinese (zh)
Other versions
CN116369950B (en)
Inventor
艾青松 (Ai Qingsong)
赵梦圆 (Zhao Mengyuan)
陈昆 (Chen Kun)
刘泉 (Liu Quan)
马力 (Ma Li)
Current Assignee
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date
Filing date
Publication date
Application filed by Wuhan University of Technology WUT
Priority to CN202310595880.0A
Publication of CN116369950A
Application granted
Publication of CN116369950B
Active legal status
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316 Modalities, i.e. specific diagnostic methods
    • A61B 5/369 Electroencephalography [EEG]
    • A61B 5/377 Electroencephalography [EEG] using evoked responses
    • A61B 5/378 Visual stimuli
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Psychiatry (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Power Engineering (AREA)
  • Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a target detection method based on electroencephalogram tracing and multi-feature extraction, which comprises the following steps: acquiring electroencephalogram signals under rapid-serial image stimulation and preprocessing them; tracing the multichannel electroencephalogram signals through a computed head model, a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals, and extracting the peak and peak-to-peak values as time domain features; extracting features of five frequency bands from the second-order intrinsic mode function obtained by empirical mode decomposition, as time-frequency domain features of the cortical signals; constructing an optimal spatial filter with the common spatial pattern method to extract spatial domain features of the cortical signals; extracting a space-time data representation of the cortical signals through an extractor to obtain a depth feature map of the cortical signals; and fusing the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network through multi-scale series feature fusion. The invention can improve the performance of electroencephalogram-based target detection systems.

Description

Target detection method based on electroencephalogram tracing and multi-feature extraction
Technical Field
The invention relates to the technical field of computer image detection and electroencephalogram signal processing, in particular to a target detection method based on electroencephalogram tracing and multi-feature extraction.
Background Art
A brain-computer interface is a communication system that enables a person to communicate directly with the outside world or with devices, without going through the peripheral nerve and muscle output pathways. Electroencephalography (EEG) collects the brain's electrical activity from the scalp and, because it is non-invasive, is widely used in brain-computer interfaces.
Traditional target recognition tasks depend on computer vision techniques, are strongly affected by factors such as background, lighting and target state, and struggle to recognize weakly concealed targets or targets in complex environments. The human brain has powerful cognitive functions: it understands image semantics and can identify targets more accurately in complex environments. Rapid serial visual presentation (RSVP), in which images are displayed continuously at the same spatial location at a high rate of several images per second, is a brain-computer interface paradigm based on visual stimuli; the few target images in the image stream induce event-related potentials (ERPs) in the observer. By collecting the experimenter's electroencephalogram signals and analyzing the specific potentials evoked by the target images and their characteristics, target detection can be realized. Therefore, extracting effective, discriminative electroencephalogram features under target stimulation is the key to successful electroencephalogram-based target detection.
To improve target recognition performance based on electroencephalogram signals, various electroencephalogram decoding methods have been studied. Hierarchical discriminant component analysis, built on linear discriminant analysis, is often used for electroencephalogram decoding in rapid serial visual presentation tasks and has been improved by introducing Fisher linear discriminant classifiers, principal component analysis dimensionality reduction and the like; however, such algorithms still suffer from information redundancy, noise interference and similar problems. Convolutional neural networks have been shown to learn cross-time features in complex electroencephalogram signals, effectively improving target recognition performance. EEGNet, a compact convolutional neural network designed for electroencephalogram signals, shows strong robustness and excellent classification performance on many electroencephalogram decoding tasks. However, deep learning is an end-to-end approach whose feature extraction and decision results are difficult to interpret and understand.
Disclosure of Invention
Aiming at the technical problems in the prior art that the features used for target recognition in rapid serial visual presentation tasks are difficult to interpret and the accuracy is low, the invention provides a target detection method based on electroencephalogram tracing and multi-feature extraction.
In order to achieve the above purpose, the invention provides a target detection method based on electroencephalogram tracing and multi-feature extraction, which is characterized by comprising the following steps:
s1: collecting scalp electroencephalogram signals of a person when observing image stimulation presented by a rapid sequence;
s2: preprocessing the acquired scalp electroencephalogram signals through a band-pass filter and an independent component analysis method;
s3: according to the distribution of the acquisition electrodes of the scalp electroencephalogram signals, calculating a head model Headmodel and the noise covariance of the electroencephalogram signals;
s4: mapping the scalp electroencephalogram signals to the cerebral cortex by adopting a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals with high spatio-temporal resolution;
s5: extracting the peak and peak-to-peak values of the cortical neural activity signal as time domain features; extracting time-frequency domain features through empirical mode decomposition and five-frequency-band feature extraction; and extracting spatial domain features of the cortical neural activity signal by adopting the common spatial pattern method;
s6: constructing an MGIFNet neural network, extracting space-time data representation of signals through an MGI-based extractor, and extracting depth features of the signals;
s7: and carrying out multi-scale series feature fusion of the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network.
Preferably, in step S1, a rapid serial visual presentation experiment in the brain-computer interface paradigm is designed, and the electroencephalogram signals at the experimenter's scalp under image stimulation are acquired using an electroencephalogram cap.
Preferably, in step S2, 2-30 Hz band-pass filtering is performed on the electroencephalogram signals, and independent component analysis is further applied to the filtered signals to remove ocular and myoelectric artifacts, yielding multichannel electroencephalogram signals.
Preferably, in step S3, a head model Headmodel is calculated based on OpenMEEG software, and an electroencephalogram signal without a detection task before each experimental image stimulation is collected as a noise signal, so as to calculate a noise covariance of the electroencephalogram signal.
Preferably, the specific steps of step S5 include:
s5.1: for the cortical neural activity signals obtained in step S4, extracting the peak and peak-to-peak values of the cortical time series signals as time domain features, according to the characteristics of the sustained brain response under target stimulation;
s5.2: decomposing the cortical neural activity signal with the empirical mode decomposition method, according to the time-frequency response characteristics under target stimulation, into a series of intrinsic mode functions, and extracting the five-frequency-band features of the second-order intrinsic mode function as time-frequency domain features;
s5.3: since cortical neural activity signals with high spatio-temporal resolution are obtained through tracing, extracting the spatial domain features of the cortical neural activity signal using the common spatial pattern method.
Preferably, the specific steps of step S5.2 include:
s5.2.1: decomposing the cortical neural activity signal $x(t)$ through empirical mode decomposition into a series of intrinsic mode functions IMF, the original electroencephalogram signal being represented as:

$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t)$

wherein $c_i(t)$ represents the $i$-th intrinsic mode function, $i = 1, 2, \dots, n$, and $r_n(t)$ represents the residual function;

s5.2.2: selecting the second-order intrinsic mode function IMF, whose frequency content is concentrated mainly in 0-60 Hz, and further extracting the five-frequency-band features, the five bands comprising alpha, beta, delta, gamma and theta.
Preferably, the specific steps of step S5.3 include:
step S5.3.1: respectively solving the normalized covariance matrices of the two signal classes and the composite spatial covariance matrix:

$R_1 = \dfrac{X_1 X_1^T}{\mathrm{trace}(X_1 X_1^T)}$, $R_2 = \dfrac{X_2 X_2^T}{\mathrm{trace}(X_2 X_2^T)}$, $R = \bar{R}_1 + \bar{R}_2$

wherein $X_1$, $X_2$ respectively represent the two signal matrices evoked under the two tasks of target stimulation and non-target stimulation; $X_1^T$, $X_2^T$ respectively represent the transposes of $X_1$, $X_2$; $\mathrm{trace}(\cdot)$ represents summing the elements on the diagonal of a matrix; $R_1$, $R_2$ are respectively the normalized covariance matrices of the signal matrices $X_1$ and $X_2$; $\bar{R}_1$, $\bar{R}_2$ are respectively the average covariance matrices under the two task classes; and $R$ is the composite spatial covariance matrix;

step S5.3.2: performing eigenvalue decomposition on the composite spatial covariance matrix $R$:

$R = U \Lambda U^T$

wherein $U$ is the eigenvector matrix of $R$ and $\Lambda$ is the diagonal matrix formed by the corresponding eigenvalues; with the eigenvalues arranged in descending order, the whitening matrix $P$ is expressed as:

$P = \Lambda^{-1/2} U^T$

step S5.3.3: transforming the averaged covariance matrices of the two signal classes as follows:

$S_1 = P \bar{R}_1 P^T$, $S_2 = P \bar{R}_2 P^T$

wherein $P$ and $P^T$ are respectively the whitening matrix and its transpose; performing principal component decomposition on the two matrices then yields:

$S_1 = B \lambda_1 B^T$, $S_2 = B \lambda_2 B^T$

wherein $B$ is the common eigenvector matrix of $S_1$ and $S_2$ (after whitening the two share the same eigenvectors, with $\lambda_1 + \lambda_2 = I$), and $\lambda_1$, $\lambda_2$ are the diagonal matrices of the two sets of eigenvalues; the optimal spatial filter $W$ is then obtained as:

$W = B^T P$

step S5.3.4: the spatial domain features of the signals are obtained by filtering the signal matrices through the optimal filter $W$:

$Z_i = W X_i$

wherein $X_i$ is a task signal matrix and $Z_i$ is the spatial domain feature of the corresponding signal.
Preferably, in step S6, an MGI-based extractor is built in which a temporal block and a spatial block are adopted in sequence to extract the time-series signal dynamics and the relationships between brain regions respectively, so as to extract the space-time data representation of the cortical signals; and, following a deep network framework, an MGIFNet neural network is built to extract a depth feature map containing the space-time data representation of the cortex.
Preferably, in step S7, the multi-scale series feature fusion method refers to serial fusion of features at two scales, namely low-level features and high-level features; the low-level features are the time domain, time-frequency domain and spatial domain features, which contain more detailed information, while the high-level features are the depth features, which carry stronger semantic information after passing through more convolutional layers.
The invention also provides a computer readable storage medium storing a computer program which when executed by a processor realizes the target detection method based on electroencephalogram tracing and multi-feature extraction.
Aiming at the technical problems that feature extraction is difficult to interpret and accuracy is low in existing recognition methods, the target detection method based on electroencephalogram tracing and multi-feature extraction first traces the scalp electroencephalogram through a computed head model, a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals with high spatio-temporal resolution, and extracts the cortical neural activity signals by brain region. Based on the analysis of cortical neural activity under target stimulation, the peak and peak-to-peak values of the cortical signals are extracted as time domain features; the cortical signals are decomposed into a series of intrinsic mode functions through empirical mode decomposition, and the features of five frequency bands (alpha, beta, delta, gamma, theta) are extracted from the second-order intrinsic mode function as time-frequency domain features of the cortical signals; and an optimal spatial filter is constructed with the common spatial pattern method to extract the spatial domain features of the cortical signals. To further extract depth features of the cortical signals, the invention develops an MGI-based extractor to extract the space-time data representation of the cortical signals, and builds an MGIFNet neural network to obtain the depth feature map of the cortical signals.
Aiming at the technical problems that feature extraction is difficult to interpret and accuracy is low in existing electroencephalogram-based target recognition methods, the invention proposes acquiring cortical signals with high spatio-temporal resolution from electroencephalogram signals through a tracing method, analyzing the cortical neural activity under target stimulation, searching for the salient characteristics of the sustained brain responses under target and non-target stimulation, and extracting features in a targeted manner, thereby providing a theoretical basis for feature extraction and for building a classification model. The invention provides a multi-feature extraction method aimed at these salient characteristics, extracting time domain, time-frequency domain and spatial domain features in turn, and performs multi-scale series feature fusion on the resulting multi-features.
Compared with the prior art, the invention has the beneficial effects that:
1. mapping the electroencephalogram signals at the scalp to the inside of the brain through tracing to obtain cortical signals with high spatio-temporal resolution, analyzing the cortical neural activity, carrying out the subsequent feature extraction according to the characteristics of the cortical signals, and providing a theoretical explanation for the feature extraction and the whole target detection system;
2. according to the characteristics of the sustained brain response under target stimulation, the method specifically extracts time domain, time-frequency domain and spatial domain features and depth feature maps, and the multi-scale series feature fusion method under the multi-feature extraction condition can improve the performance of the target detection system;
3. the method provided by the invention not only provides interpretability for the feature extraction and classification results, but also can improve the performance of the target detection system based on the electroencephalogram signals by the multi-feature extraction and fusion method.
Drawings
FIG. 1 is a flow chart of a target detection method based on electroencephalogram tracing and multi-feature extraction in an embodiment of the invention;
FIG. 2 is a graph of the results after tracing from different stimuli, where (a) is the continuous brain response at the target stimulus and (b) is the continuous brain response at the non-target stimulus;
FIG. 3 is a specific flow chart of multi-feature extraction and fusion.
Detailed Description
The invention is described in further detail below with reference to the drawings and specific examples.
As shown in fig. 1, the target detection method based on electroencephalogram tracing and multi-feature extraction provided by the invention comprises the following steps:
s1: collecting scalp electroencephalogram signals of a person when observing image stimulation presented by a rapid sequence;
s2: preprocessing the acquired scalp electroencephalogram signals through a band-pass filter and an independent component analysis method;
s3: according to the distribution of the acquisition electrodes of the scalp electroencephalogram signals, calculating a head model Headmodel and the noise covariance of the electroencephalogram signals;
s4: mapping the scalp electroencephalogram signals to the cerebral cortex by adopting a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals with high spatio-temporal resolution;
s5: extracting the peak and peak-to-peak values of the cortical neural activity signal as time domain features; extracting time-frequency domain features through empirical mode decomposition and five-frequency-band feature extraction; and extracting spatial domain features of the cortical neural activity signal by adopting the common spatial pattern method;
s6: constructing an MGIFNet neural network, extracting space-time data representation of signals through an MGI-based extractor, and extracting depth features of the signals;
s7: and carrying out multi-scale series feature fusion of the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network.
The embodiment of the invention provides a target detection method based on electroencephalogram tracing and multi-feature extraction, which obtains cortical neural activity by tracing multichannel electroencephalogram signals, extracts multiple features from the cortical signals through empirical mode decomposition, the common spatial pattern method, an MGI-based depth network and other methods, and thereby completes electroencephalogram-based target detection. This solves, or at least partially solves, the technical problems of prior-art electroencephalogram-based target detection methods whose feature extraction is difficult to interpret and whose accuracy is low, achieving the technical effect of improving electroencephalogram-based target detection performance.
In order to achieve the technical effects, the general idea of the invention is as follows:
firstly, electroencephalogram signals are acquired under rapid-serial image stimulation and preprocessed with a band-pass filter and independent component analysis; the preprocessed multichannel electroencephalogram signals are traced through a computed head model, a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals, and the peak and peak-to-peak values of the cortical signals are extracted as time domain features. Features of the five frequency bands alpha, beta, delta, gamma and theta are extracted from the second-order intrinsic mode function obtained by empirical mode decomposition, as time-frequency domain features of the cortical signals; and an optimal spatial filter is constructed with the common spatial pattern method to extract spatial domain features of the cortical signals. The invention develops an MGI-based extractor to extract the space-time data representation of the cortical signals and obtain their depth feature map, and carries out multi-scale series feature fusion of the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network.
The specific implementation process of the invention is as follows:
step S1 is first performed: scalp electroencephalogram signals of a person when observing image stimuli presented in a rapid sequence are collected.
Specifically, a rapid serial visual presentation experiment in the brain-computer interface paradigm is designed, and the electroencephalogram signals at the experimenter's scalp under image stimulation are acquired with an electroencephalogram cap. According to the electrode distribution of the scalp electroencephalogram acquisition, the invention computes, based on OpenMEEG, a head model (Headmodel) matched to the electrode distribution together with the noise covariance of the electroencephalogram signals, and then maps the scalp electroencephalogram signals to the cerebral cortex with a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals with high spatio-temporal resolution. The cortical neural activity is analyzed, and the subsequent feature extraction is carried out according to its characteristics, providing a theoretical explanation for the feature extraction and for the target detection system as a whole.
Then step S2 is performed: the electroencephalogram signals are preprocessed through a band-pass filter and an independent component analysis method.
Specifically, 2-30 Hz band-pass filtering is applied to the electroencephalogram signals, and independent component analysis is further applied to the filtered signals to remove ocular and myoelectric artifacts, yielding multichannel electroencephalogram signals.
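This preprocessing can be sketched with the MNE-Python library (an assumption; the patent names no toolbox). The file name and the number and indices of ICA components below are illustrative placeholders:

```python
# Illustrative sketch of step S2 with MNE-Python (toolbox choice is an assumption).
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("rsvp_eeg_raw.fif", preload=True)  # hypothetical file

# 2-30 Hz band-pass filtering, as specified in step S2.
raw.filter(l_freq=2.0, h_freq=30.0)

# Independent component analysis to remove ocular (EOG) and muscular (EMG) artifacts.
ica = ICA(n_components=20, random_state=42)
ica.fit(raw)
ica.exclude = [0, 1]              # artifact components, chosen here by inspection
raw_clean = ica.apply(raw.copy())
```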
Then step S3 is performed: and calculating the noise covariance of the head model Headmodel and the electroencephalogram signals based on OpenMEEG according to the distribution of the acquisition electrodes of the scalp electroencephalogram signals.
Specifically, according to the alignment of the electroencephalogram acquisition electrodes with a standard anatomical structure, the head model (Headmodel) is computed with the OpenMEEG software; the electroencephalogram signal recorded before each experimental image stimulus, when no detection task is present, is collected as the noise signal and used to compute the noise covariance of the electroencephalogram signals;
step S4 is then performed: and mapping scalp brain electrical signals to the cerebral cortex by adopting a minimum standard imaging algorithm and an sLORTEA brain tomography method to obtain cortical nerve activity signals with high space-time resolution.
Specifically, tracing builds an inverse-model mapping from the scalp electroencephalogram signals to recover cerebral cortical neural activity. The scalp electroencephalogram signals are mapped to the cerebral cortex using minimum norm imaging and the sLORETA brain tomography method, yielding cortical neural activity signals with high spatio-temporal resolution. The traced sustained cortical brain responses are shown in fig. 2, which illustrates the differing cortical neural activity under the two stimulus types: (a) in fig. 2 shows the sustained brain response under target stimulation, and (b) in fig. 2 shows the sustained brain response under non-target stimulation.
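Steps S3 and S4 can likewise be sketched with MNE-Python; this is only an approximation, since the patent computes the head model with OpenMEEG. Here `epochs` and `fwd` are assumed to be a stimulus-locked trial set and a precomputed forward (head) model matching the electrode layout:

```python
# Sketch of steps S3-S4 under the above assumptions.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Noise covariance from the pre-stimulus, task-free interval (step S3).
noise_cov = mne.compute_covariance(epochs, tmax=0.0, method="shrunk")

# Minimum-norm inverse operator and sLORETA source estimate (step S4).
inv = make_inverse_operator(epochs.info, fwd, noise_cov)
evoked = epochs.average()
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="sLORETA")
```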
Step S5 is then performed: extracting the peak and peak-to-peak values of the cortical signal as time domain features; extracting time-frequency domain features through empirical mode decomposition and five-frequency-band feature extraction; and extracting the spatial domain features of the cortical signals by adopting the common spatial pattern method.
Specifically, according to the characteristics of the sustained cortical response under target stimulation, the peak and peak-to-peak values of the time series cortical signals are extracted as time domain features. Empirical mode decomposition decomposes signals according to the time scales present in the data and is a time domain signal processing method: it decomposes a complex signal into a finite number of intrinsic mode functions, each containing characteristic information of the source signal at a different time scale. According to the characteristics of the electroencephalogram signals, the features of five frequency bands are extracted from the second-order intrinsic mode function as the time-frequency domain features of the cortical signals. The common spatial pattern method is a spatial domain feature extraction method commonly used for electroencephalogram classification; by constructing an optimal spatial filter to project the cortical signals, feature vectors that maximize the difference between target and non-target signals, and are therefore highly discriminative, can be obtained.
Step S6 is then performed: constructing an MGIFNet neural network, and developing an MGI-based extractor to extract the space-time data representation of the signals, from which the depth features of the signals are extracted.
Specifically, an MGI-based extractor is built in which a temporal block and a spatial block are adopted in sequence to extract the time-series signal dynamics and the relationships between brain regions respectively, so as to extract the space-time data representation of the cortical signals; following a deep network framework, an MGIFNet neural network is built to extract a depth feature map containing the space-time data representation of the cortex.
Finally, step S7 is executed: carrying out multi-scale series feature fusion of the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network.
Fig. 1 is the general flow chart of the electroencephalogram-based target detection method in this embodiment. After electroencephalogram acquisition is completed, the EEG signals are first preprocessed, including band-pass filtering and independent component analysis, and then traced. The multi-feature extraction then proceeds as follows: according to the differences in cortical neural activity, the peak and peak-to-peak values of the cortical signals are first computed as time domain features; empirical mode decomposition is performed, and five-frequency-band feature extraction on the second-order intrinsic mode function yields the time-frequency domain features of the cortical signals; an optimal spatial filter is constructed with the common spatial pattern method to extract the spatial domain features of the cortical signals; an MGI-based extractor is then built within a deep learning network to extract the depth feature map of the cortical signals; and finally multi-scale series feature fusion is performed on all the features.
Fig. 3 shows the multi-feature extraction and fusion flow, covering steps S5, S6 and S7. The multi-feature extraction method mainly comprises steps S5 and S6.
Step S5 specifically comprises:
s5.1: and extracting peak values and peak values of cortex time sequence signals as time domain features according to the characteristics of continuous brain response under target stimulation for cortex signals extracted according to brain regions after tracing.
S5.2: according to the time-frequency response characteristics under the target stimulus, decomposing the cortical signal by adopting an empirical mode decomposition method, decomposing the cortical signal into a series of eigenmode functions, and extracting the five-frequency-band characteristics of the second-order eigenmode functions to serve as the time-frequency-domain characteristics.
S5.3: because the cortical signal with high space-time resolution is obtained through tracing, the spatial domain features of the cortical signal are extracted by using a co-space mode.
Specifically, step S5.2 mainly includes:
step s5.2.1: the cortical signal x (t) is decomposed by eigenmode decomposition into a series of eigenmode functions (IMFs). The original brain electrical signal can be expressed as:
Figure SMS_24
. Wherein->
Figure SMS_25
Representing the residual function, i=1, 2, … n.
The eigenmode decomposition can filter out relatively high frequencies in the signal in one pass, and the eigenmode functions gradually get closer to low frequencies.
Step S5.2.2: and selecting a second-order IMF with the frequency basically concentrated at 0-60 Hz, and further extracting the characteristics of five frequency bands (alpha, beta, delta, gamma, theta).
Further, step S5.3 mainly includes:
Step S5.3.1: respectively solving the normalized covariance matrices of the two signal classes and the composite spatial covariance matrix:

$R_1 = \dfrac{X_1 X_1^T}{\mathrm{trace}(X_1 X_1^T)}$, $R_2 = \dfrac{X_2 X_2^T}{\mathrm{trace}(X_2 X_2^T)}$, $R = \bar{R}_1 + \bar{R}_2$

wherein $X_1$, $X_2$ respectively represent the two signal matrices evoked under the two tasks (target stimulation and non-target stimulation); $X_1^T$, $X_2^T$ respectively represent the transposes of $X_1$, $X_2$; $\mathrm{trace}(\cdot)$ represents summing the elements on the diagonal of a matrix; $R_1$, $R_2$ are respectively the normalized covariance matrices of the signal matrices $X_1$ and $X_2$; $\bar{R}_1$, $\bar{R}_2$ are respectively the average covariance matrices under the two task classes; and $R$ is the composite spatial covariance matrix.

Step S5.3.2: performing eigenvalue decomposition on the composite spatial covariance matrix $R$:

$R = U \Lambda U^T$

wherein $U$ is the eigenvector matrix of $R$ and $\Lambda$ is the diagonal matrix formed by the corresponding eigenvalues. With the eigenvalues arranged in descending order, the whitening matrix $P$ is expressed as:

$P = \Lambda^{-1/2} U^T$

Step S5.3.3: transforming the averaged covariance matrices of the two signal classes as follows:

$S_1 = P \bar{R}_1 P^T$, $S_2 = P \bar{R}_2 P^T$

wherein $P$ and $P^T$ are respectively the whitening matrix and its transpose. Performing principal component decomposition on the two matrices then yields:

$S_1 = B \lambda_1 B^T$, $S_2 = B \lambda_2 B^T$

wherein $B$ is the common eigenvector matrix of $S_1$ and $S_2$ (after whitening the two share the same eigenvectors, with $\lambda_1 + \lambda_2 = I$), and $\lambda_1$, $\lambda_2$ are the diagonal matrices of the two sets of eigenvalues. The optimal spatial filter $W$ is then obtained as:

$W = B^T P$

Step S5.3.4: the spatial domain features of the signals are obtained by filtering the signal matrices through the optimal filter $W$:

$Z_i = W X_i$

wherein $X_i$ is a task signal matrix and $Z_i$ is the spatial domain feature of the corresponding signal.
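The whole of step S5.3 reduces to a few lines of NumPy; the following self-contained sketch mirrors steps S5.3.1 to S5.3.4 (the trial lists X1, X2 are assumptions for illustration):

```python
# NumPy sketch of the common spatial pattern filter of step S5.3.
# X1, X2 are lists of (n_channels, n_samples) trials for the two classes.
import numpy as np

def avg_norm_cov(trials):
    # Mean of the trace-normalized covariance matrices (step S5.3.1).
    return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)

def csp_filter(X1, X2):
    R1_bar, R2_bar = avg_norm_cov(X1), avg_norm_cov(X2)
    R = R1_bar + R2_bar                 # composite spatial covariance

    # Step S5.3.2: eigendecomposition and whitening matrix P.
    lam, U = np.linalg.eigh(R)
    order = np.argsort(lam)[::-1]       # eigenvalues in descending order
    lam, U = lam[order], U[:, order]
    P = np.diag(lam ** -0.5) @ U.T

    # Step S5.3.3: eigenvectors of the whitened class covariance
    # (S2 = P @ R2_bar @ P.T shares the same eigenvectors B).
    S1 = P @ R1_bar @ P.T
    lam1, B = np.linalg.eigh(S1)
    return B.T @ P                      # optimal spatial filter W

# Step S5.3.4: spatial domain features Z = W @ X for a trial X.
```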
Step S6 specifically comprises:
S6.1: in order to further extract the depth features of the cortical signals, building an MGI-based extractor to extract the space-time data representation of the cortical signals;
S6.2: building an MGIFNet neural network on top of the MGI extractor within a deep learning framework to obtain the space-time depth feature map of the cortical signals.
In step S6, the MGI-based extractor first samples in the time dimension at an exponentially decaying sampling rate, converting the data into multiple granularity levels and obtaining multi-granularity data information, which facilitates modeling of the time series signal. The MGIFNet neural network adopts a temporal block to extract the dynamics of the cortical signal sequence, and applies spatial convolution operations across the different brain-region channels to extract the spatial representation of the cortical signals, i.e. a spatial block is constructed to capture the relationships between brain regions.
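The patent does not disclose the exact MGIFNet architecture, so the following PyTorch sketch is speculative: it only illustrates the three ingredients named above, namely exponentially decaying sampling rates, a temporal block, and a spatial block over brain-region channels; all layer sizes are assumptions:

```python
# Speculative sketch of the MGI-based extractor (architecture details assumed).
import torch
import torch.nn as nn

class MGIExtractor(nn.Module):
    def __init__(self, n_regions: int, n_filters: int = 8):
        super().__init__()
        # Temporal block: convolution along the time axis.
        self.temporal = nn.Conv2d(1, n_filters, kernel_size=(1, 25), padding=(0, 12))
        # Spatial block: convolution across all brain-region channels.
        self.spatial = nn.Conv2d(n_filters, n_filters, kernel_size=(n_regions, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_regions, n_samples)
        feats = []
        for stride in (1, 2, 4):                    # exponentially decaying sampling
            xg = x[..., ::stride]                   # coarser temporal granularity
            h = torch.relu(self.temporal(xg))       # temporal block
            h = torch.relu(self.spatial(h))         # spatial block across regions
            feats.append(h.mean(dim=-1).flatten(1)) # pooled depth features
        return torch.cat(feats, dim=1)
```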
In a specific implementation, the multi-feature extraction method includes: mapping the scalp electroencephalogram signals to the cerebral cortex by tracing, extracting the signals by brain region, and extracting features in a targeted manner according to the characteristics of the sustained brain response under target stimulation. The peak and peak-to-peak values of the time series signals are extracted as time domain features; empirical mode decomposition combined with five-frequency-band feature extraction serves as the time-frequency domain feature extraction method; the common spatial pattern method is adopted to construct an optimal spatial filter that extracts the spatial domain features of the signals; finally, a deep learning network with an MGI-based extractor is built to extract the depth space-time representation feature map of the cortical signals, and multi-scale series feature fusion is performed on all the features. The flow diagram is shown in fig. 3.
In step S6 of this embodiment, an MGI-based extractor is developed to extract the space-time data representation of the cortical signals: a temporal block extracts the dynamics of the cortical signal sequences, and spatial convolution operations across the different brain regions extract the spatial representation of the cortical signals, i.e. a spatial block is constructed to capture the relationships between brain regions; an MGIFNet neural network is then built to obtain the depth feature map of the cortical signals. In step S7, a multi-scale series feature fusion method is constructed, fusing features at two scales, the low-level and high-level features, in series; the low-level features are the time domain, time-frequency domain and spatial domain features, which contain more detailed information, and the high-level features are the depth features, which carry stronger semantic information after passing through more convolutional layers.
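Step S7 then amounts to concatenating the hand-crafted features with the depth features before a classifier; a minimal sketch, with all tensor names as hypothetical placeholders:

```python
# Sketch of step S7: multi-scale series (concatenation) feature fusion.
import torch
import torch.nn as nn

# time_feats, tf_feats, spatial_feats: hand-crafted low-level feature tensors,
# shape (batch, d_i); extractor(eeg_batch): high-level depth features (assumed names).
low = torch.cat([time_feats, tf_feats, spatial_feats], dim=1)  # low-level features
high = extractor(eeg_batch)                                    # depth feature map
fused = torch.cat([low, high], dim=1)                          # series fusion

classifier = nn.Linear(fused.shape[1], 2)  # target vs. non-target
logits = classifier(fused)
```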
What is not described in detail in this specification is prior art known to those skilled in the art.
Finally, it should be noted that the above-mentioned embodiments are only for illustrating the technical solution of the present patent and not for limiting the same, and although the present patent has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present patent may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present patent, and all such embodiments are included in the scope of the claims of the present patent.

Claims (10)

1. A target detection method based on electroencephalogram tracing and multi-feature extraction, characterized by comprising the following steps:
s1: collecting scalp electroencephalogram signals of a person when observing image stimulation presented by a rapid sequence;
s2: preprocessing the acquired scalp electroencephalogram signals through a band-pass filter and an independent component analysis method;
s3: according to the distribution of the acquisition electrodes of the scalp electroencephalogram signals, calculating a head model Headmodel and the noise covariance of the electroencephalogram signals;
s4: mapping the scalp electroencephalogram signals to the cerebral cortex by adopting a minimum norm imaging algorithm and the sLORETA brain tomography method to obtain cortical neural activity signals with high spatio-temporal resolution;
s5: extracting the peak and peak-to-peak values of the cortical neural activity signal as time domain features; extracting time-frequency domain features through empirical mode decomposition and five-frequency-band feature extraction; and extracting spatial domain features of the cortical neural activity signal by adopting the common spatial pattern method;
s6: constructing an MGIFNet neural network, extracting space-time data representation of signals through an MGI-based extractor, and extracting depth features of the signals;
s7: and carrying out multi-scale series feature fusion of the low-level time domain, time-frequency domain and spatial domain features with the high-level depth features extracted by the MGIFNet neural network.
2. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, characterized in that: in step S1, a rapid serial visual presentation experiment in the brain-computer interface paradigm is designed, and the electroencephalogram signals at the experimenter's scalp under image stimulation are acquired using an electroencephalogram cap.
3. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, wherein the method is characterized in that: in step S2, 2-30Hz band-pass filtering is carried out on the electroencephalogram signals, and the filtered signals are further subjected to independent component analysis to remove electro-oculogram and myoelectric artifacts and obtain multichannel electroencephalogram signals.
4. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, wherein the method is characterized in that: in step S3, a head model Headmodel is calculated based on OpenMEEG software, and an electroencephalogram signal without a detection task before each experimental image stimulation is collected as a noise signal, so as to calculate the noise covariance of the electroencephalogram signal.
5. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, wherein the method is characterized in that: the specific steps of the step S5 include:
s5.1: for the cortical neural activity signals obtained in step S4, extracting the peak and peak-to-peak values of the cortical time series signals as time domain features, according to the characteristics of the sustained brain response under target stimulation;
s5.2: decomposing the cortical neural activity signal with the empirical mode decomposition method, according to the time-frequency response characteristics under target stimulation, into a series of intrinsic mode functions, and extracting the five-frequency-band features of the second-order intrinsic mode function as time-frequency domain features;
s5.3: since cortical neural activity signals with high spatio-temporal resolution are obtained through tracing, extracting the spatial domain features of the cortical neural activity signal using the common spatial pattern method.
6. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 5, wherein the target detection method is characterized by comprising the following steps of: the specific steps of step S5.2 include:
s5.2.1: decomposing the cortical neural activity signal $x(t)$ through empirical mode decomposition into a series of intrinsic mode functions IMF, the original electroencephalogram signal being represented as:

$x(t) = \sum_{i=1}^{n} c_i(t) + r_n(t)$

wherein $c_i(t)$ represents the $i$-th intrinsic mode function, $i = 1, 2, \dots, n$, and $r_n(t)$ represents the residual function;

s5.2.2: selecting the second-order intrinsic mode function IMF, whose frequency content is concentrated mainly in 0-60 Hz, and further extracting the five-frequency-band features, the five bands comprising alpha, beta, delta, gamma and theta.
7. The target detection method based on electroencephalogram tracing and multi-feature extraction as claimed in claim 6, wherein the method is characterized by comprising the following steps: the specific steps of step S5.3 include:
step S5.3.1: respectively solving the normalized covariance matrices of the two signal classes and the composite spatial covariance matrix:

$R_1 = \dfrac{X_1 X_1^T}{\mathrm{trace}(X_1 X_1^T)}$, $R_2 = \dfrac{X_2 X_2^T}{\mathrm{trace}(X_2 X_2^T)}$, $R = \bar{R}_1 + \bar{R}_2$

wherein $X_1$, $X_2$ respectively represent the two signal matrices evoked under the two tasks of target stimulation and non-target stimulation; $X_1^T$, $X_2^T$ respectively represent the transposes of $X_1$, $X_2$; $\mathrm{trace}(\cdot)$ represents summing the elements on the diagonal of a matrix; $R_1$, $R_2$ are respectively the normalized covariance matrices of the signal matrices $X_1$ and $X_2$; $\bar{R}_1$, $\bar{R}_2$ are respectively the average covariance matrices under the two task classes; and $R$ is the composite spatial covariance matrix;

step S5.3.2: performing eigenvalue decomposition on the composite spatial covariance matrix $R$:

$R = U \Lambda U^T$

wherein $U$ is the eigenvector matrix of $R$ and $\Lambda$ is the diagonal matrix formed by the corresponding eigenvalues; with the eigenvalues arranged in descending order, the whitening matrix $P$ is expressed as:

$P = \Lambda^{-1/2} U^T$

step S5.3.3: transforming the averaged covariance matrices of the two signal classes as follows:

$S_1 = P \bar{R}_1 P^T$, $S_2 = P \bar{R}_2 P^T$

wherein $P$ and $P^T$ are respectively the whitening matrix and its transpose; performing principal component decomposition on the two matrices then yields:

$S_1 = B \lambda_1 B^T$, $S_2 = B \lambda_2 B^T$

wherein $B$ is the common eigenvector matrix of $S_1$ and $S_2$ (after whitening the two share the same eigenvectors, with $\lambda_1 + \lambda_2 = I$), and $\lambda_1$, $\lambda_2$ are the diagonal matrices of the two sets of eigenvalues; the optimal spatial filter $W$ is then obtained as:

$W = B^T P$

step S5.3.4: the spatial domain features of the signals are obtained by filtering the signal matrices through the optimal filter $W$:

$Z_i = W X_i$

wherein $X_i$ is a task signal matrix and $Z_i$ is the spatial domain feature of the corresponding signal.
8. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, characterized in that: in step S6, an MGI-based extractor is built in which a temporal block and a spatial block are adopted in sequence to extract the time-series signal dynamics and the relationships between brain regions respectively, so as to extract the space-time data representation of the cortical signals, and, following a deep network framework, an MGIFNet neural network is built to extract a depth feature map containing the space-time data representation of the cortex.
9. The target detection method based on electroencephalogram tracing and multi-feature extraction according to claim 1, characterized in that: in step S7, the multi-scale series feature fusion comprises fusion of information at two scales: the low-level features, composed of the time domain, time-frequency domain and spatial domain features containing more detailed information, and the high-level depth features, which carry stronger semantic information after more convolutional layers.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 9.
CN202310595880.0A 2023-05-25 2023-05-25 Target detection method based on electroencephalogram tracing and multi-feature extraction Active CN116369950B (en)

Priority Applications (1)

Application CN202310595880.0A, priority date 2023-05-25, filing date 2023-05-25: Target detection method based on electroencephalogram tracing and multi-feature extraction (granted as CN116369950B)

Publications (2)

Publication Number Publication Date
CN116369950A 2023-07-04
CN116369950B 2024-01-26

Family

ID=86971258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310595880.0A Active CN116369950B (en) 2023-05-25 2023-05-25 Target detection method based on electroencephalogram tracing and multi-feature extraction

Country Status (1)

Country Link
CN (1) CN116369950B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111110230A (en) * 2020-01-09 2020-05-08 燕山大学 Motor imagery electroencephalogram feature enhancement method and system
KR20200071807A (en) * 2018-11-30 2020-06-22 인하대학교 산학협력단 Human emotion state recognition method and system using fusion of image and eeg signals
CN111616701A (en) * 2020-04-24 2020-09-04 杭州电子科技大学 Electroencephalogram multi-domain feature extraction method based on multivariate variational modal decomposition
CN113158793A (en) * 2021-03-15 2021-07-23 东北电力大学 Multi-class motor imagery electroencephalogram signal identification method based on multi-feature fusion
CN113918008A (en) * 2021-08-30 2022-01-11 北京大学 Brain-computer interface system based on source space brain magnetic signal decoding and application method
CN114533086A (en) * 2022-02-21 2022-05-27 昆明理工大学 Motor imagery electroencephalogram decoding method based on spatial domain characteristic time-frequency transformation
CN114861738A (en) * 2022-07-05 2022-08-05 武汉理工大学 Electroencephalogram tracing and dipole selection-based motor imagery classification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QUAN LIU et al., "Research on Channel Selection and Multi-Feature Fusion of EEG Signals for Mental Fatigue Detection," Entropy, pp. 1-17.
QU Ruowei et al., "Research on EEG source localization of epileptogenic foci based on a realistic head model and a multi-dipole algorithm," Journal of Biomedical Engineering, vol. 40, no. 2, pp. 272-279.

Also Published As

Publication number Publication date
CN116369950B (en) 2024-01-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant