CN113768474A - Anesthesia depth monitoring method and system based on graph convolution neural network - Google Patents

Anesthesia depth monitoring method and system based on graph convolution neural network

Info

Publication number
CN113768474A
CN113768474A (application CN202111346082.1A)
Authority
CN
China
Prior art keywords
graph
anesthesia
neural network
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111346082.1A
Other languages
Chinese (zh)
Other versions
CN113768474B
Inventor
马力
刘泉
艾青松
陈昆
谢田立
肖智文
明法畅
邹家喻
徐子严
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202111346082.1A priority Critical patent/CN113768474B/en
Publication of CN113768474A publication Critical patent/CN113768474A/en
Application granted granted Critical
Publication of CN113768474B publication Critical patent/CN113768474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4821Determining level or depth of anaesthesia
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/37Intracranial electroencephalography [IC-EEG], e.g. electrocorticography [ECoG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Psychiatry (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physiology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychology (AREA)
  • Fuzzy Systems (AREA)
  • Software Systems (AREA)
  • Neurosurgery (AREA)
  • Anesthesiology (AREA)
  • Power Engineering (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an anesthesia depth monitoring method and system based on a graph convolutional neural network. The system comprises: a data preprocessing module, used for preprocessing the cortical electroencephalogram signals; a functional network construction module, used for calculating the phase lag index (PLI) of the sample data and an adjacency matrix for each sample, yielding network topology graph samples for the different anesthesia stages; a graph conversion module, used for the dual-graph conversion, which turns each graph sample, a weighted graph built from the phase lag index, into a dual graph and builds a new fully connected graph for the node features; and a dual-stream graph convolutional neural network module, in which model 1 extracts edge-weight information, model 2 extracts node-feature information, and the class probabilities output by the two models are added to obtain the prediction result. The invention identifies new features that distinguish different anesthesia states; the classification accuracy for the awake, moderate-anesthesia and deep-anesthesia states reaches 95.4%, so the different states of anesthesia can be monitored well.

Description

Anesthesia depth monitoring method and system based on graph convolution neural network
Technical Field
The invention relates to the fields of biomedical signal processing and deep learning, and in particular to an anesthesia depth monitoring method and system based on a graph convolutional neural network.
Background
In general anesthesia surgery, the anesthesiologist needs to monitor the patient's anesthesia state in real time. An anesthesia monitor helps the anesthesiologist track the patient's depth of anesthesia and avoid unexpected intraoperative awareness. If anesthesia is too deep, the patient may recover slowly after the operation and may even suffer lasting harm to the nervous system; if anesthesia is too shallow, the patient may wake up during the operation and be left with psychological trauma. Real-time monitoring of the patient's depth of anesthesia during surgery is therefore very important.
Currently, the clinical techniques commonly used to monitor the depth of anesthesia include bispectral index (BIS) analysis of the EEG, auditory evoked potentials (AEP), anesthesia entropy indices, and the like. These techniques monitor the depth of anesthesia by processing EEG signals, which record the electrical activity of the cerebral cortex non-invasively from the scalp and are readily available. However, the currently popular anesthesia depth monitoring techniques still have drawbacks: for example, BIS is not effective for isoflurane-induced anesthesia, varies widely between individuals, and its algorithm is not publicly disclosed. It is therefore necessary to explore more stable anesthesia depth monitoring algorithms.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by constructing a brain functional network from electroencephalogram signals and, in combination with a dual-stream graph convolutional neural network, provides an anesthesia depth monitoring method and system based on a graph convolutional neural network.
In order to achieve the above object, the invention provides an anesthesia depth monitoring method based on a graph convolution neural network, which is characterized in that the method comprises the following steps:
1) collecting electroencephalogram signals of a plurality of channels and preprocessing the original signals;
2) cutting a number of time slices from different anesthesia stages as data samples, calculating the phase lag index (PLI), and calculating an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of each network topology graph into a dual graph, the converted dual graphs all having the same edge connections and the edge-weight information being carried by the node features of the dual graph, the graph sample before conversion being a weighted graph constructed from the phase lag index; at the same time retaining the original node features and constructing a fully connected matrix to represent their topology;
4) constructing a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data, and finding a common adjacency matrix for each of the two streams; of the two streams, one is the weighted graph data constructed from the phase lag index and the other is fully connected graph data retaining the original node features;
5) inputting the two streams of graph data into the two models of the dual-stream graph convolutional neural network respectively, performing graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and outputting the predicted value for each anesthesia stage through a fully connected layer;
6) adding, class by class, the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and outputting the class with the largest summed predicted value as the predicted anesthesia stage.
Preferably, the cortical electroencephalogram signal in step 1) is a 16-channel ECoG signal covering the frontal-parietal region of the subject's brain; the preprocessing comprises 0.1-100 Hz band-pass filtering, 50 Hz notch filtering and resampling to 200 Hz.
Preferably, the phase lag index PLI is calculated by:
let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$; the Hilbert transform is used to form the analytic signal of each channel, whose argument gives the instantaneous phase $\varphi_i(t)$:

$$z_i(t) = x_i(t) + j\,\hat{x}_i(t), \qquad \varphi_i(t) = \arg z_i(t), \qquad i = 1, 2$$

where $\hat{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and $j$ is the imaginary unit:

$$\hat{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,\mathrm{d}\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time and $\tau$ is the integration variable;

the relative phase locking between the two channels is computed as

$$z(t) = z_1(t)\,z_2^{*}(t)$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$;

the PLI value is then computed as

$$\mathrm{PLI} = \bigl|\bigl\langle \operatorname{sign}\bigl(\operatorname{Im}\, z(t)\bigr)\bigr\rangle\bigr|$$
PLI ranges between 0 and 1, with 0 indicating no phase lock between the two channels and 1 indicating perfect phase coupling between the two channels.
Preferably, the dual-stream graph convolutional neural network constructed in step 4) adopts the spectral-domain graph convolution method (GCN), which extends graph convolution into the graph frequency domain via the Fourier transform and filters the signal with a filter.
Preferably, in step 5) both models of the dual-stream graph convolutional neural network use spectral graph convolution with a convolution kernel approximated by Chebyshev polynomials, together with graph coarsening and fast pooling based on the Graclus multilevel clustering algorithm.
Preferably, the Graclus multilevel clustering algorithm uses a greedy algorithm to compute successive coarsened versions of the graph while minimizing the spectral clustering objective.
Preferably, in step 2) the mean absolute amplitude of each channel signal in each time slice is calculated and used as the node feature.
Preferably, the data samples in step 2) are randomly divided into a training set, a validation set and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the model's hyper-parameters and give a preliminary evaluation of the model, and the test set is used to evaluate the generalization ability of the final model.
The invention also provides an anesthesia depth monitoring system based on a graph convolutional neural network, which comprises a data preprocessing module, a functional network construction module, a graph conversion module and a dual-stream graph convolutional neural network module;
the data preprocessing module: used for preprocessing the cortical electroencephalogram signals;
the functional network construction module: used for cutting the sample data into a number of time segments from different anesthesia stages, calculating the phase lag index PLI, and calculating an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
the graph conversion module: used for converting the adjacency matrix of each network topology graph sample into a dual graph, the converted dual graphs all having the same edge connections and the edge-weight information being carried by the node features of the dual graph, the graph sample before conversion being a weighted graph constructed from the phase lag index; in addition, the original node features are retained and used to construct a new fully connected graph;
the dual-stream graph convolutional neural network module: model 1 is used to extract the edge-weight information and model 2 to extract the node-feature information, and the class probabilities output by the two models are added to obtain the prediction result.
The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the aforementioned anesthesia depth monitoring method based on a graph convolutional neural network.
The beneficial effects of the invention include:
1) The traditional spectral graph convolution method achieves excellent graph classification performance on the basis of a fixed adjacency matrix and is sensitive to node-feature information, but it cannot handle graphs with different network topologies. The invention therefore converts each weighted brain-network graph into its dual graph, so that all samples share the same adjacency matrix and the spectral graph convolution method can be applied directly.
2) A graph sample contains both edge weights and node features. The edge weights are converted by the dual-graph transformation into the node-feature input of the first model. For the node features of the graph, the invention provides a second graph convolution model: after the edge weights have been extracted, the original node information is retained and the relationships among the remaining nodes are considered equal, so the second graph convolution model uses a fully connected matrix with self-connections removed as its adjacency matrix.
3) The invention designs a dual-stream graph convolutional neural network structure and adopts a spectral graph convolution classification method, in which one stream extracts the edge-weight information and the other extracts the node-feature information; after the two models are trained, their prediction probabilities are summed to predict the test set.
4) The invention identifies new features that distinguish different anesthesia states and applies the combination of brain networks and graph convolutional neural networks to anesthesia depth monitoring. The classification accuracy for the awake, moderate-anesthesia and deep-anesthesia states reaches 95.4%, so the different states of anesthesia can be monitored well, providing a new method for clinical anesthesia monitoring.
The idea proposed by the invention is applicable not only to anesthesia data but also to electroencephalogram signal classification in other scenarios.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
Fig. 2 shows the 16 channels selected by the invention, covering the prefrontal-parietal region of the macaque brain.
Fig. 3 is an example of computing the adjacency matrix from the phase lag index.
Fig. 4 shows the network topology graphs of the macaque prefrontal-parietal region in the different states (from left to right: awake, moderate anesthesia and deep anesthesia).
Fig. 5 shows the adjacency matrices of the macaque prefrontal-parietal region in the different states (from left to right: awake, moderate anesthesia and deep anesthesia).
Fig. 6 illustrates an example of the dual-graph conversion.
Fig. 7 shows the adjacency matrices used by the two models.
Fig. 8 shows the confusion matrices of model 1, model 2 and the dual-stream model on the test set.
Fig. 9 shows the ROC curves of model 1, model 2 and the dual-stream model on the test set.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
The invention provides an anesthesia depth monitoring method based on a graph convolution neural network, which comprises the following steps:
1) collecting electroencephalogram signals of a plurality of channels, and preprocessing original signals;
2) cutting a number of time segments from different anesthesia stages as data samples, calculating the phase lag index (PLI), calculating an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, and calculating the mean absolute amplitude of each time segment as the node feature of the graph samples, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of each network topology graph into a dual graph, the converted dual graphs all having the same edge connections and the edge-weight information being carried by the node features of the dual graph, the graph sample before conversion being a weighted graph constructed from the phase lag index; at the same time retaining the original node features and constructing a fully connected matrix to represent their topology;
4) constructing a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data, and finding a common adjacency matrix for each of the two streams; of the two streams, one is the weighted graph data constructed from the phase lag index and the other is fully connected graph data retaining the original node features;
5) inputting the two streams of graph data into the two models of the dual-stream graph convolutional neural network respectively, performing graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and outputting the predicted value for each anesthesia stage through a fully connected layer;
6) adding, class by class, the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and outputting the class with the largest summed predicted value as the predicted anesthesia stage.
The following is a detailed description of the implementation of each step:
To realize the proposed invention, the depth of anesthesia of macaques under ketamine-medetomidine induction is explored using macaque anesthesia experiment data from a public database (http:// neuro.). The ECoG (electrocorticogram) signals, like EEG signals, are recordings of the electrical activity between pairs of electrodes; unlike EEG, ECoG is an invasive brain-computer interface recorded directly from the cortical surface and therefore has higher spatial resolution and higher signal quality. The experiment comprises a pre-anesthesia awake stage, an anesthesia induction stage, an anesthesia maintenance stage, an anesthesia recovery stage and a post-anesthesia awake stage. Time segments of equal length (1 s) are cut from the different stages, a brain topological network is constructed with a functional connectivity method (the phase lag index), and the mean absolute amplitude of each channel signal in each time segment is calculated as the node feature, yielding graph samples for three stages (awake, moderate anesthesia and deep anesthesia). The data are randomly divided into a training set, a validation set and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the model's hyper-parameters and give a preliminary evaluation of the model, and the test set is used to evaluate the generalization ability of the final model.
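As an illustration of this 8:1:1 split, the sketch below assumes the graph samples and their stage labels (0 = awake, 1 = moderate anesthesia, 2 = deep anesthesia) are held in NumPy arrays; the function and variable names are illustrative and not taken from the original implementation.

```python
import numpy as np

def split_811(graphs, labels, seed=0):
    """graphs: (n, ...) array of graph samples; labels: (n,) array of stage labels."""
    n = len(labels)
    idx = np.random.RandomState(seed).permutation(n)   # random shuffle of sample indices
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    tr = idx[:n_train]
    va = idx[n_train:n_train + n_val]
    te = idx[n_train + n_val:]
    return (graphs[tr], labels[tr]), (graphs[va], labels[va]), (graphs[te], labels[te])
```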
First, the macaque anesthesia experiment is described in detail; it comprises three stages.
11) Awake stage:
(a) AwakeEyeopen-START/END: the macaque rests with its eyes open.
(b) AwakeEyeClose-START/END: with its eyes covered, the macaque rests with its eyes closed.
12) Anesthesia stage:
(a) AnesticDrugInjection: intramuscular injection of ketamine-medetomidine.
(b) Anesthetized-START/END: the macaque is in a loss-of-consciousness (LOC) state. The macaque is considered to have entered the LOC state when it no longer responds to manipulation of its hand or to its nostril being touched with a cotton swab. In addition, LOC can be confirmed by observing slow-wave oscillations in the neural signals.
13) Recovery stage:
(a) AntagonistInjection: atipamezole is injected so that the monkey recovers from anesthesia.
(b) RecoveryEyeclosed-START/END: the time at which the slow-wave oscillations disappear from the neural signals is taken as the start of the eyes-closed recovery period, during which the macaque rests calmly with its eyes closed.
(c) RecoveryEyeOped-START/END: after the eye mask is removed, the monkey sits calmly with its eyes open.
The above is the task design of the anesthesia experiment.
The experimental data comprise 5 experiments on 2 macaques; the signals cover all stages before, during and after anesthesia, and the sampling rate is 1 kHz. A 16-channel ECoG signal covering the prefrontal-parietal region of the macaque is selected, as shown in Fig. 2, where the black dots mark the signal channels selected by the invention.
In step 2), the data samples are cut, the PLI is calculated, and the node features of the network topology graphs are computed.
21) Cutting the data samples:
After the data are preprocessed, a number of 1 s segments are cut uniformly from each stage of each experiment: 1000 segments of 1 s are cut per experiment for each of the awake, moderate-anesthesia and deep-anesthesia stages; because the amount of moderate-anesthesia data is small, a sliding window with a step of 0.1 s is used for that stage.
(a) The awake-stage data are cut from the pre-anesthesia awake stage and the post-anesthesia awake stage; the post-anesthesia awake data are taken from the late part of the recovery period, to ensure that the macaque is in an awake state.
(b) The moderate-anesthesia data are taken from the middle of the anesthesia induction period (from anesthetic injection to reaching LOC).
(c) The deep-anesthesia data are taken from the anesthesia maintenance period.
Thus, 5 × 3 × 1000 = 15000 data samples are obtained, where 5 is the number of experiments, 3 the number of stages and 1000 the number of segments per stage per experiment. After resampling, each sample contains 16 channels and 200 sampling points, and each sample corresponds to one network topology graph.
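The segmentation described above can be sketched as follows; the array layout of `stage_data`, the default window and step sizes, and the function name are assumptions for illustration (200 samples correspond to 1 s after resampling to 200 Hz, and a 20-sample step would correspond to the 0.1 s sliding step used for the short moderate-anesthesia stage).

```python
import numpy as np

def cut_segments(stage_data, win=200, step=200, max_segments=1000):
    """stage_data: (16, n_samples) array for one anesthesia stage of one experiment."""
    segments = []
    for start in range(0, stage_data.shape[1] - win + 1, step):
        segments.append(stage_data[:, start:start + win])   # one 1 s window of all channels
        if len(segments) == max_segments:
            break
    return np.stack(segments)                                # (n_segments, 16, 200)
```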
22) Calculating PLI to obtain an adjacency matrix;
the correlation between channels is calculated by using a Phase Lag Index (PLI), and the calculation method of the phase lag index comprises the following steps:
Let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$. The Hilbert transform is used to form the analytic signal of each channel, whose argument gives the instantaneous phase $\varphi_i(t)$:

$$z_i(t) = x_i(t) + j\,\hat{x}_i(t), \qquad \varphi_i(t) = \arg z_i(t), \qquad i = 1, 2$$

where $\hat{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and $j$ is the imaginary unit; the Hilbert transform is computed as

$$\hat{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,\mathrm{d}\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time and $\tau$ is the integration variable. After the phase of each channel signal has been computed, the relative phase locking between the two channels can be computed as

$$z(t) = z_1(t)\,z_2^{*}(t)$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$.

PLI ranges between 0 and 1, with 0 indicating no phase locking between the two channels and 1 indicating perfect phase coupling between the two channels. The PLI value is computed as

$$\mathrm{PLI} = \bigl|\bigl\langle \operatorname{sign}\bigl(\operatorname{Im}\, z(t)\bigr)\bigr\rangle\bigr|$$
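A minimal NumPy/SciPy sketch of the PLI between two channel signals, following the formulas above (this is an illustration, not the authors' implementation; the function name is an assumption):

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x1, x2):
    """x1, x2: 1-D arrays of equal length, e.g. one 1 s segment of two channels."""
    z1, z2 = hilbert(x1), hilbert(x2)        # analytic signals x_i(t) + j*H[x_i](t)
    # Im(z1 * conj(z2)) has the sign of sin(phi1 - phi2), the instantaneous phase difference
    s = np.sign(np.imag(z1 * np.conj(z2)))
    return float(np.abs(np.mean(s)))         # value in [0, 1]
```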
After the data samples are obtained, the phase correlation between channels is computed with the phase lag index formula; each sample yields a 16 × 16 adjacency matrix whose entries lie between 0 and 1. Fig. 3 gives an example of computing the adjacency matrix from the phase lag index: from the 1 s electrical signals (200 points) of node 1 and node 2, a correlation value $\mathrm{PLI}_{1,2}$ is computed according to the phase lag index formula above and placed at position (1,2) of the adjacency matrix, and the correlation value of node 2 with node 1 satisfies $\mathrm{PLI}_{2,1} = \mathrm{PLI}_{1,2}$; that is, the resulting adjacency matrix is a real symmetric matrix. Similarly, the correlation value between node 3 and node 15 is placed at position (3,15), the correlation value between node 15 and node 3 is placed at position (15,3), and $\mathrm{PLI}_{15,3} = \mathrm{PLI}_{3,15}$. By analogy, the entire 16 × 16 adjacency matrix is computed.
Fig. 4 shows the prefrontal-parietal brain network structure of the macaque Chibi in the three different stages; Fig. 5 shows the corresponding adjacency matrices of Chibi's topological network in the three stages (to make the distinction between stages more visible, the plotted adjacency matrices include self-connections).
23) The absolute value of the amplitude of each channel signal in each segment is computed and averaged, and the result is used as the node feature of each network topology graph, each node corresponding to one feature value.
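Putting the two pieces together, one graph sample per 1 s segment consists of a 16 × 16 PLI adjacency matrix and a 16-dimensional node-feature vector (mean absolute amplitude per channel). The sketch below is illustrative only and assumes the `phase_lag_index` function from the sketch above and a `(16, 200)` segment array.

```python
import numpy as np

def build_graph_sample(segment):
    """segment: (16, 200) array -> (A, node_feat) for one graph sample."""
    n_ch = segment.shape[0]
    A = np.zeros((n_ch, n_ch))
    for p in range(n_ch):
        for q in range(p + 1, n_ch):
            # symmetric PLI adjacency, zero diagonal (no self-connections)
            A[p, q] = A[q, p] = phase_lag_index(segment[p], segment[q])
    node_feat = np.mean(np.abs(segment), axis=1)   # mean absolute amplitude per channel
    return A, node_feat
```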
The dual-graph conversion of step 3) is illustrated in Fig. 6. The idea of the dual-graph conversion is to turn edges into nodes and nodes into edges: if two edges share a node, an edge is added between the corresponding nodes of the dual graph.
In Fig. 6, the original graph contains the edges 01, 02, 03, 12, 13 and 23, which become the nodes of the graph on the right, and the edge weights become node features. Edges 01 and 02 share node 0; edges 01 and 03 share node 0; edges 02 and 03 share node 0; and so on. For every pair of edges with a common node, a connection is added in the dual graph, with connection value 1.
Based on the above dual-graph conversion idea, the invention converts each 16 × 16 adjacency matrix obtained from the phase lag index into a dual graph of size 120 × 120, where 120 = (16 × 16 − 16)/2. The edge connections of the converted dual graphs are all identical, namely the adjacency matrix shown in Fig. 7 (left), and the original edge weights are expressed as the node features of the new graph.
On the other hand, the node features obtained earlier from the amplitude information are retained as the input of the other graph convolutional model, i.e. the second GCN model, which filters the retained node features based on a 16 × 16 adjacency matrix. Since the edge information between nodes has been extracted and fed into the first GCN model, the relationships among the nodes carrying the retained node information can be considered equal, so a fully connected matrix with self-connections removed is used as its adjacency matrix.
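The construction of the two graph representations fed to the two streams can be sketched as follows (illustrative function names, assuming the 16 × 16 PLI matrix `A` from above): the 120 upper-triangular edge weights become the node features of the dual graph, whose fixed 120 × 120 adjacency connects edges that share an endpoint, while the second stream keeps the 16 original node features and uses a fully connected 16 × 16 adjacency matrix without self-connections.

```python
import numpy as np
from itertools import combinations

def to_dual(A):
    """A: (16, 16) symmetric PLI matrix -> (dual node features, fixed dual adjacency)."""
    edges = list(combinations(range(A.shape[0]), 2))         # the 120 undirected edges
    feat = np.array([A[p, q] for p, q in edges])              # edge weights -> node features
    m = len(edges)                                            # 120 = (16*16 - 16) / 2
    dual_adj = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            if set(edges[a]) & set(edges[b]):                 # edges sharing a node are connected
                dual_adj[a, b] = dual_adj[b, a] = 1.0
    return feat, dual_adj

def full_connection_adj(n=16):
    """Fully connected adjacency matrix with self-connections removed (second stream)."""
    return np.ones((n, n)) - np.eye(n)
```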
Step 4): constructing the dual-stream graph convolutional neural network.
the double-flow graph convolution neural network comprises two models, wherein a GCN model I (hereinafter referred to as a model 1) designs a graph convolution-pooling structure with 6 layers and is used for extracting side weight information, namely graph data with the size of 120 x 120, and a GCN model II (hereinafter referred to as a model 2) designs a graph convolution-pooling structure with 4 layers and is used for extracting node characteristic information, namely graph data with the size of 16 x 16.
In computer vision, CNNs effectively extract features from images, whose regularly arranged pixels form Euclidean data. Scientific research, however, involves much non-Euclidean data such as social networks and protein structures. To apply machine learning methods to such non-Euclidean data, GCNs have become a research focus.
The initial difficulty in applying convolution to graph-structured data is parameter sharing. Images with regularly arranged elements satisfy translation invariance, so convolution kernels of the same size can be defined over the whole image; when the number of neighbouring nodes differs from node to node in a graph, convolution kernels of a fixed size cannot be used directly.
Graph convolutional neural networks comprise spectral methods and spatial methods. Facing the parameter-sharing problem, spectral methods define convolution in the spectral domain rather than in the node domain, since the node domain does not satisfy translation invariance and convolution kernels of the same size cannot be defined there; convolution is thus defined in the spectral domain and then transformed back to the spatial domain, solving the parameter-sharing problem. Spatial methods instead first determine the neighbour nodes of a target node, order them, and select a fixed number of neighbours for each node so that parameters can be shared, a different line of thought from spectral methods. Both approaches perform well on graph-related tasks. The invention adopts spectral graph convolution for the graph classification of both models.
The invention adopts the GCN approach of defining convolution directly in the spectral domain, which classifies graphs on the basis of the adjacency matrix.
Spectral convolution: graph convolution is extended into the graph frequency domain via the Fourier transform.

For an input signal $x \in \mathbb{R}^{N}$ and a filter $g_{\theta} = \mathrm{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^{N}$ in the Fourier domain, the spectral convolution is

$$g_{\theta} \star x = U\, g_{\theta}\, U^{\top} x$$

where $U$ is the eigenvector matrix of the graph Laplacian $L$. The Laplacian is

$$L = D - A = U \Lambda U^{\top}$$

where $A$ is the adjacency matrix, $D$ is the degree matrix, $\Lambda$ is the diagonal matrix of the eigenvalues of $L$, and $U^{\top} x$ is the graph Fourier transform of $x$.

To reduce the amount of computation, $g_{\theta}(\Lambda)$ is approximated by a $K$-th order Chebyshev polynomial expansion, giving the improved convolution kernel

$$g_{\theta}(\Lambda) \approx \sum_{k=0}^{K} \theta_{k}\, T_{k}(\tilde{\Lambda})$$

where $\tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I_{N}$, $\lambda_{\max}$ is the largest eigenvalue of the Laplacian $L$, and the $\theta_{k}$ are the Chebyshev coefficients.

The filter $g_{\theta}$ is then used to filter the signal $x$:

$$g_{\theta} \star x \approx \sum_{k=0}^{K} \theta_{k}\, T_{k}(\tilde{L})\, x$$

where $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_{N}$ is the rescaled Laplacian and $T_{k}(\tilde{L})$ is the Chebyshev polynomial of order $k$ evaluated at $\tilde{L}$. The terms $T_{k}(\tilde{L})\,x$ can be computed with the recurrence

$$T_{k}(\tilde{L})\,x = 2\tilde{L}\, T_{k-1}(\tilde{L})\,x - T_{k-2}(\tilde{L})\,x, \qquad T_{0}(\tilde{L})\,x = x, \quad T_{1}(\tilde{L})\,x = \tilde{L}\,x.$$
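A minimal NumPy sketch of one Chebyshev-approximated spectral filtering step for a single input and output channel, following the formulas above (illustrative only; in the actual network the coefficients `theta` are trainable parameters and the Laplacian is precomputed).

```python
import numpy as np

def chebyshev_filter(x, A, theta):
    """x: (N,) graph signal, A: (N, N) adjacency matrix, theta: (K+1,) Chebyshev coefficients."""
    D = np.diag(A.sum(axis=1))
    L = D - A                                          # graph Laplacian L = D - A
    lmax = np.linalg.eigvalsh(L).max()                 # largest eigenvalue of L
    L_tilde = 2.0 * L / lmax - np.eye(A.shape[0])      # rescaled Laplacian, spectrum in [-1, 1]
    Tx_prev, Tx = x, L_tilde @ x                       # T_0(L~)x = x, T_1(L~)x = L~ x
    out = theta[0] * Tx_prev
    if len(theta) > 1:
        out = out + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2.0 * (L_tilde @ Tx) - Tx_prev   # Chebyshev recurrence
        out = out + theta[k] * Tx
    return out
```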
After data preprocessing, stage-wise data segmentation, functional network construction and the other operations are completed, 30000 graph samples of the different stages are obtained. The resulting graphs are weighted graphs constructed from the phase lag index; unlike conventional binary graphs, weighted graphs express the brain-network topology in more detail. In addition, to achieve higher classification accuracy, the mean absolute amplitude of the channel signals is chosen as the node feature of the graph.
After the graph samples have been constructed from the topological structure and the node information, each graph sample is split into two streams of graph data, and a common adjacency matrix is found for each of the two streams; these adjacency matrices serve as the graph topologies input to the graph convolutional neural network and are used to compute the graph Laplacians. The first stream of graph data is obtained by the dual-graph conversion, which turns the edge weights into node features, to which graph convolution is sensitive, yielding an adjacency matrix of size 120 × 120; the other stream keeps the original node features, and after the original edge weights have been taken out, the relationships between nodes are considered equal, so a 16 × 16 fully connected adjacency matrix (with self-connections removed) is constructed.
In step 5), the two kinds of graph data obtained above are fed into the two models of the dual-stream graph convolutional neural network. As described for the dual-stream graph convolutional neural network of the invention, the input feature x of the first stream is one-dimensional, i.e. 120 × 1, and its input adjacency matrix is of size 120 × 120, as shown in Fig. 7 (left); the input feature x of the second stream is also one-dimensional, i.e. 16 × 1, and its input adjacency matrix is of size 16 × 16, as shown in Fig. 7 (right). The two stream models use the same convolution and pooling methods, namely spectral graph convolution with a Chebyshev-polynomial-approximated kernel and graph coarsening with fast pooling based on the Graclus multilevel clustering algorithm, but because the input data differ in size, different convolution-pooling structures are designed.
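One level of Graclus-style greedy coarsening can be sketched as below. This illustrates the general heuristic (visit unmarked nodes in random order and pair each with the unmarked neighbour that maximizes the local score w_ij(1/d_i + 1/d_j); unmatched nodes become singleton clusters) rather than the authors' exact implementation.

```python
import numpy as np

def graclus_one_level(W):
    """W: (n, n) dense symmetric weighted adjacency matrix -> (n,) cluster ids."""
    n = W.shape[0]
    d = W.sum(axis=1)                                   # node degrees (sum of edge weights)
    cluster = -np.ones(n, dtype=int)                    # -1 marks an unmatched node
    next_id = 0
    for i in np.random.permutation(n):
        if cluster[i] != -1:
            continue
        cluster[i] = next_id
        # score of merging i with each still-unmatched neighbour j
        scores = np.where((cluster == -1) & (W[i] > 0),
                          W[i] * (1.0 / np.maximum(d[i], 1e-12) + 1.0 / np.maximum(d, 1e-12)),
                          -np.inf)
        j = int(np.argmax(scores))
        if np.isfinite(scores[j]):
            cluster[j] = next_id                        # pair i with its best neighbour
        next_id += 1
    return cluster
```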
In step 6), the two models of the dual-stream graph convolutional neural network each output prediction probabilities. After the two models pass through their softmax classification layers, the prediction probability of each class is output, and the trained model 1 and model 2 each predict the test set. Let the prediction probabilities of model 1 for the awake, moderate-anesthesia and deep-anesthesia states be $P_1^{\mathrm{awake}}$, $P_1^{\mathrm{mod}}$ and $P_1^{\mathrm{deep}}$, and those of model 2 be $P_2^{\mathrm{awake}}$, $P_2^{\mathrm{mod}}$ and $P_2^{\mathrm{deep}}$. Adding the prediction probabilities of the two models class by class gives the prediction probabilities of the dual-stream graph convolutional neural network for the awake, moderate-anesthesia and deep-anesthesia states:

$$P^{\mathrm{awake}} = P_1^{\mathrm{awake}} + P_2^{\mathrm{awake}}, \qquad P^{\mathrm{mod}} = P_1^{\mathrm{mod}} + P_2^{\mathrm{mod}}, \qquad P^{\mathrm{deep}} = P_1^{\mathrm{deep}} + P_2^{\mathrm{deep}}$$

The maximum of these values is taken, and the class corresponding to that value is the predicted class.
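The decision fusion of step 6) amounts to summing the two softmax outputs per class and taking the arg-max, as in the following sketch (illustrative variable and function names):

```python
import numpy as np

def fuse_predictions(p_model1, p_model2):
    """p_model1, p_model2: (n_samples, 3) softmax probabilities from the two streams.

    Returns the predicted stage per sample: 0 = awake, 1 = moderate, 2 = deep anesthesia.
    """
    p_sum = p_model1 + p_model2          # class-wise sum of the two models' probabilities
    return np.argmax(p_sum, axis=1)      # class with the largest summed probability
```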
Based on the above method, the anesthesia depth monitoring system based on a graph convolutional neural network of the invention is shown in Fig. 1 and comprises a data preprocessing module, a functional network construction module, a graph conversion module and a dual-stream graph convolutional neural network module.
The data preprocessing module is used for preprocessing the electroencephalogram signals: the raw data are filtered and down-sampled to remove noise and power-line interference, and down-sampling reduces the amount of data to be processed. The preprocessing comprises 0.5-100 Hz filtering, 50 Hz notch filtering and resampling to 200 Hz. All of the above preprocessing operations are performed in MATLAB 2016b.
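The original preprocessing was performed in MATLAB; as an equivalent illustration, the SciPy sketch below applies the same chain (band-pass filter, 50 Hz notch, resampling to 200 Hz). The filter order and the function and argument names are assumptions, and the default passband follows the values stated in this paragraph.

```python
import numpy as np
from scipy import signal

def preprocess(raw, fs_in=1000, fs_out=200, band=(0.5, 100.0)):
    """raw: (n_channels, n_samples) array sampled at fs_in Hz."""
    # zero-phase Butterworth band-pass filter
    b, a = signal.butter(4, band, btype="bandpass", fs=fs_in)
    x = signal.filtfilt(b, a, raw, axis=-1)
    # 50 Hz notch filter to suppress power-line interference
    b_n, a_n = signal.iirnotch(50.0, Q=30.0, fs=fs_in)
    x = signal.filtfilt(b_n, a_n, x, axis=-1)
    # resample from fs_in to fs_out
    n_out = int(x.shape[-1] * fs_out / fs_in)
    return signal.resample(x, n_out, axis=-1)
```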
The functional network construction module is used for cutting the sample data into a number of time segments from different anesthesia stages, computing the correlation between channels with the phase lag index method, constructing the adjacency matrix to obtain the network topology graph of the key brain region, and computing the mean absolute amplitude of the signals as the graph node features.
The graph conversion module is used for converting the adjacency matrix of each network topology graph sample into a dual graph, turning the edge weights into node features, to which graph convolution is sensitive, so that different graphs share the same adjacency matrix and the spectral graph convolution classification method can be applied directly; at the same time, the original node features are retained and a new fully connected adjacency matrix is constructed as the data of the other stream.
The dual-stream graph convolutional neural network module: model 1 is used to extract the edge-weight information and model 2 to extract the node-feature information, and the class probabilities output by the two models are added to obtain the prediction result. The module has a dual-stream structure whose two branches learn the two kinds of graph data produced by the graph conversion module; graph features are extracted with spectral graph convolution, similar nodes are aggregated by graph coarsening and fast pooling to reduce the amount of computation, and finally a softmax layer outputs the prediction probabilities of the different anesthesia states. Model fusion is realized by adding the prediction probabilities of the two models, and the summed probabilities are used to predict the test set.
Table 1 lists the parameters of each layer of model 1 of the dual-stream graph convolutional neural network module, and Table 2 lists the parameters of each layer of model 2, where O denotes the number of classes of the anesthesia depth classification task and the remaining symbols denote the number of filters of each graph convolution layer.
Figure 84563DEST_PATH_IMAGE044
Figure 531724DEST_PATH_IMAGE045
The structure of the dual-stream graph convolutional neural network built in the invention is described as follows:
In the GCN models of the invention, the graph dimension is unchanged after a graph convolution layer, while each max-pooling layer halves the graph dimension, meaning that an N × N Laplacian matrix becomes N/2 × N/2 after a max-pooling layer. For an adjacency matrix of size 120 × 120, a 6-layer pooling structure of 64-32-16-8-4-2-1 or a 3-layer pooling structure of 120-60-30-15 can be used; the invention chooses the former. For an adjacency matrix of size 16 × 16, the invention chooses a 4-layer pooling structure of 16-8-4-2-1.
The input adjacency matrix describes the graph structure. A coarsening operation on the adjacency matrix yields a multilevel series of coarsened matrices; at the same time, the original data are rearranged and fast-pooled according to these coarsened matrices, reorganized into 3-D data by the rearrangement relation returned by the coarsening, and fed into the network for convolution.
The graph convolution layers learn the features of the graph data, and the pooling layers reduce the data dimensionality and aggregate similar nodes. After the two models have been trained, the trained models are used to test the test set and obtain the predicted classes.
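Because the coarsening reorders the nodes so that the two members of each cluster sit next to each other (with fake nodes added where needed), the fast pooling of a graph signal reduces to 1-D max pooling with stride 2, as in this minimal sketch (illustrative):

```python
import numpy as np

def graph_max_pool(x):
    """x: (N,) reordered graph signal with N even -> (N/2,) pooled signal."""
    return x.reshape(-1, 2).max(axis=1)   # max over each pair of clustered nodes
```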
Table 3 shows the accuracy of model 1, model 2 and the combined dual-stream model in predicting the test set. It can be seen that model 1 and model 2 each achieve quite good accuracy, and that combining the two models achieves an even better prediction result, which shows that the two models learn distinct, discriminative features.
Fig. 8 shows the confusion matrices and Fig. 9 the ROC curves on the test set, covering the evaluation of model 1, model 2 and the dual-stream model. The test set evaluates the generalization ability of the final model, and the three-class accuracy obtained by the invention on the test set reaches 95.4%.
The training and testing of the dual-stream graph convolutional neural network model are carried out in a Python 3.6 environment with TensorFlow 1.13.1.
Finally, it should be noted that the above detailed description is intended only to illustrate the technical solution of this patent and not to limit it. Although the patent has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution of the patent may be modified or equivalently replaced without departing from its spirit and scope, and such modifications are covered by the claims of this patent.

Claims (10)

1. An anesthesia depth monitoring method based on a graph convolutional neural network, characterized in that the method comprises the following steps:
1) collecting electroencephalogram signals of a plurality of channels and preprocessing the original signals;
2) cutting a number of time slices from different anesthesia stages as data samples, calculating the phase lag index (PLI), and calculating an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of each network topology graph into a dual graph, the converted dual graphs all having the same edge connections and the edge-weight information being carried by the node features of the dual graph, the graph sample before conversion being a weighted graph constructed from the phase lag index; at the same time retaining the original node features and constructing a fully connected matrix to represent their topology;
4) constructing a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data, and finding a common adjacency matrix for each of the two streams; of the two streams, one is the weighted graph data constructed from the phase lag index and the other is a fully connected adjacency matrix retaining the original node features;
5) inputting the two streams of graph data into the two models of the dual-stream graph convolutional neural network respectively, performing graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and outputting the predicted value for each anesthesia stage through a fully connected layer;
6) adding, class by class, the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and outputting the class with the largest summed predicted value as the predicted anesthesia stage.
2. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, wherein the electrocorticogram signal in step 1) is a 16-channel ECoG signal covering the frontal-parietal region of the subject's brain; the preprocessing comprises 0.1-100 Hz filtering, 50 Hz notch filtering and resampling to 200 Hz.
3. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, characterized in that the phase lag index PLI is calculated by the following method:
let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$; the Hilbert transform is used to form the analytic signal of each channel, whose argument gives the instantaneous phase $\varphi_i(t)$:

$$z_i(t) = x_i(t) + j\,\hat{x}_i(t), \qquad \varphi_i(t) = \arg z_i(t), \qquad i = 1, 2$$

where $\hat{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and $j$ is the imaginary unit:

$$\hat{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,\mathrm{d}\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time and $\tau$ is the integration variable;

the relative phase locking between the two channels is computed as

$$z(t) = z_1(t)\,z_2^{*}(t)$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$;

the PLI value is then computed as

$$\mathrm{PLI} = \bigl|\bigl\langle \operatorname{sign}\bigl(\operatorname{Im}\, z(t)\bigr)\bigr\rangle\bigr|$$
PLI ranges between 0 and 1, with 0 indicating no phase lock between the two channels and 1 indicating perfect phase coupling between the two channels.
4. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, characterized in that the dual-stream graph convolutional neural network constructed in step 4) adopts the spectral-domain graph convolution method (GCN), extending graph convolution into the graph frequency domain via the Fourier transform and filtering the signal with a filter.
5. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, characterized in that in step 5) both models of the dual-stream graph convolutional neural network use spectral graph convolution with a convolution kernel approximated by Chebyshev polynomials, together with graph coarsening and fast pooling based on the Graclus multilevel clustering algorithm.
6. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 5, characterized in that the Graclus multilevel clustering algorithm uses a greedy algorithm to compute successive coarsened versions of the graph so as to minimize the spectral clustering objective.
7. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, characterized in that in step 2) the mean absolute amplitude of each channel signal in each time slice is calculated as the node feature.
8. The anesthesia depth monitoring method based on a graph convolutional neural network according to claim 1, characterized in that the data samples in step 2) are randomly divided into a training set, a validation set and a test set at a ratio of 8:1:1, the training set being used to train the graph neural network model, the validation set being used to tune the model's hyper-parameters and give a preliminary evaluation of the model, and the test set being used to evaluate the generalization ability of the final model.
9. An anesthesia depth monitoring system based on a graph convolutional neural network, characterized in that the system comprises a data preprocessing module, a functional network construction module, a graph conversion module and a dual-stream graph convolutional neural network module;
the data preprocessing module: used for preprocessing the cortical electroencephalogram signals;
the functional network construction module: used for cutting the sample data into a number of time segments from different anesthesia stages, calculating the phase lag index PLI, and calculating an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
the graph conversion module: used for converting the adjacency matrix of each network topology graph sample into a dual graph, the converted dual graphs all having the same edge connections, the edge-weight information being carried by the node features of the dual graph, and the graph sample before conversion being a weighted graph constructed from the phase lag index;
the dual-stream graph convolutional neural network module: model 1 is used to extract the edge-weight information and model 2 to extract the node-feature information, and the class probabilities output by the two models are added to obtain the prediction result.
10. A computer-readable storage medium in which a computer program is stored, which, when executed by a processor, carries out the method of any one of claims 1 to 8.
CN202111346082.1A 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network Active CN113768474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346082.1A CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111346082.1A CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Publications (2)

Publication Number Publication Date
CN113768474A 2021-12-10
CN113768474B CN113768474B (en) 2022-03-18

Family

ID=78873958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111346082.1A Active CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN113768474B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114557708A (en) * 2022-02-21 2022-05-31 天津大学 Device and method for detecting somatosensory stimulation consciousness based on electroencephalogram dual-feature fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19829018A1 (en) * 1998-06-30 2000-01-13 Markus Lendl System for the control of the anesthesia of a patient during surgery
US20040010203A1 (en) * 2002-07-12 2004-01-15 Bionova Technologies Inc. Method and apparatus for the estimation of anesthetic depth using wavelet analysis of the electroencephalogram
WO2017001495A1 (en) * 2015-06-29 2017-01-05 Koninklijke Philips N.V. Optimal drug dosing based on current anesthesia practice
CN110680285A (en) * 2019-10-29 2020-01-14 张萍萍 Anesthesia degree monitoring device based on neural network
CN111091712A (en) * 2019-12-25 2020-05-01 浙江大学 Traffic flow prediction method based on cyclic attention dual graph convolution network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19829018A1 (en) * 1998-06-30 2000-01-13 Markus Lendl System for the control of the anesthesia of a patient during surgery
US20040010203A1 (en) * 2002-07-12 2004-01-15 Bionova Technologies Inc. Method and apparatus for the estimation of anesthetic depth using wavelet analysis of the electroencephalogram
WO2017001495A1 (en) * 2015-06-29 2017-01-05 Koninklijke Philips N.V. Optimal drug dosing based on current anesthesia practice
CN110680285A (en) * 2019-10-29 2020-01-14 张萍萍 Anesthesia degree monitoring device based on neural network
CN111091712A (en) * 2019-12-25 2020-05-01 浙江大学 Traffic flow prediction method based on cyclic attention dual graph convolution network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114557708A (en) * 2022-02-21 2022-05-31 天津大学 Device and method for detecting somatosensory stimulation consciousness based on electroencephalogram dual-feature fusion

Also Published As

Publication number Publication date
CN113768474B (en) 2022-03-18

Similar Documents

Publication Publication Date Title
Wen et al. Deep convolution neural network and autoencoders-based unsupervised feature learning of EEG signals
Huang et al. S-EEGNet: Electroencephalogram signal classification based on a separable convolution neural network with bilinear interpolation
CN110969108B (en) Limb action recognition method based on autonomic motor imagery electroencephalogram
CN110353702A (en) A kind of emotion identification method and system based on shallow-layer convolutional neural networks
Göksu BCI oriented EEG analysis using log energy entropy of wavelet packets
Miao et al. A spatial-frequency-temporal optimized feature sparse representation-based classification method for motor imagery EEG pattern recognition
CN113768519B (en) Method for analyzing consciousness level of patient based on deep learning and resting state electroencephalogram data
CN113128552A (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
Chen et al. Epilepsy classification for mining deeper relationships between EEG channels based on GCN
CN113768474B (en) Anesthesia depth monitoring method and system based on graph convolution neural network
Vallabhaneni et al. Deep learning algorithms in eeg signal decoding application: a review
Wang et al. Multiband decomposition and spectral discriminative analysis for motor imagery BCI via deep neural network
Ranjani et al. Classifying the autism and epilepsy disorder based on EEG signal using deep convolutional neural network (DCNN)
Sun et al. A novel complex network-based graph convolutional network in major depressive disorder detection
Islam et al. Virtual image from EEG to recognize appropriate emotion using convolutional neural network
Ma et al. A feature extraction algorithm of brain network of motor imagination based on a directed transfer function
Meng et al. Sparse representation-based classification with two-dimensional dictionary optimization for motor imagery EEG pattern recognition
Wu et al. A multi-stream deep learning model for EEG-based depression identification
CN113558637A (en) Music perception brain network construction method based on phase transfer entropy
CN115270847A (en) Design decision electroencephalogram recognition method based on wavelet packet decomposition and convolutional neural network
Li et al. Enhancing P300 based character recognition performance using a combination of ensemble classifiers and a fuzzy fusion method
CN112084935B (en) Emotion recognition method based on expansion of high-quality electroencephalogram sample
Wu et al. Eeg-based depression identification using a deep learning model
Liu et al. EEG classification algorithm of motor imagery based on CNN-Transformer fusion network
GÜl et al. Automated pre-seizure detection for epileptic patients using machine learning methods

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant