CN113768474B - Anesthesia depth monitoring method and system based on graph convolution neural network - Google Patents
Anesthesia depth monitoring method and system based on graph convolution neural network
- Publication number
- CN113768474B (application CN202111346082.1A)
- Authority
- CN
- China
- Legal status
- Active
Classifications
- A61B5/4821—Determining level or depth of anaesthesia
- A61B5/369—Electroencephalography [EEG]
- A61B5/37—Intracranial electroencephalography [IC-EEG], e.g. electrocorticography [ECoG]
- A61B5/7225—Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification involving training the classification device
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06F2218/08—Feature extraction (pattern recognition adapted for signal processing)
- G06F2218/12—Classification; Matching
Abstract
The invention discloses an anesthesia depth monitoring method and system based on a graph convolution neural network. The system comprises: a data preprocessing module, which preprocesses the electroencephalogram signals of the cerebral cortex; a functional network construction module, which calculates the phase lag index (PLI) of the sample data and computes an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages; a graph conversion module, which performs the dual graph conversion, turning each graph sample into a weighted graph constructed from the phase lag index, and constructs a new graph for the node features; and a dual-stream graph convolution neural network module, in which model 1 extracts the edge weight information, model 2 extracts the node feature information, and the prediction probabilities output by the two models are added class by class to obtain the prediction result. The invention identifies new features that distinguish different anesthesia states; the classification accuracy over the awake, moderate anesthesia and deep anesthesia states reaches 95.4%, so the different states of anesthesia can be monitored well.
Description
Technical Field
The invention relates to the technical fields of biomedical signal processing and deep learning, and in particular to an anesthesia depth monitoring method and system based on a graph convolution neural network.
Background
In general anesthesia surgery, an anesthesiologist needs to monitor the anesthesia state of the patient in real time. An anesthesia monitor helps the anesthesiologist track the patient's depth of anesthesia and avoid unexpected intraoperative awareness. If anesthesia is too deep, the patient may recover slowly after the operation and may even suffer adverse sequelae of the nervous system; if anesthesia is too shallow, the patient may awaken during the operation and be left with psychological trauma. Real-time monitoring of the patient's depth of anesthesia during surgery is therefore very important.
Currently, the clinical techniques commonly used to monitor the depth of anesthesia include the EEG bispectral index (BIS), auditory evoked potentials (AEP), entropy indices, and the like. These techniques monitor the depth of anesthesia by processing EEG signals recorded from the scalp surface, and have the advantage of being non-invasive and readily available. The currently popular anesthesia depth monitoring techniques still have some drawbacks, however: BIS is not effective for isoflurane-induced anesthesia, varies widely across individuals, and its algorithm is not public. It is therefore necessary to explore more stable anesthesia depth monitoring algorithms.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by constructing a brain functional network from electroencephalogram signals and, in combination with a dual-stream graph convolution neural network, provides an anesthesia depth monitoring method and system based on a graph convolution neural network.
In order to achieve the above object, the invention provides an anesthesia depth monitoring method based on a graph convolution neural network, which is characterized in that the method comprises the following steps:
1) collecting electroencephalogram signals of a plurality of channels and preprocessing the raw signals;
2) intercepting a plurality of time slices from the different anesthesia stages as data samples, calculating the phase lag index (PLI), and computing an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, wherein the anesthesia stages comprise an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of each network topology graph into a dual graph; the edge connections of all converted dual graphs are identical, and the edge weight information is preserved in the node features of the dual graph. The graph sample before conversion is a weighted graph constructed from the phase lag index; its node features are retained as well, and a fully connected matrix is constructed to represent the topology of the node features;
4) constructing a dual-stream graph convolution neural network, dividing each graph sample into two streams of graph data and finding the two common adjacency matrices of the two streams; of the two streams, one is the weighted graph constructed from the phase lag index and the other is a fully connected graph retaining the original node features;
5) inputting the two streams of graph data into the two models of the dual-stream graph convolution neural network respectively, performing graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and outputting the predicted value for each anesthesia stage through a fully connected layer;
6) adding the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolution neural network class by class, and outputting the category with the largest predicted value as the predicted anesthesia stage.
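Step 3) can be sketched as follows: for a complete weighted graph over 16 channels, each edge becomes a node of the dual (line) graph, the edge weight becomes that node's feature, and two dual nodes are connected when their edges share an endpoint. A minimal NumPy sketch (the function name and data layout are illustrative, not taken from the patent):

```python
import numpy as np

def to_dual_graph(adj):
    """Convert a weighted complete graph into its dual (line) graph.

    Each edge (i, j), i < j, of the original graph becomes a node of the
    dual graph whose feature is the edge weight; two dual nodes are
    connected when the corresponding edges share an endpoint.
    """
    n = adj.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(edges)                                        # n*(n-1)/2 for a complete graph
    node_feats = np.array([adj[i, j] for i, j in edges])  # edge weights -> node features
    dual_adj = np.zeros((m, m))
    for a, (i, j) in enumerate(edges):
        for b, (k, l) in enumerate(edges):
            if a != b and len({i, j} & {k, l}) > 0:
                dual_adj[a, b] = 1.0                      # edges sharing an endpoint are adjacent
    return node_feats, dual_adj
```

Because every sample is a complete graph over the same 16 channels, `dual_adj` is identical for all samples, which is what allows a spectral graph convolution (which assumes a fixed graph) to be applied to the edge-weight stream.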
Preferably, the electroencephalogram signal in step 1) is a 16-channel ECoG (electrocorticography) signal covering the frontal-parietal region of the subject's brain; the preprocessing comprises 0.1-100 Hz band-pass filtering, 50 Hz notch filtering and resampling to 200 Hz.
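This preprocessing chain can be sketched with SciPy (a hedged sketch: the patent does not specify filter types or orders, so the Butterworth band-pass and IIR notch below are illustrative choices):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, resample, sosfiltfilt

def preprocess(raw, fs=1000, fs_new=200):
    """0.1-100 Hz band-pass, 50 Hz notch, then resample to fs_new.

    raw: array of shape (n_channels, n_samples) at the original rate fs.
    """
    sos = butter(4, [0.1, 100.0], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, raw, axis=-1)           # zero-phase band-pass
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)      # 50 Hz power-line notch
    x = filtfilt(b, a, x, axis=-1)
    n_new = int(x.shape[-1] * fs_new / fs)
    return resample(x, n_new, axis=-1)           # FFT-based resampling to fs_new
```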
Preferably, the phase lag index PLI is calculated by:
setting the signal sequence of two channels asAndusing Hilbert transform to calculate instantaneous phase:
Wherein,to representA Hilbert (Hilbert) transform,i(ii) a signal of either 1 or 2,jfor imaginary symbols:
wherein P.V. represents the Cauchy's principal value, t is time,τis an integral variable;
the relative lock between the two channels is calculated as:
in the formula,z 2*(t) Is composed ofz 2(t) The conjugate complex number of (a);
the PLI value is calculated as follows:
PLI ranges between 0 and 1, with 0 indicating no phase lock between the two channels and 1 indicating perfect phase coupling between the two channels.
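Under these definitions, PLI can be computed with SciPy's Hilbert transform; since sign(Im(z1·z2*)) equals sign(sin(φ1−φ2)), no explicit phase unwrapping is needed (a minimal sketch; the function name is illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def pli(x1, x2):
    """Phase Lag Index between two signal sequences of equal length."""
    z1, z2 = hilbert(x1), hilbert(x2)   # analytic signals of the two channels
    # sign of Im(z1 * conj(z2)) = sign of sin(phase difference)
    return np.abs(np.mean(np.sign(np.imag(z1 * np.conj(z2)))))
```

For identical signals the phase difference is zero everywhere, so the PLI is 0; a constant nonzero phase lag gives a PLI close to 1.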
Preferably, the dual-stream graph convolution neural network constructed in step 4) adopts the spectral-domain graph convolution method (GCN), which extends convolution to the graph frequency domain through the graph Fourier transform and filters the signal with a spectral filter.
Preferably, in step 5), both models of the dual-stream graph convolution neural network adopt a spectral graph convolution method that approximates the convolution kernel with Chebyshev polynomials, together with graph coarsening and fast pooling based on the Graclus multilevel clustering algorithm.
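The Chebyshev approximation replaces the spectral filter with y = Σₖ θₖ Tₖ(L̃)x, where L̃ is the normalized graph Laplacian rescaled to the interval [−1, 1]. A minimal NumPy sketch of one such filter (in the actual model the coefficients θₖ are learned and the operation is applied per feature channel):

```python
import numpy as np

def cheb_graph_filter(adj, x, theta):
    """Apply a Chebyshev-polynomial spectral graph filter.

    adj:   (n, n) symmetric weighted adjacency matrix (no isolated nodes)
    x:     (n,) signal on the graph nodes
    theta: (K,) Chebyshev coefficients (the learnable parameters)
    """
    n = adj.shape[0]
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))                 # assumes positive degrees
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    lmax = np.linalg.eigvalsh(lap).max()
    lap_t = 2.0 * lap / lmax - np.eye(n)                        # spectrum rescaled to [-1, 1]
    t_prev, t_cur = x, lap_t @ x                                # T_0 x and T_1 x
    y = theta[0] * t_prev
    if len(theta) > 1:
        y = y + theta[1] * t_cur
    for k in range(2, len(theta)):
        t_prev, t_cur = t_cur, 2.0 * (lap_t @ t_cur) - t_prev   # T_k = 2 L T_{k-1} - T_{k-2}
        y = y + theta[k] * t_cur
    return y
```

Each term is a sparse matrix-vector product, so the filter is K-localized on the graph and avoids an explicit eigendecomposition at inference time.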
Preferably, the Graclus multilevel clustering algorithm computes successive coarsened versions of the graph with a greedy algorithm that minimizes a spectral clustering objective.
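One level of such greedy coarsening can be sketched as follows (an illustrative sketch, not the patented implementation: each unmatched node is paired with the unmatched neighbour maximizing the local normalized-cut measure w(i,j)·(1/dᵢ + 1/dⱼ)):

```python
import numpy as np

def coarsen_once(adj, seed=0):
    """Greedily pair nodes to build one coarsening level.

    Returns a cluster index for every node; paired nodes share an index,
    and nodes without an unmatched neighbour become singleton clusters.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    degree = adj.sum(axis=1)                  # assumes positive degrees
    matched = np.full(n, False)
    cluster = np.full(n, -1)
    next_id = 0
    for i in rng.permutation(n):
        if matched[i]:
            continue
        # local normalized-cut gain for every still-unmatched neighbour of i
        gains = np.where(matched | (np.arange(n) == i) | (adj[i] == 0),
                         -np.inf,
                         adj[i] * (1.0 / degree[i] + 1.0 / degree))
        j = int(np.argmax(gains))
        if np.isfinite(gains[j]):
            cluster[i] = cluster[j] = next_id
            matched[i] = matched[j] = True
        else:
            cluster[i] = next_id              # singleton: no unmatched neighbour left
            matched[i] = True
        next_id += 1
    return cluster
```

Repeating this per level roughly halves the node count each time, which is what enables the fast pooling over fixed-size clusters.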
Preferably, in step 2) the mean absolute amplitude of each channel signal of each time slice is calculated and used as the node features.
Preferably, the data samples in step 2) are randomly divided into a training set, a validation set and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the model hyperparameters and preliminarily evaluate the model, and the test set is used to evaluate the generalization ability of the final model.
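The 8:1:1 split can be sketched as follows (an illustrative helper, assuming samples are indexed 0..n−1):

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """Randomly split sample indices 8:1:1 into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once, then slice
    n_train = int(n_samples * 0.8)
    n_val = int(n_samples * 0.1)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```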
The invention also provides an anesthesia depth monitoring system based on the graph convolution neural network, comprising a data preprocessing module, a functional network construction module, a graph conversion module and a dual-stream graph convolution neural network module;
the data preprocessing module: used for preprocessing the electroencephalogram signals of the cerebral cortex;
the functional network construction module: used for intercepting the sample data into a plurality of time segments of the different anesthesia stages, calculating the phase lag index (PLI), and computing an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, wherein the anesthesia stages comprise an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
the graph conversion module: used for converting the adjacency matrix of each network topology graph sample into a dual graph; the edge connections of all converted dual graphs are identical, and the edge weight information is preserved in the node features of the dual graph. The graph sample before conversion is a weighted graph constructed from the phase lag index; in addition, the original node features are retained and used to construct a new fully connected graph;
the dual-stream graph convolution neural network module: model 1 extracts the edge weight information, model 2 extracts the node feature information, and the prediction probabilities output by the two models are added class by class to obtain the prediction result.
The present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the aforementioned anesthesia depth monitoring method based on a graph convolution neural network.
The beneficial effects of the invention include:
1) The traditional spectral graph convolution method can achieve excellent graph classification performance on the basis of an adjacency matrix and is sensitive to node feature information, but it cannot process graphs with different network topology structures. The dual graph conversion of the invention gives all samples identical edge connections, so that spectral graph convolution becomes applicable.
2) A graph comprises both edge weights and node features. Through the dual graph conversion, the edge weights of the graph become the node feature input of the first model. For the node features of the graph, the invention provides a second graph convolution model: after the edge weight information has been extracted, the original node information is retained and the relationships among the nodes are considered equal; therefore, for the second graph convolution model, the invention uses a fully connected matrix with self-connections removed as the adjacency matrix.
3) The invention designs a dual-stream graph convolution neural network structure and adopts a spectral graph convolution method for graph classification, wherein one stream extracts the edge weight information and the other extracts the node feature information; after the two models are trained, their prediction probabilities are summed to predict the test set;
4) The invention identifies new features that distinguish different anesthesia states and applies the combination of brain networks and graph neural networks to anesthesia depth monitoring. The classification accuracy over the awake, moderate anesthesia and deep anesthesia states reaches 95.4%, the different states of anesthesia can be monitored well, and a new method is provided for clinical anesthesia monitoring.
The approach proposed by the invention is applicable not only to the anesthesia data but also to electroencephalogram signal classification in other scenarios.
Drawings
FIG. 1 is a block diagram of the system of the present invention.
Fig. 2 shows the 16 channels selected by the invention, covering the prefrontal-parietal lobe of the macaque.
Fig. 3 is an example of computing the adjacency matrix from the phase lag index.
Fig. 4 is the network topology diagram of the prefrontal-parietal lobe of the macaque in different states (from left to right: awake state, moderate anesthesia and deep anesthesia).
Fig. 5 shows the adjacency matrices of the prefrontal-parietal lobe of the macaque in different states (from left to right: awake state, moderate anesthesia and deep anesthesia).
Fig. 6 illustrates an example of the dual graph conversion.
Fig. 7 shows the adjacency matrices for the two models.
FIG. 8 shows the confusion matrices of model 1, model 2 and the dual-stream model predicting the test set.
FIG. 9 shows the ROC curves of model 1, model 2 and the dual-stream model predicting the test set.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
The invention provides an anesthesia depth monitoring method based on a graph convolution neural network, which comprises the following steps:
1) collecting electroencephalogram signals of a plurality of channels and preprocessing the raw signals;
2) intercepting a plurality of time segments from the different anesthesia stages as data samples, calculating the phase lag index (PLI), computing an adjacency matrix for each sample to obtain network topology graph samples for the different anesthesia stages, and calculating the mean absolute amplitude of each time segment as the node features of the graph samples, wherein the anesthesia stages comprise an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of each network topology graph into a dual graph; the edge connections of all converted dual graphs are identical, and the edge weight information is preserved in the node features of the dual graph. The graph sample before conversion is a weighted graph constructed from the phase lag index; its node features are retained as well, and a fully connected matrix is constructed to represent the topology of the node features;
4) constructing a dual-stream graph convolution neural network, dividing each graph sample into two streams of graph data and finding the two common adjacency matrices of the two streams; of the two streams, one is the weighted graph constructed from the phase lag index and the other is a fully connected graph retaining the original node features;
5) inputting the two streams of graph data into the two models of the dual-stream graph convolution neural network respectively, performing graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and outputting the predicted value for each anesthesia stage through a fully connected layer;
6) adding the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolution neural network class by class, and outputting the category with the largest predicted value as the predicted anesthesia stage.
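Step 6) amounts to class-wise addition of the two streams' predicted probabilities followed by an argmax over classes; a minimal sketch (the class ordering is an assumption for illustration):

```python
import numpy as np

def fuse_predictions(prob_edge, prob_node):
    """Fuse the two streams by adding per-class probabilities, then argmax.

    prob_edge: (n_samples, 3) class probabilities from the edge-weight model
    prob_node: (n_samples, 3) class probabilities from the node-feature model
    Assumed class order: 0 = awake, 1 = moderate anesthesia, 2 = deep anesthesia.
    """
    return np.argmax(prob_edge + prob_node, axis=1)
```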
The following is a detailed description of the implementation of each step:
To realize the proposed invention, the depth of anesthesia of macaques under ketamine-medetomidine induction is explored using macaque anesthesia experiment data from a public database (http://neuro.). ECoG signals are electrocorticogram signals; like EEG signals, they record the electrical activity between pairs of electrodes, but ECoG is an invasive brain-computer interface recorded from the cortical surface and offers higher spatial resolution and signal quality than EEG. Each experiment comprises a pre-anesthesia awake stage, an anesthesia induction stage, an anesthesia maintenance stage, an anesthesia recovery stage and a post-anesthesia awake stage. Time segments of the same length (1 s) are intercepted from the different stages, a brain topology network is constructed with a functional connectivity method (the phase lag index), and the mean absolute amplitude of each channel signal of each time segment is calculated as the node features, yielding graph samples for three stages (awake, moderate anesthesia and deep anesthesia). The data are randomly divided into a training set, a validation set and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the model hyperparameters and preliminarily evaluate the model, and the test set is used to evaluate the generalization ability of the final model.
First, the macaque anesthesia experiment, which comprises three stages, is described in detail.
11) A waking stage:
(a) AwakeEyeopen-START/END: the macaque rests with its eyes open.
(b) AwakeEyeClose-START/END: with its eyes covered, the macaque rests with its eyes closed.
12) An anesthesia stage:
(a) AnesticDrugInjection: intramuscular injection of ketamine-medetomidine.
(b) Anesthetized-START/END: the macaque is in the loss-of-consciousness (LOC) state. A macaque is considered to enter the LOC state when it no longer responds to its hand being manipulated or to its nostril being touched with a cotton swab. In addition, LOC can be confirmed by observing slow-wave oscillations in the neural signals.
13) A recovery period:
(a) AntagonistInjection: injection of atipamezole to let the macaque recover from anesthesia.
(b) RecoveryEyeclosed-START/END: the point at which the slow-wave oscillations disappear from the neural signal is taken as the start of the eyes-closed recovery period, during which the macaque rests calmly with its eyes closed.
(c) RecoveryEyeOped-START/END: after the eye mask is removed, the macaque sits calmly with its eyes open.
The above is the task design of the anesthesia experiment.
The experimental data consist of 5 experiments on 2 macaques; the signals cover all stages before, during and after anesthesia, with a sampling rate of 1 kHz. A 16-channel ECoG signal covering the prefrontal-parietal lobe of the macaque is selected, as shown in fig. 2, where the black dots mark the signal channels selected by the invention.
In step 2), data samples are intercepted, the PLI is calculated, and the node features of the network topology graph are computed.
21) Intercepting a data sample:
after data are preprocessed, a plurality of 1s segments are uniformly intercepted at each stage of each experiment, 1000 1s segments are respectively intercepted at each experiment in a waking stage, a moderate anesthesia stage and a deep anesthesia stage, and the moderate anesthesia data volume is small, so that the step of the sliding window is 0.1 s.
(a) The awake-stage data are intercepted from the awake stage before anesthesia and the awake stage after anesthesia; the post-anesthesia portion is taken from the late recovery period to ensure that the macaque is in the awake state.
(b) The moderate anesthesia data are taken from the middle of the anesthesia induction period (from anesthetic injection to reaching LOC).
(c) The deep anesthesia data are taken from the anesthesia maintenance period.
In this way, 5 × 3 × 1000 = 15000 data samples are obtained (5 experiments, 3 stages, 1000 samples per stage of each experiment). After resampling, each sample comprises 16 channels and 200 sample points, and each sample corresponds to one network topology graph.
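The segmentation and node-feature computation can be sketched as follows (illustrative names; assumes the recording has already been resampled to 200 Hz):

```python
import numpy as np

def make_samples(ecog, fs=200, win_s=1.0, step_s=0.1, n_windows=1000):
    """Cut 1 s windows with a 0.1 s sliding step and compute node features.

    ecog: (16, n_samples) resampled recording of one stage of one experiment.
    Returns windows of shape (n_windows, 16, win) and, as node features,
    the per-channel mean absolute amplitude of each window (n_windows, 16).
    """
    win, step = int(win_s * fs), int(step_s * fs)
    windows = np.stack([ecog[:, i * step: i * step + win]
                        for i in range(n_windows)])
    node_feats = np.abs(windows).mean(axis=2)   # mean |amplitude| per channel
    return windows, node_feats
```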
22) Calculating PLI to obtain an adjacency matrix;
the correlation between channels is calculated by using a Phase Lag Index (PLI), and the calculation method of the phase lag index comprises the following steps:
Let the signal sequences of the two channels be x₁(t) and x₂(t). The instantaneous phase is obtained from the analytic signal constructed with the Hilbert transform:

z_i(t) = x_i(t) + j·H[x_i(t)]

where H[x_i(t)] denotes the Hilbert transform of x_i(t), i = 1 or 2 indexes the signal, and j is the imaginary unit; the transform is computed as:

H[x_i(t)] = (1/π) · P.V. ∫ x_i(τ)/(t − τ) dτ

where P.V. denotes the Cauchy principal value, t is time, and τ is the integration variable. After the phase of each channel signal is obtained, the relative phase between the two channels can be computed as:

φ_rel(t) = arg( z₁(t)·z₂*(t) / (|z₁(t)|·|z₂(t)|) )

where z₂*(t) is the complex conjugate of z₂(t).

The PLI ranges between 0 and 1, with 0 indicating no phase locking between the two channels and 1 indicating perfect phase coupling. The PLI value is computed as:

PLI = | ⟨ sign(φ_rel(t)) ⟩ |

where ⟨·⟩ denotes the time average over the segment.
After the data samples are obtained, the phase correlation between channels within each sample is computed with the phase lag index formula, yielding for each sample a 16 × 16 adjacency matrix whose values are distributed between 0 and 1. Taking the adjacency matrix of fig. 3 as an example: the 1 s signals (200 points) of node 1 and node 2 yield one correlation value by the phase lag index formula above; this value fills position (1,2) of the adjacency matrix, and the correlation of node 2 with node 1 fills position (2,1), so the resulting adjacency matrix is real and symmetric. Similarly, the correlation value of node 3 with node 15 fills position (3,15) and that of node 15 with node 3 fills position (15,3), and so on until the entire 16 × 16 adjacency matrix is computed.
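The PLI computation above can be sketched as follows (a minimal sketch using `scipy.signal.hilbert`; `sign(sin(Δφ))` implements the sign of the wrapped phase difference, consistent with sign(φ_rel)):

```python
import numpy as np
from scipy.signal import hilbert

def pli_matrix(x):
    """Phase lag index between all channel pairs.

    x: (channels, samples). Returns a symmetric (channels, channels)
    matrix with entries in [0, 1] and a zero diagonal.
    """
    phase = np.angle(hilbert(x, axis=1))   # instantaneous phase per channel
    n = x.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dphi = phase[i] - phase[j]
            # sign(sin(dphi)) == sign of the phase difference wrapped to (-pi, pi]
            A[i, j] = A[j, i] = abs(np.mean(np.sign(np.sin(dphi))))
    return A
```

Two sinusoids with a constant nonzero lag give PLI near 1; identical signals give PLI 0, since the consistent-lag statistic discards zero-lag (volume-conduction-like) coupling.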
FIG. 4 shows the prefrontal-parietal brain network structure of the macaque Chibi in three different stages; fig. 5 shows the adjacency matrices corresponding to the macaque Chibi topological networks in the three stages (to make the distinction between stages more visible, the adjacency matrices drawn here include self-connections).
23) The absolute amplitude of each channel signal in each segment is averaged to give the node feature of each network topology graph; each node corresponds to one feature value.
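The node-feature computation is a one-liner (illustrative sketch):

```python
import numpy as np

def node_features(epoch):
    """Node feature of each graph node: the mean absolute amplitude of the
    corresponding channel over the 1 s segment (one value per channel)."""
    return np.abs(epoch).mean(axis=1)  # (channels,)
```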
The dual-graph conversion of step 3) is illustrated in fig. 6: the idea is to turn edges into nodes and nodes into edges; whenever two original edges share a node, an edge is added between the corresponding nodes of the resulting dual graph.
As in fig. 6, the original graph contains edges 01, 02, 03, 12, 13, 23, which become the nodes of the right-hand graph, with the edge weights becoming node features. Furthermore, edges 01 and 02 share node 0; 01 and 03 share node 0; 02 and 03 share node 0; and so on. Every pair of edges with a common node is connected in the dual graph, and the connection value is taken as 1.
Based on this dual-graph idea, the invention converts each adjacency matrix (16 × 16) obtained from the phase lag index into a dual graph (120 × 120), where 120 = (16 × 16 − 16)/2. The converted dual graphs all have the same edge connections, namely the adjacency matrix shown in FIG. 7 (left), and the original edge weights are expressed as node features of the new graph.
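The dual-graph (line-graph) conversion can be sketched as follows (a minimal sketch; the fixed upper-triangular edge ordering and the function name `to_line_graph` are assumptions of this illustration):

```python
import numpy as np
from itertools import combinations

def to_line_graph(A):
    """Dual (line-graph) conversion: each of the n(n-1)/2 edges of the
    original graph becomes a node; two new nodes are linked (weight 1)
    when their edges share an endpoint; the original edge weights
    become node features. For n = 16 this yields a 120-node graph."""
    n = A.shape[0]
    edges = list(combinations(range(n), 2))        # fixed edge ordering
    feats = np.array([A[i, j] for i, j in edges])  # edge weight -> node feature
    m = len(edges)
    L = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            if set(edges[a]) & set(edges[b]):      # common endpoint
                L[a, b] = L[b, a] = 1.0
    return L, feats
```

For n = 16 channels this returns a 120 × 120 adjacency matrix in which every row has 2 × (16 − 2) = 28 connections, since each edge shares an endpoint with 14 other edges at each of its two nodes; this common structure is what lets all samples share one adjacency matrix.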
Meanwhile, the node features previously computed from the amplitude information are retained as the input of the other graph convolution neural network model, i.e. the second GCN model, which filters the retained node features based on a 16 × 16 adjacency matrix. Since the edge information between nodes has been extracted and input into the first GCN model, the relationships among the nodes carrying the retained node information can be regarded as equal, so a fully connected matrix without self-connections is taken as its adjacency matrix.
Step 4), constructing a dual-flow graph convolutional neural network:
the double-flow graph convolution neural network comprises two models: GCN model I (hereinafter model 1) uses a 6-layer graph convolution-pooling structure to extract edge weight information, i.e. graph data of size 120 × 120; GCN model II (hereinafter model 2) uses a 4-layer graph convolution-pooling structure to extract node feature information, i.e. graph data of size 16 × 16.
In computer vision, CNNs effectively extract features from images whose pixels are arranged on a regular grid, i.e. Euclidean data. However, much scientific data — such as social networks and protein structures — is non-Euclidean. To apply machine learning methods to such non-Euclidean data, GCNs have become a research focus.
The initial difficulty in applying convolution to graph-structured data is parameter sharing: an image with regularly arranged elements satisfies translation invariance, so convolution kernels of the same size can be defined over the whole image; but when each graph node has a different number of neighbors, a fixed-size kernel cannot be applied.
Graph convolution neural networks comprise spectral methods and spatial methods. Facing the parameter sharing problem, spectral methods define convolution in the spectral domain rather than the node domain: the node domain does not satisfy translation invariance and kernels of the same size cannot be defined there, so convolution is defined in the spectral domain and then transformed back to the spatial domain, solving the parameter sharing problem. Spatial methods, in a different line of thinking, first determine the neighbors of each target node, arrange them in order, and select a fixed number of neighbors per node, which also achieves parameter sharing. Both kinds of methods perform well on graph-related tasks. The invention adopts spectral graph convolution to realize graph classification in the two models.
The invention adopts GCN, a method that defines convolution directly in the spectral domain and realizes graph classification based on the adjacency matrix.
Spectral convolution: graph convolution is extended to the frequency domain of the graph via the graph Fourier transform:

g_θ ⋆ x = U g_θ(Λ) Uᵀ x

where U is the eigenvector matrix of the graph Laplacian L. The Laplacian is:

L = D − A

where A is the adjacency matrix, D is the degree matrix, Λ is the diagonal matrix of eigenvalues of L, and Uᵀx is the Fourier transform of the signal on the graph.
To reduce the computation, g_θ(Λ) is approximated by a K-order Chebyshev polynomial, giving the improved convolution kernel:

g_θ(Λ) ≈ Σ_{k=0}^{K} θ_k T_k(Λ̃), with Λ̃ = 2Λ/λ_max − I_N

where λ_max is the largest eigenvalue of the Laplacian L and the θ_k are the Chebyshev coefficients. The filtering operation then becomes:

g_θ ⋆ x ≈ Σ_{k=0}^{K} θ_k T_k(L̃) x, with L̃ = 2L/λ_max − I_N

where T_k(L̃) is the k-th order Chebyshev polynomial of the scaled Laplacian L̃. Writing x̄_k = T_k(L̃)x, the recurrence x̄_k = 2L̃ x̄_{k−1} − x̄_{k−2}, with x̄_0 = x and x̄_1 = L̃x, allows the filtered signal to be computed without any eigendecomposition.
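The Chebyshev recurrence can be sketched as a dense NumPy filter (illustrative; an actual GCN layer would use sparse matrices and learn the coefficients θ during training):

```python
import numpy as np

def cheb_filter(X, A, theta):
    """Chebyshev spectral filtering g_theta(L) X of node signals X.

    Uses the scaled Laplacian L~ = 2L/lambda_max - I and the recurrence
    T_k(L~)X = 2 L~ T_{k-1}(L~)X - T_{k-2}(L~)X, so no eigendecomposition
    of L is needed. theta holds the K+1 polynomial coefficients.
    """
    D = np.diag(A.sum(axis=1))
    L = D - A                                  # combinatorial Laplacian
    lmax = np.linalg.eigvalsh(L).max()
    Lt = 2.0 * L / lmax - np.eye(A.shape[0])   # scaled Laplacian
    Tk_minus2, Tk_minus1 = X, Lt @ X           # T_0 X and T_1 X
    out = theta[0] * Tk_minus2
    if len(theta) > 1:
        out = out + theta[1] * Tk_minus1
    for k in range(2, len(theta)):
        Tk = 2.0 * Lt @ Tk_minus1 - Tk_minus2  # Chebyshev recurrence
        out = out + theta[k] * Tk
        Tk_minus2, Tk_minus1 = Tk_minus1, Tk
    return out
```

With theta = [1] the filter reduces to the identity (only the T₀ term), which is a quick sanity check of the recurrence.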
After data preprocessing, stage-wise data interception, functional network construction and the other operations are completed, 30000 graph samples covering the different stages are obtained. The resulting graphs are weighted graphs constructed from the phase lag index; unlike conventional binary graphs, weighted graphs express the brain network topology in greater detail. In addition, to achieve higher classification accuracy, the mean absolute value of each channel's signal amplitude is selected as the node feature of the graph.
After the graph samples are constructed from the topology and node information, they are divided into two streams of graph data, and a common adjacency matrix is found for each stream; this adjacency matrix, serving as the topology of the graphs, is input into the graph convolution neural network to compute the graph Laplacian. The first stream is obtained by the dual-graph conversion, which turns edge weights into node features to which graph convolution is sensitive, with an adjacency matrix of size 120 × 120. The second stream retains the original node features; after the original edge weights are extracted, the relationships among nodes are regarded as equal, so a 16 × 16 fully connected adjacency matrix (without self-connections) is constructed.
In step 5), the two streams of graph data are input into the two models of the double-flow graph convolution neural network. The feature x input to the first-stream GCN is one-dimensional, i.e. 120 × 1, with an input adjacency matrix of size 120 × 120 as shown in fig. 7 (left); the feature x input to the second-stream GCN is also one-dimensional, i.e. 16 × 1, with an input adjacency matrix of size 16 × 16 as shown in fig. 7 (right). The two models use the same convolution and pooling methods, namely spectral graph convolution with Chebyshev-polynomial-approximated kernels and graph coarsening with fast pooling based on the Graclus multilevel clustering algorithm; but because the input feature sizes differ, different convolution-pooling structures are designed.
In step 6), the two models of the double-flow graph convolution neural network each output prediction probabilities. After the softmax classification layer of each model, the probability of each class is output; the trained model 1 and model 2 each predict the test set, model 1 giving probabilities p1_awake, p1_moderate and p1_deep for the awake, moderate-anesthesia and deep-anesthesia states, and model 2 giving p2_awake, p2_moderate and p2_deep. Adding the two models' probabilities per class gives the double-flow network's predicted probabilities p_awake = p1_awake + p2_awake, p_moderate = p1_moderate + p2_moderate, p_deep = p1_deep + p2_deep; the class with the maximum summed probability is the predicted class.
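The late fusion of the two streams reduces to adding the probability vectors and taking the argmax (a sketch; the class ordering in `CLASSES` is an assumption of this illustration):

```python
import numpy as np

CLASSES = ("awake", "moderate", "deep")  # illustrative class order

def fuse_predictions(p1, p2):
    """Late fusion of the double-flow network: add the softmax
    probabilities of model 1 and model 2 per class and return the
    class with the maximum summed probability."""
    p = np.asarray(p1, dtype=float) + np.asarray(p2, dtype=float)
    return CLASSES[int(np.argmax(p))]
```

Summing unnormalized softmax outputs and taking the argmax is equivalent to averaging them, so no renormalization is needed for the decision.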
Based on the above method, the anesthesia depth monitoring system based on the graph convolution neural network of the invention, shown in fig. 1, comprises a data preprocessing module, a functional network construction module, a graph conversion module and a double-flow graph convolution neural network module.
A data preprocessing module: used for preprocessing the electroencephalogram signal, filtering and downsampling the raw data to remove noise and the mains (AC) interference; downsampling reduces the amount of data to process. The preprocessing comprises: 0.5-100 Hz band-pass filtering; 50 Hz notch filtering; resampling to 200 Hz. All preprocessing operations were performed in MATLAB 2016b.
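The same pipeline can be sketched with SciPy instead of MATLAB (illustrative; the filter orders and the zero-phase `filtfilt` choice are assumptions of this sketch, not stated in the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, resample_poly

def preprocess(x, fs=1000, fs_out=200):
    """Preprocessing sketch matching the module: 0.5-100 Hz band-pass,
    50 Hz notch (mains), then resampling from 1 kHz to 200 Hz.

    x: (channels, samples) raw ECoG at fs Hz.
    """
    b, a = butter(4, [0.5, 100.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x, axis=-1)            # zero-phase band-pass
    bn, an = iirnotch(50.0, Q=30.0, fs=fs)
    x = filtfilt(bn, an, x, axis=-1)          # remove 50 Hz mains
    return resample_poly(x, fs_out, fs, axis=-1)  # 1000 Hz -> 200 Hz
```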
A functional network construction module: used for intercepting the sample data into multiple time segments at the different anesthesia stages, calculating the correlation between channels with the phase lag index method, constructing the adjacency matrix to obtain the network topology graph of the key brain area, and calculating the mean absolute value of the signal amplitude as the graph node feature.
A graph conversion module: used for converting the adjacency matrix of each network topology graph sample into a dual graph and turning the edge weights into node features to which graph convolution is sensitive, so that different graphs obtain the same adjacency matrix and the spectral graph convolution classification method can be applied directly; meanwhile, the original node features are retained and a new fully connected adjacency matrix is constructed as the data of the other flow graph.
The double-flow graph convolution neural network module: model 1 extracts edge weight information, model 2 extracts node feature information, and the per-class prediction probabilities output by the two models are added to obtain the prediction result. The module has a double-flow structure whose two streams learn the two kinds of graph data produced by the graph conversion module: graph features are extracted with spectral graph convolution; graph coarsening and fast pooling aggregate similar nodes and reduce computation; finally a softmax layer outputs the predicted probabilities of the different anesthesia states, model fusion is realized by adding the prediction probabilities of the two models, and the summed probabilities are used to predict the test set.
Table 1 shows the per-layer parameters of model 1 of the double-flow graph convolution neural network module and Table 2 those of model 2, where O denotes the number of anesthesia depth classes and the remaining symbols denote the number of filters in each convolution layer.
The structure of the double-flow graph convolution neural network built in the invention is described as follows:
in the GCN models of the invention, a graph convolution layer leaves the graph dimension unchanged, while a max-pooling layer halves it: an N × N Laplacian matrix becomes N/2 × N/2 after a max-pooling layer. For the 120 × 120 adjacency matrix, either a 6-layer pooling structure of 64-32-16-8-4-2-1 or a 3-layer pooling structure of 120-60-30-15 can be used; the invention selects the former. For the 16 × 16 adjacency matrix, the invention selects the 4-layer pooling structure 16-8-4-2-1.
The input adjacency matrix describes the graph structure; coarsening it yields a multi-level coarsened matrix. According to the rearrangement relation returned by coarsening, the original data are rearranged for rapid pooling, reorganized into 3D data, and input into the network for convolution.
The graph convolution layers learn the features of the graph data; the pooling layers reduce the data dimension and aggregate similar nodes. After the two models are trained, the trained models predict the test set to obtain the predicted categories.
Table 3 shows the accuracy of model 1, model 2 and the combined double-flow model on the test set. Model 1 and model 2 each achieve quite good accuracy on their own, and their combination achieves a better prediction effect, which demonstrates the distinctiveness of the different features learned by the two models.
Fig. 8 is the confusion matrix of the test set and fig. 9 the ROC curves of the test set, including evaluations of model 1, model 2 and the double-flow model. The test set evaluates the generalization ability of the final model; the three-class accuracy obtained by the invention reaches 95.4%.
The training and testing of the double-flow graph convolution neural network model were completed in a Python 3.6 / TensorFlow 1.13.1 environment.
Finally, it should be noted that the above detailed description is intended to illustrate rather than limit the technical solution of this patent. Although the patent is described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution may be modified or equivalently replaced without departing from its spirit and scope, and such modifications are covered by the claims of this patent.
Claims (9)
1. An anesthesia depth monitoring system based on a graph convolution neural network, characterized in that: the system comprises a data preprocessing module, a functional network construction module, a graph conversion module and a double-flow graph convolution neural network module;
the data preprocessing module: the system is used for preprocessing an electroencephalogram signal of the cerebral cortex;
the functional network construction module: the method comprises the steps of intercepting sample data into a plurality of time segments of different anesthesia stages, calculating a phase lag index PLI, calculating an adjacency matrix for each sample, and obtaining network topological graph samples of the different anesthesia stages, wherein the anesthesia stages comprise a waking stage, a moderate anesthesia stage and a deep anesthesia stage;
the graph conversion module: the method is used for converting the adjacency matrix of the network topological graph sample into a dual graph, the edge connections of the dual graph obtained by conversion are the same, the edge weight information is kept on the node characteristics of the dual graph, and the converted graph sample is a weighted graph constructed based on a phase lag index;
the double-flow graph convolution neural network module: model 1 is used for extracting edge weight information, model 2 is used for extracting node feature information, and the prediction probabilities output by the two models are added per class to obtain the prediction result.
2. A computer-readable storage medium storing a computer program which, when executed by a processor, implements an anesthesia depth monitoring method based on a graph convolution neural network, the method comprising the following steps:
1) collecting electroencephalogram signals of a plurality of channels, and preprocessing original signals;
2) intercepting a plurality of time slices in different anesthesia stages as data samples, calculating phase lag index PLI, calculating an adjacency matrix for each sample, and obtaining network topological graph samples in different anesthesia stages, wherein the anesthesia stages comprise a waking stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of the network topological graph into a dual graph, wherein the edge connections of the dual graph obtained by conversion are the same, the edge weight information is kept on the node characteristics of the dual graph, the converted graph sample is a weighted graph constructed based on a phase lag index, the node characteristics are kept at the same time, and a full connection matrix is constructed to represent the topological structure of the node characteristics;
4) constructing a double-flow graph convolution neural network, dividing the graph sample into two flow graph data, and finding two public adjacency matrixes of the two flow graph data; in the data of the two flow diagrams, one flow diagram data is a weighted diagram constructed based on a phase lag index, and the other flow diagram data is a full-connection adjacency matrix retaining original node characteristics;
5) inputting the data of the two flow graphs into two models of a double-flow graph convolution neural network respectively, performing graph coarsening and rapid pooling, reducing data dimensionality, aggregating similar nodes, and outputting the predicted value of each anesthesia stage through a full connection layer;
6) and respectively adding the predicted values of different anesthesia stages output by the two models of the double-flow graph convolution neural network, and outputting the category with the maximum predicted value as the prediction result of the anesthesia stage.
3. The computer-readable storage medium of claim 2, wherein the cortical electroencephalogram signal of step 1) is a 16-channel ECoG signal covering the prefrontal-parietal lobes of the subject's brain; the preprocessing comprises 0.1-100 Hz filtering, 50 Hz notch filtering and 200 Hz resampling.
4. A computer-readable storage medium according to claim 2, wherein: the phase lag index PLI is calculated by the following method:
setting the signal sequences of the two channels as x₁(t) and x₂(t), the instantaneous phase is computed from the analytic signal constructed with the Hilbert transform:

z_i(t) = x_i(t) + j·H[x_i(t)]

wherein H[x_i(t)] denotes the Hilbert transform of x_i(t), i = 1 or 2 indexes the signal, and j is the imaginary unit:

H[x_i(t)] = (1/π) · P.V. ∫ x_i(τ)/(t − τ) dτ

in the formula, P.V. represents the Cauchy principal value, t is time, and τ is the integration variable;

the relative phase between the two channels is calculated as:

φ_rel(t) = arg( z₁(t)·z₂*(t) / (|z₁(t)|·|z₂(t)|) )

in the formula, z₂*(t) is the complex conjugate of z₂(t);

the PLI value is calculated as:

PLI = | ⟨ sign(φ_rel(t)) ⟩ |

wherein ⟨·⟩ denotes the time average;
PLI ranges between 0 and 1, with 0 indicating no phase lock between the two channels and 1 indicating perfect phase coupling between the two channels.
5. A computer-readable storage medium according to claim 2, wherein: in step 4), the double-flow graph convolution neural network is constructed with the spectral-domain graph convolution method GCN, which extends graph convolution to the graph frequency domain via the graph Fourier transform and filters the signal with a filter.
6. A computer-readable storage medium according to claim 2, wherein: in the step 5), the two models of the double-flow graph convolution neural network both adopt a spectrogram convolution method of approximating a convolution kernel by a Chebyshev polynomial and a graph coarsening and rapid pooling method based on a Graclus multistage clustering algorithm.
7. A computer-readable storage medium according to claim 6, wherein: the Graclus multilevel clustering algorithm computes successively coarser versions of the graph with a greedy rule that minimizes a spectral clustering objective.
8. A computer-readable storage medium according to claim 2, wherein: and 2) calculating the average absolute value of the amplitudes of the channel signals of each time slice as a node characteristic.
9. A computer-readable storage medium according to claim 2, wherein: the data samples in the step 2) are randomly divided into a training set, a verification set and a test set according to the ratio of 8:1:1, the training set is used for training the neural network model of the graph, the verification set is used for adjusting the hyper-parameters of the model, the capability of the model is preliminarily evaluated, and the test set is used for evaluating the generalization capability of the final model.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111346082.1A CN113768474B (en) | 2021-11-15 | 2021-11-15 | Anesthesia depth monitoring method and system based on graph convolution neural network

Publications (2)

Publication Number | Publication Date
---|---
CN113768474A (en) | 2021-12-10
CN113768474B (en) | 2022-03-18