CN113768474B - Anesthesia depth monitoring method and system based on graph convolution neural network - Google Patents
- Publication number: CN113768474B (application CN202111346082.1A)
- Authority: CN (China)
- Prior art keywords: graph, anesthesia, neural network, convolutional neural, data
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
- A61B5/4821: Determining level or depth of anaesthesia
- A61B5/369: Electroencephalography [EEG]
- A61B5/37: Intracranial electroencephalography [IC-EEG], e.g. electrocorticography [ECoG]
- A61B5/7225: Details of analogue processing, e.g. filtering
- A61B5/7264: Classification of physiological signals or data, e.g. using neural networks
- A61B5/7267: Classification involving training the classification device
- G06F18/213: Feature extraction, e.g. by transforming the feature space
- G06F18/214: Generating training patterns; bootstrap methods
- G06N3/045: Combinations of networks
- G06N3/08: Learning methods
- G06F2218/08: Feature extraction (signal processing)
- G06F2218/12: Classification; matching
Abstract
The invention discloses an anesthesia depth monitoring method and system based on a graph convolutional neural network. The system comprises: a data preprocessing module for preprocessing electrocorticography (ECoG) signals; a functional network construction module for computing the phase lag index (PLI) of the sample data, one adjacency matrix per sample, to obtain network topology graph samples for the different anesthesia stages; a graph conversion module for performing the dual-graph conversion, turning each graph sample, a weighted graph built from the phase lag index, into a dual graph, while also constructing a new graph from the node features; and a dual-stream graph convolutional neural network module for storing the two models of the dual-stream network, in which model 1 extracts edge-weight information and model 2 extracts node-feature information, and the predicted class probabilities of the two models are summed to produce the prediction. The invention finds new features that distinguish different anesthesia states, classifies the awake, moderate-anesthesia, and deep-anesthesia states with 95.4% accuracy, and can monitor the different states of anesthesia well.
Description
Technical Field

The present invention relates to the fields of biomedical signal processing and deep learning, and in particular to an anesthesia depth monitoring method and system based on a graph convolutional neural network.
Background

During surgery under general anesthesia, the anesthesiologist must monitor the patient's anesthetic state in real time. Anesthesia monitors help the anesthesiologist track the patient's depth of anesthesia and avoid unexpected intraoperative awareness. If anesthesia is too deep, the patient may have difficulty waking after surgery and may even suffer adverse neurological sequelae; if anesthesia is too shallow, the patient may wake during the operation and be left with psychological trauma. Real-time monitoring of the depth of anesthesia during surgery is therefore very important.

Clinical techniques commonly used to monitor the depth of anesthesia include EEG bispectral index (BIS) analysis, auditory evoked potentials (AEP), and anesthesia entropy. All of these monitor the depth of anesthesia by processing EEG signals, which record the surface activity of the brain and have the advantages of being non-invasive, harmless, and easy to acquire. Current mainstream depth-of-anesthesia monitoring techniques still have shortcomings: for example, BIS is not effective for isoflurane-induced anesthesia, varies considerably between individuals, and its algorithm is not public. Exploring a more stable depth-of-anesthesia monitoring algorithm is therefore necessary.
Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art by constructing a brain functional network from EEG signals and, in combination with a dual-stream graph convolutional neural network, providing an anesthesia depth monitoring method and system based on a graph convolutional neural network.

To achieve the above purpose, the anesthesia depth monitoring method based on a graph convolutional neural network designed by the present invention comprises the following steps:
1) Acquire electrocorticography (ECoG) signals from several channels and preprocess the raw signals.

2) Extract several time segments from the different anesthesia stages as data samples and compute the phase lag index (PLI), one adjacency matrix per sample, to obtain network topology graph samples for the different anesthesia stages. The anesthesia stages comprise the awake stage, the moderate anesthesia stage, and the deep anesthesia stage.

3) Convert the adjacency matrix of each network topology graph into a dual graph. All converted dual graphs share the same edge connections, and the edge-weight information is retained in the node features of the dual graph; the graph sample before conversion is the weighted graph built from the phase lag index. The original node features are also retained, and a fully connected matrix is constructed to represent the topology of these node features.

4) Construct a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data and finding the two common adjacency matrices of the two streams. Of the two streams, one is the weighted graph built from the phase lag index, and the other is the fully connected graph that retains the original node features.

5) Feed the two streams of graph data into the two models of the dual-stream graph convolutional neural network, apply graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and output the predicted values for each anesthesia stage through a fully connected layer.

6) Add the predicted values for the different anesthesia stages output by the two models class by class, and output the class with the largest predicted value as the anesthesia-stage prediction.
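Step 6 above can be sketched as follows. This is an illustrative sketch only: the stage names and the example probabilities are assumptions, not values from the patent.

```python
import numpy as np

STAGES = ["awake", "moderate anesthesia", "deep anesthesia"]

def fuse_predictions(p_edge_stream, p_node_stream):
    """Add the two models' predicted values per class; return the winning stage."""
    p_sum = np.asarray(p_edge_stream) + np.asarray(p_node_stream)
    return STAGES[int(np.argmax(p_sum))]

p1 = [0.10, 0.60, 0.30]  # model 1 (edge-weight stream) softmax output
p2 = [0.20, 0.45, 0.35]  # model 2 (node-feature stream) softmax output
print(fuse_predictions(p1, p2))  # moderate anesthesia
```

Summing probabilities rather than taking a hard vote lets a confident stream outweigh an uncertain one while still using both sources of information.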
Preferably, in step 1) the electrocorticography signals are 16-channel ECoG signals from the frontal-parietal region of the subject's brain, and the preprocessing comprises 0.1-100 Hz band-pass filtering, 50 Hz notch filtering, and resampling to 200 Hz.
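A minimal sketch of this preprocessing chain is shown below. The original sampling rate (1000 Hz here) and the 4th-order Butterworth design are assumptions; the patent specifies only the 0.1-100 Hz band, the 50 Hz notch, and resampling to 200 Hz.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt, resample

def preprocess(ecog, fs=1000, target_fs=200):
    # 1) 0.1-100 Hz band-pass (zero-phase; second-order sections for stability)
    sos = butter(4, [0.1, 100.0], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, ecog, axis=-1)
    # 2) 50 Hz notch to suppress mains interference
    b, a = iirnotch(50.0, Q=30.0, fs=fs)
    x = filtfilt(b, a, x, axis=-1)
    # 3) resample to the target rate of 200 Hz
    n_out = int(round(x.shape[-1] * target_fs / fs))
    return resample(x, n_out, axis=-1)

sig = np.random.randn(16, 1000)  # 16 channels, 1 s at the assumed 1000 Hz
print(preprocess(sig).shape)     # (16, 200)
```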
Preferably, the phase lag index PLI is computed as follows.

Let the signal sequences of the two channels be x1(t) and x2(t). The Hilbert transform is used to build the analytic signals and obtain the instantaneous phases:

z_i(t) = x_i(t) + j·H[x_i(t)], i = 1 or 2,

where j is the imaginary unit and H[x_i(t)] denotes the Hilbert transform of x_i(t):

H[x_i(t)] = (1/π) P.V. ∫ x_i(τ) / (t − τ) dτ,

where P.V. denotes the Cauchy principal value, t is time, and τ is the integration variable.

The relative locking between the two channels is computed as

z(t) = z1(t) z2*(t) / ( |z1(t)| |z2(t)| ),

where z2*(t) is the complex conjugate of z2(t), and the instantaneous phase difference is Δφ(t) = arg z(t).

The PLI value is then calculated as

PLI = | ⟨ sign( sin Δφ(t) ) ⟩ |,

where ⟨·⟩ denotes the average over time. PLI ranges from 0 to 1: 0 indicates no phase locking between the two channels, and 1 indicates perfect phase coupling between them.
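The derivation above can be sketched directly with scipy's Hilbert transform. The 10 Hz test tones and the 200 Hz rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def pli(x1, x2):
    z1, z2 = hilbert(x1), hilbert(x2)              # analytic signals z_i(t)
    dphi = np.angle(z1 * np.conj(z2))              # instantaneous phase difference
    return np.abs(np.mean(np.sign(np.sin(dphi))))  # |<sign(sin(dphi))>|

t = np.linspace(0, 1, 200, endpoint=False)         # 1 s at 200 Hz
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t - np.pi / 4)         # constant pi/4 lag
print(pli(a, b))  # close to 1: a consistent phase lead/lag
print(pli(a, a))  # 0: no phase difference at all

# One 16x16 adjacency matrix per segment: A[i, j] = pli(channel_i, channel_j).
```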
Preferably, in step 4) the dual-stream graph convolutional neural network is constructed with the spectral-domain graph convolution method (GCN), which extends convolution to the frequency domain of the graph via the Fourier transform and filters the signal with a filter.
Preferably, in step 5) both models of the dual-stream graph convolutional neural network use the spectral graph convolution method in which the convolution kernel is approximated by Chebyshev polynomials, together with graph coarsening and fast pooling based on the Graclus multilevel clustering algorithm.
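An illustrative sketch of such a Chebyshev-approximated spectral filter, y = Σ_k θ_k T_k(L_scaled) x, is given below. The filter order and the θ values are demonstration assumptions; in the network they are learned parameters.

```python
import numpy as np

def chebyshev_filter(W, x, theta):
    n = W.shape[0]
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    # normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    lmax = np.linalg.eigvalsh(L).max()
    L_s = 2.0 * L / lmax - np.eye(n)               # eigenvalues rescaled into [-1, 1]
    T_prev, T_cur = x, L_s @ x                     # T_0(L_s)x and T_1(L_s)x
    y = theta[0] * T_prev + theta[1] * T_cur
    for k in range(2, len(theta)):
        T_prev, T_cur = T_cur, 2.0 * L_s @ T_cur - T_prev  # Chebyshev recurrence
        y = y + theta[k] * T_cur
    return y

rng = np.random.default_rng(0)
W = rng.random((16, 16)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # weighted graph
x = rng.standard_normal(16)                                          # one node signal
print(chebyshev_filter(W, x, theta=[0.5, 0.3, 0.2]).shape)  # (16,)
```

The recurrence avoids an explicit eigendecomposition per filtering step, which is the point of the Chebyshev approximation: each added order only costs one sparse matrix-vector product.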
Preferably, the Graclus multilevel clustering algorithm uses a greedy procedure to compute successive coarser versions of the graph while minimizing the spectral clustering objective.
Preferably, in step 2) the mean absolute value of the signal amplitude of each channel in each time segment is computed and used as the node feature.
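This node feature is a one-line computation. The random array below stands in for a preprocessed ECoG segment (16 channels, 1 s at 200 Hz, both assumptions drawn from the preferred embodiment).

```python
import numpy as np

rng = np.random.default_rng(1)
segment = rng.standard_normal((16, 200))          # channels x samples
node_features = np.mean(np.abs(segment), axis=1)  # one scalar per channel
print(node_features.shape)  # (16,)
```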
Preferably, the data samples in step 2) are randomly divided into a training set, a validation set, and a test set in the ratio 8:1:1. The training set is used to train the graph neural network model, the validation set to tune the model's hyperparameters and make a preliminary assessment of its ability, and the test set to evaluate the generalization ability of the final model.
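The 8:1:1 random split can be sketched on sample indices (labels would be indexed with the same arrays). The sample count and the seed are illustrative assumptions.

```python
import numpy as np

def split_indices(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return (idx[:n_train],                 # training set (8 parts)
            idx[n_train:n_train + n_val],  # validation set (1 part)
            idx[n_train + n_val:])         # test set (1 part)

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 800 100 100
```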
The present invention also provides an anesthesia depth monitoring system based on a graph convolutional neural network, the system comprising a data preprocessing module, a functional network construction module, a graph conversion module, and a dual-stream graph convolutional neural network module.

The data preprocessing module is used to preprocess the electrocorticography signals.

The functional network construction module is used to cut the sample data into several time segments for the different anesthesia stages and to compute the phase lag index (PLI), one adjacency matrix per sample, obtaining network topology graph samples for the different anesthesia stages, which comprise the awake stage, the moderate anesthesia stage, and the deep anesthesia stage.

The graph conversion module is used to convert the adjacency matrix of each network topology graph sample into a dual graph. All converted dual graphs share the same edge connections, and the edge-weight information is retained in the node features of the dual graph, so the converted graph sample represents the weighted graph built from the phase lag index. In addition, the original node features are retained and used to construct a new fully connected graph.

The dual-stream graph convolutional neural network module is used to store the two models of the dual-stream graph convolutional neural network: model 1 extracts the edge-weight information, model 2 extracts the node-feature information, and the predicted class probabilities output by the two models are summed to produce the prediction.

The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above anesthesia depth monitoring method based on a graph convolutional neural network.
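The dual-graph conversion performed by the graph conversion module above can be sketched as a line-graph construction (an assumption about the intended construction): every edge of the 16-node weighted PLI graph becomes a dual node carrying the PLI weight as its feature, and two dual nodes are connected when their original edges share an endpoint, so every sample ends up with the same dual adjacency matrix.

```python
import numpy as np
from itertools import combinations

def to_dual(W):
    n = W.shape[0]
    edges = list(combinations(range(n), 2))        # 120 edges for n = 16
    feats = np.array([W[i, j] for i, j in edges])  # edge weights -> node features
    m = len(edges)
    A = np.zeros((m, m))
    for p in range(m):
        for q in range(p + 1, m):
            if set(edges[p]) & set(edges[q]):      # edges share an endpoint
                A[p, q] = A[q, p] = 1.0
    return A, feats

rng = np.random.default_rng(2)
W = rng.random((16, 16)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
A_dual, x_dual = to_dual(W)
print(A_dual.shape, x_dual.shape)  # (120, 120) (120,)
```

Because the edge set of a complete 16-node graph is fixed, A_dual is identical for every sample; only the node features x_dual vary, which is what makes a shared-adjacency spectral GCN applicable.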
The beneficial effects of the present invention include:

1) Traditional spectral graph convolution achieves excellent graph classification on the basis of a single shared adjacency matrix; it is sensitive to node-feature information but cannot handle graphs with differing network topologies. The present invention adopts a dual-graph conversion that neatly turns the weighted graphs computed from the phase lag index into graphs with an identical adjacency matrix and, more importantly, converts the edge-weight features, to which graph convolution is otherwise insensitive, into node features, so that the well-performing traditional spectral graph convolution method can be applied directly.

2) A graph comprises both its edge weights and its node features. The edge weights are transformed by the dual-graph conversion into the node-feature input of the first model. For the node features of the graph, the present invention proposes a second graph convolution model: after the edge weights have been extracted, the original node information is retained and the relationships between the remaining nodes are treated as equal, so the second model uses a fully connected matrix without self-connections as its adjacency matrix.

3) The present invention designs a dual-stream graph convolutional neural network structure in which both streams use spectral graph convolution for graph classification, one stream extracting edge-weight information and the other extracting node-feature information. After the two models are trained, their predicted probabilities are summed to predict the test set.

4) The present invention finds new features that distinguish the different anesthesia states and applies the combination of brain networks and graph convolutional neural networks to depth-of-anesthesia monitoring, reaching 95.4% classification accuracy for the awake, moderate-anesthesia, and deep-anesthesia states. The different states of anesthesia are monitored well, providing a new method for clinical anesthesia monitoring.

The approach proposed by the present invention applies not only to the anesthesia data used here but also to EEG signal classification in other scenarios.
Brief Description of the Drawings

Figure 1 is a structural block diagram of the system of the present invention.

Figure 2 shows the 16 selected channels covering the prefrontal-parietal region of the macaque.

Figure 3 is an example of an adjacency matrix computed from the phase lag index.

Figure 4 shows the network topology of the macaque prefrontal-parietal region in different states (from left to right: awake, moderate anesthesia, deep anesthesia).

Figure 5 shows the adjacency matrices of the macaque prefrontal-parietal region in different states (from left to right: awake, moderate anesthesia, deep anesthesia).

Figure 6 is an example of the dual-graph conversion.

Figure 7 shows the adjacency matrices of the two models.

Figure 8 shows the confusion matrices of model 1, model 2, and the dual-stream model on the test set.

Figure 9 shows the ROC curves of model 1, model 2, and the dual-stream model on the test set.
Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

The anesthesia depth monitoring method based on a graph convolutional neural network proposed by the present invention comprises the following steps:
1) Acquire electrocorticography (ECoG) signals from several channels and preprocess the raw signals.

2) Extract several time segments from the different anesthesia stages as data samples and compute the phase lag index (PLI), one adjacency matrix per sample, to obtain network topology graph samples for the different anesthesia stages; additionally, compute the mean of the absolute amplitude values of each time segment as the node features of the graph sample. The anesthesia stages comprise the awake stage, the moderate anesthesia stage, and the deep anesthesia stage.

3) Convert the adjacency matrix of each network topology graph into a dual graph. All converted dual graphs share the same edge connections, and the edge-weight information is retained in the node features of the dual graph; the graph sample before conversion is the weighted graph built from the phase lag index. The original node features are also retained, and a fully connected matrix is constructed to represent the topology of these node features.

4) Construct a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data and finding the two common adjacency matrices of the two streams. Of the two streams, one is the weighted graph built from the phase lag index, and the other is the fully connected graph that retains the original node features.

5) Feed the two streams of graph data into the two models of the dual-stream graph convolutional neural network, apply graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and output the predicted values for each anesthesia stage through a fully connected layer.

6) Add the predicted values for the different anesthesia stages output by the two models class by class, and output the class with the largest predicted value as the anesthesia-stage prediction.
The implementation of each step is described in detail below.
To accomplish the stated purpose of the invention, the present invention uses macaque anesthesia experiment data from the public database Neurotycho (http://neurotycho.org/) to investigate the depth of anesthesia in macaques under ketamine-medetomidine induction. The experimental data are 16-channel ECoG signals covering the prefrontal-parietal region of the brain, from five anesthesia experiments on two macaques. ECoG signals are electrocorticography signals; like EEG signals, they record the electrical activity of the region between paired electrodes, but ECoG is an invasive brain-computer interface with higher spatial resolution and higher signal quality than EEG. The experiments comprised a pre-anesthesia awake stage, an anesthesia induction stage, an anesthesia maintenance stage, an anesthesia recovery stage, and a post-anesthesia awake stage. After time segments of equal length (1 s) were extracted from the different stages, a brain topology network was constructed with the functional connectivity method (phase lag index), and the mean absolute value of the signal amplitude of each channel in each segment was computed as the node feature, yielding graph samples for three stages (awake, moderate anesthesia, deep anesthesia). The data were randomly divided into training, validation, and test sets in the ratio 8:1:1; the training set was used to train the graph neural network model, the validation set to tune the hyperparameters and make a preliminary assessment of the model's ability, and the test set to evaluate the generalization ability of the final model.
First, the macaque anesthesia experiment is described in detail; it comprises three phases.
11) Awake phase:
(a) AwakeEyeOpened-START/END: the macaque rests with its eyes open.
(b) AwakeEyeClose-START/END: with its eyes covered, the macaque rests with its eyes closed.
12) Anesthesia phase:
(a) AnesticDrugInjection: intramuscular injection of ketamine-medetomidine.
(b) Anesthetized-START/END: the macaque is in a state of loss of consciousness (LOC). Macaques are considered to enter LOC when they no longer respond to manipulation of the hand or to touching of the nostrils or philtrum with a cotton swab. LOC can additionally be confirmed by the appearance of slow-wave oscillations in the neural signals.
13) Recovery phase:
(a) AntagonistInjection: atipamezole is injected to bring the monkey out of anesthesia.
(b) RecoveryEyeClosed-START/END: the point at which slow-wave oscillations disappear from the neural signal is taken as the start of eyes-closed recovery, during which the macaque rests calmly with its eyes closed.
(c) RecoveryEyeOpened-START/END: after the eye cover is removed, the monkey sits calmly with its eyes open.
The above is the task design of the anesthesia experiment.
The experimental data comprise 5 experiments on 2 macaques; the signals cover all stages before, during, and after anesthesia and are sampled at 1 kHz. A 16-channel ECoG montage covering the macaque prefrontal-parietal region is selected; in Figure 2, black dots mark the signal channels selected by the invention.
Step 2) includes extracting data samples, computing the PLI, and computing the node features of the network topology graph.
21) Extracting data samples:
After preprocessing the data, multiple 1 s segments are extracted evenly from each stage of each experiment: 1,000 segments per experiment for each of the awake, moderate-anesthesia, and deep-anesthesia stages. Because the moderate-anesthesia stage contains less data, the sliding-window step is set to 0.1 s.
(a) Awake-stage data are taken from the pre-anesthesia and post-anesthesia awake stages; the post-anesthesia awake stage is the late recovery period, ensuring the macaque is indeed awake.
(b) Moderate-anesthesia data come from the middle of the induction period (from anesthetic injection to reaching LOC).
(c) Deep-anesthesia data come from the anesthesia maintenance period.
This yields 5×3×1000 = 15,000 data samples (5 experiments, 3 stages, 1,000 samples per stage per experiment). After resampling, each sample comprises 16 channels and 200 sampling points, and each sample corresponds to one network topology graph.
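The segment extraction above can be sketched as follows; this is a minimal NumPy illustration under the stated window size (1 s = 200 samples at 200 Hz) and 0.1 s step, with array names that are illustrative rather than taken from the patent:

```python
import numpy as np

def extract_segments(recording: np.ndarray, win: int = 200, step: int = 20,
                     n_segments: int = 1000) -> np.ndarray:
    """Slide a `win`-sample window over `recording` (channels, samples)
    in steps of `step` samples, returning (n_segments, channels, win)."""
    starts = np.arange(n_segments) * step
    assert starts[-1] + win <= recording.shape[1], "recording too short"
    return np.stack([recording[:, s:s + win] for s in starts])

# One stage of one experiment: 16 channels, long enough for 1000 windows.
stage = np.random.randn(16, 200 + 999 * 20)
segs = extract_segments(stage)
assert segs.shape == (1000, 16, 200)
```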
22) Computing the PLI to obtain the adjacency matrix:
The correlation between channels is quantified with the phase lag index (PLI), computed as follows.

Let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$. The Hilbert transform is used to construct the analytic signal and obtain the instantaneous phase:

$$z_i(t) = x_i(t) + j\,\tilde{x}_i(t), \qquad \phi_i(t) = \arctan\frac{\tilde{x}_i(t)}{x_i(t)}$$

where $\tilde{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$, $i = 1$ or $2$, and $j$ is the imaginary unit. The Hilbert transform is computed as

$$\tilde{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,d\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time, and $\tau$ is the integration variable. After computing the phase of each channel signal, the relative phase between the two channels can be obtained from

$$z_1(t)\,z_2^{*}(t), \qquad \Delta\phi(t) = \phi_1(t) - \phi_2(t)$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$.

The PLI ranges from 0 to 1, with 0 indicating no phase locking between the two channels and 1 indicating perfect phase coupling. The PLI value is computed as

$$\mathrm{PLI} = \bigl|\,\langle \operatorname{sign}(\Delta\phi(t)) \rangle\,\bigr|$$

where $\langle\cdot\rangle$ denotes the time average.
Once the data samples are obtained, the phase correlation between channels is computed with the PLI formula, giving each sample a 16×16 adjacency matrix with entries distributed in [0, 1]. Figure 3 illustrates the computation: the 1 s signals (200 points) of node 1 and node 2 yield one correlation value by the phase-lag-index formula from the summary of the invention, stored at position (1, 2) of the adjacency matrix; the correlation of node 2 with node 1 is identical, so the resulting adjacency matrix is real and symmetric. Likewise, the correlation of node 3 with node 15 corresponds to position (3, 15) and that of node 15 with node 3 to position (15, 3). Proceeding in this way, the entire 16×16 adjacency matrix is computed.
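The PLI-based adjacency construction just described can be sketched in NumPy/SciPy as follows. This is an assumed formulation of the standard PLI definition, not the patent's own code; the `sign(sin Δφ)` form handles phase wrapping:

```python
import numpy as np
from scipy.signal import hilbert

def pli_adjacency(segment: np.ndarray) -> np.ndarray:
    """Phase lag index between every channel pair of one (channels, samples) segment."""
    analytic = hilbert(segment, axis=1)      # z_i(t) = x_i(t) + j * H[x_i](t)
    phase = np.angle(analytic)               # instantaneous phase per channel
    n = segment.shape[0]
    adj = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            dphi = phase[a] - phase[b]       # relative phase Δφ(t)
            pli = abs(np.mean(np.sign(np.sin(dphi))))
            adj[a, b] = adj[b, a] = pli      # real symmetric, entries in [0, 1]
    return adj

rng = np.random.default_rng(0)
A = pli_adjacency(rng.standard_normal((16, 200)))   # one 1 s, 16-channel sample
assert A.shape == (16, 16)
```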
Figure 4 shows the prefrontal-parietal network structure of macaque Chibi at the three stages; Figure 5 shows the adjacency matrices corresponding to Chibi's topological network graphs at the three stages (self-connections are added to the plotted adjacency matrices to make the distinction between stages more visible).
23) For each segment, the absolute value of the signal amplitude of each channel is computed and averaged; this serves as the node feature of the corresponding network topology graph, one feature value per node.
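The node feature described above, one scalar per channel, reduces to a single NumPy expression (segment shape is illustrative):

```python
import numpy as np

segment = np.random.randn(16, 200)                  # 16 channels × 200 samples (1 s)
node_features = np.mean(np.abs(segment), axis=1)    # mean absolute amplitude per channel
assert node_features.shape == (16,)
```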
An example of the dual-graph conversion in step 3) is shown in Figure 6. The idea of the conversion is to turn edges into nodes and nodes into edges: if two edges share a node, an edge is added between the corresponding nodes of the dual graph.
In Figure 6, the original graph contains edges 01, 02, 03, 12, 13, and 23, which become the nodes of the right-hand graph, while their edge weights become node features. Edges 01 and 02 share node 0; so do 01 and 03, and 02 and 03; and so on. Each pair of edges that shares a node is connected in the new graph, with a connection value of 1.
Based on this dual-graph idea, the present invention converts the adjacency matrices (16×16) obtained from the phase lag index into dual graphs (120×120), where 120 = (16×16−16)/2. The converted dual graphs all share the same edge connectivity, namely the adjacency matrix shown in Figure 7 (left); the original edge weights now appear as node features of the new graph.
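The dual-graph (line-graph) conversion can be sketched as follows: the 120 upper-triangle edges of the weighted 16-node graph become dual nodes, their weights become node features, and two dual nodes are connected (value 1) when the original edges share an endpoint. Function names are illustrative:

```python
import numpy as np
from itertools import combinations

def to_dual(adj: np.ndarray):
    """Return (dual adjacency, dual node features) for a weighted undirected graph."""
    n = adj.shape[0]
    edges = list(combinations(range(n), 2))            # 120 edges for n = 16
    feats = np.array([adj[i, j] for i, j in edges])    # edge weights -> node features
    m = len(edges)
    dual = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            if set(edges[a]) & set(edges[b]):          # original edges share a node
                dual[a, b] = dual[b, a] = 1.0
    return dual, feats

dual_adj, x = to_dual(np.random.rand(16, 16))
assert dual_adj.shape == (120, 120) and x.shape == (120,)
```

Note that `dual_adj` is identical for every sample, which is exactly what allows all graph samples to share one adjacency matrix; only the features `x` differ between samples.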
On the other hand, the node features computed earlier from the amplitude information are also retained, as the input of the second graph convolutional neural network model, GCN model 2. GCN model 2 filters these retained node features on the basis of a 16×16 adjacency matrix. Since the edge information between nodes has already been extracted and fed to GCN model 1, the relationships between the retained nodes can be regarded as equal, so a fully connected matrix without self-connections is used as its adjacency matrix.
Step 4) builds the dual-stream graph convolutional neural network:
The dual-stream graph convolutional neural network comprises two models. GCN model 1 (hereafter model 1) uses a 6-layer graph convolution-pooling structure to extract the edge-weight information, i.e. the 120×120 graph data; GCN model 2 (hereafter model 2) uses a 4-layer graph convolution-pooling structure to extract the node-feature information, i.e. the 16×16 graph data.
In computer vision, CNNs effectively extract features from images, whose pixels lie on a regular grid (Euclidean-structured data). Much scientific data, however, is non-Euclidean in structure, such as social networks and protein structures. To apply machine learning to such non-Euclidean data, GCNs have become a research focus.
The initial difficulty in applying convolution to graph-structured data is parameter sharing: images with regularly arranged elements satisfy translation invariance, so a convolution kernel of fixed size can be defined over the whole image, whereas the nodes of a graph have varying numbers of neighbors, so a single fixed-size kernel cannot be used.
Graph convolutional neural networks fall into spectral methods and spatial methods. Facing the parameter-sharing problem, spectral methods define convolution in the spectral domain rather than the node domain, where translation invariance does not hold and a fixed-size kernel cannot be defined; by defining convolution in the spectral domain and transforming back to the spatial domain, parameter sharing is achieved. Spatial methods instead define convolution directly in the node domain: they first determine the neighbors of a target node, order them, and select a fixed number of neighbors for each node, achieving parameter sharing by a different route than the spectral methods. Both families perform well on graph-related tasks. The present invention adopts the spectral graph convolution method to realize graph classification in both models.
Specifically, the invention uses a GCN that defines convolution directly in the spectral domain and classifies graphs on the basis of a single adjacency matrix.
Spectral graph convolution extends convolution to the frequency domain of the graph via the Fourier transform.
For an input signal $x$, a filter $g_\theta = \operatorname{diag}(\theta)$ with parameter $\theta$ is taken in the Fourier domain:

$$g_\theta \star x = U\, g_\theta\, U^{\top} x$$

where $U$ is the eigenvector matrix of the graph Laplacian $L$. The Laplacian is

$$L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^{\top}$$

where $A$ is the adjacency matrix, $D$ is the degree matrix, $\Lambda$ is the diagonal matrix of eigenvalues of $L$, and $U^{\top}x$ is the graph Fourier transform of $x$.

To reduce the amount of computation, $g_\theta(\Lambda)$ is approximated to order $K$ with Chebyshev polynomials, giving the improved convolution kernel

$$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{\Lambda}), \qquad \tilde{\Lambda} = \frac{2}{\lambda_{\max}}\Lambda - I_N$$

where $\lambda_{\max}$ is the largest eigenvalue of $L$ and the $\theta'_k$ are the Chebyshev coefficients.

The signal $x$ is then filtered with $g_{\theta'}$:

$$g_{\theta'} \star x \approx \sum_{k=0}^{K} \theta'_k\, T_k(\tilde{L})\, x, \qquad \tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$$

where $T_k(\tilde{L})$ is the Chebyshev polynomial of order $k$ evaluated at the scaled Laplacian $\tilde{L}$. This means the terms can be computed with the recurrence $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$.
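The Chebyshev filtering above can be sketched in NumPy as a minimal dense implementation for a single small graph (real implementations such as ChebNet use sparse matrices and many filters per layer; names here are illustrative):

```python
import numpy as np

def chebyshev_filter(A: np.ndarray, x: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """Filter signal x on graph A with Chebyshev coefficients theta (order K = len(theta)-1)."""
    n = len(A)
    d = A.sum(axis=1)
    L = np.eye(n) - (d ** -0.5)[:, None] * A * (d ** -0.5)[None, :]  # normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_t = (2.0 / lam_max) * L - np.eye(n)        # eigenvalues rescaled into [-1, 1]
    Tx = [x, L_t @ x]                            # T_0(L~)x = x,  T_1(L~)x = L~ x
    for _ in range(2, len(theta)):
        Tx.append(2 * L_t @ Tx[-1] - Tx[-2])     # T_k = 2 L~ T_{k-1} - T_{k-2}
    return sum(c * t for c, t in zip(theta, Tx))

rng = np.random.default_rng(1)
A = rng.random((16, 16)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
x_in = rng.standard_normal(16)
y = chebyshev_filter(A, x_in, np.array([0.5, -0.2, 0.1, 0.05]))   # K = 3
assert y.shape == (16,)
```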
After data preprocessing, stage-wise data extraction, and functional-network construction, 30,000 graph samples across the different stages are obtained. The resulting graphs are weighted graphs built from the phase lag index, which describe the brain-network topology in more detail than conventional binary graphs. In addition, to achieve higher classification accuracy, the invention takes the averaged absolute value of each channel's signal amplitude as the node feature of the graph.
After the graph samples are constructed from the topology and node information, they are split into two streams of graph data, and two common adjacency matrices are found for the two types of graph data; these adjacency matrices serve as the graphs' topology and are input to the graph convolutional network to compute the graph Laplacian. The graph data of the first stream are obtained by the dual-graph conversion, which turns the edge weights into node features that graph convolution is sensitive to and yields a 120×120 adjacency matrix. The second stream keeps the original node features: with the original edge weights removed, the relationships between nodes are treated as equal, so a 16×16 fully connected adjacency matrix (with self-connections removed) is constructed.
In step 5), the two types of graph data are fed into the two models of the dual-stream graph convolutional neural network described in the summary of the invention. The input feature x of the first-stream GCN is one-dimensional, i.e. 120×1, and its input adjacency matrix, shown in Figure 7 (left), is 120×120; the input feature x of the second-stream GCN is also one-dimensional, i.e. 16×1, and its input adjacency matrix, shown in Figure 7 (right), is 16×16. The two streams use the same convolution and pooling methods: spectral graph convolution with a Chebyshev-polynomial approximation of the kernel, and graph coarsening with fast pooling based on the Graclus multi-level clustering algorithm. Only the convolution-pooling structures differ, owing to the different sizes of the input features.
In step 6), the two models of the dual-stream graph convolutional neural network output predicted probabilities separately: each model passes its output through a softmax classification layer and yields a predicted probability for every class. The trained model 1 and model 2 each predict on the test set. Denote model 1's predicted probabilities for the awake, moderate-anesthesia, and deep-anesthesia states by $p_{a1}$, $p_{m1}$, and $p_{d1}$, and model 2's by $p_{a2}$, $p_{m2}$, and $p_{d2}$. Adding the two models' probabilities for each class gives the dual-stream network's predicted probabilities for the awake, moderate-anesthesia, and deep-anesthesia states:

$$p_a = p_{a1} + p_{a2}, \qquad p_m = p_{m1} + p_{m2}, \qquad p_d = p_{d1} + p_{d2}$$

Taking the maximum of these values yields the corresponding predicted class.
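The late-fusion step above amounts to a per-class sum and an argmax; the probability values in this sketch are illustrative, not measured outputs:

```python
import numpy as np

classes = ["awake", "moderate", "deep"]
p_model1 = np.array([0.10, 0.25, 0.65])   # softmax output of GCN model 1 (edge-weight stream)
p_model2 = np.array([0.05, 0.40, 0.55])   # softmax output of GCN model 2 (node-feature stream)

p_fused = p_model1 + p_model2             # class-wise sum of the two streams
prediction = classes[int(np.argmax(p_fused))]
assert prediction == "deep"
```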
Based on the above method, the anesthesia depth monitoring system based on a graph convolutional neural network proposed by the invention is shown in Figure 1; it comprises a data preprocessing module, a functional-network construction module, a graph conversion module, and a dual-stream graph convolutional neural network module.
Data preprocessing module: preprocesses the electrocorticography signals by filtering and downsampling the raw data. Filtering removes noise and mains interference; downsampling reduces the amount of data to process. Preprocessing comprises 0.5-100 Hz band-pass filtering, 50 Hz notch filtering, and resampling to 200 Hz. All preprocessing was performed in MATLAB R2016b.
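The preprocessing chain above (band-pass, notch, resample) was done in MATLAB; an equivalent SciPy formulation, offered here only as an assumed sketch, could look like this:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, resample_poly

fs = 1000.0                                    # raw sampling rate: 1 kHz
raw = np.random.randn(16, 10 * int(fs))        # 10 s of 16-channel data (placeholder)

b, a = butter(4, [0.5, 100.0], btype="bandpass", fs=fs)
clean = filtfilt(b, a, raw, axis=1)            # 0.5-100 Hz band-pass
bn, an = iirnotch(50.0, Q=30.0, fs=fs)
clean = filtfilt(bn, an, clean, axis=1)        # 50 Hz mains notch
clean = resample_poly(clean, up=1, down=5, axis=1)   # 1000 Hz -> 200 Hz

assert clean.shape == (16, 2000)               # 10 s at 200 Hz
```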
Functional-network construction module: cuts the sample data into time segments for the different anesthesia stages, computes inter-channel correlation with the phase lag index, constructs the adjacency matrix to obtain the network topology graph of the key brain regions, and computes the mean absolute signal amplitude as the graph node feature.
Graph conversion module: converts the adjacency matrix of each network topology graph sample into a dual graph, turning edge weights into node features that graph convolution is sensitive to, so that different graphs share one common adjacency matrix and the spectral graph convolution classification method can be applied directly; at the same time it retains the original node features and builds a new fully connected adjacency matrix as the second stream of graph data.
Dual-stream graph convolutional neural network module: stores the two models of the dual-stream graph convolutional neural network. Model 1 extracts edge-weight information and model 2 extracts node-feature information; the predicted probabilities output by the two models are summed class by class to obtain the prediction. The module's dual-stream structure learns the two types of graph data produced by the graph conversion module, extracts graph features with spectral graph convolution, aggregates similar nodes with graph coarsening and fast pooling to reduce computation, and finally predicts the probabilities of the anesthesia states with a softmax layer; the probabilities of the two models are added to fuse them, and the summed probabilities are used to predict on the test set.
Table 1 lists the layer parameters of model 1 of the dual-stream graph convolutional neural network module, and Table 2 lists those of model 2, where O denotes the number of anesthesia-depth classes and the remaining entries the number of filters in each graph convolutional layer.
The structure of the dual-stream graph convolutional neural network built in the invention is described as follows:
In the GCN models of the invention, a graph convolutional layer leaves the graph dimension unchanged, while a max-pooling layer halves it: an N×N Laplacian becomes N/2×N/2 after max pooling. For the 120×120 adjacency matrix, either a 6-level pooling structure 64-32-16-8-4-2-1 or a 3-level structure 120-60-30-15 can be used; the invention chooses the former. For the 16×16 adjacency matrix, the invention chooses a 4-level pooling structure 16-8-4-2-1.
The input adjacency matrix describes the graph structure. It is coarsened to obtain multi-level coarsened matrices, and according to these the original data are rearranged and fast-pooled: the raw data are reorganized into 3D tensors by the rearrangement returned by coarsening and then fed into the network for convolution.
The graph convolutional layers learn the features of the graph data, and the pooling layers reduce the data dimension and aggregate similar nodes. After the two models are trained, they are used to predict on the test set to obtain the predicted classes.
Table 3 gives the prediction accuracy on the test set for model 1, model 2, and the combined dual-stream model. Model 1 and model 2 each achieve quite good accuracy on their own, and combining them yields better results still, showing that the two models learn distinct, complementary features.
Figure 8 shows the confusion matrix on the test set and Figure 9 its ROC curves, covering model 1, model 2, and the dual-stream model. The test set evaluates the generalization ability of the final model; the invention achieves a three-class accuracy of 95.4% on the test set.
The dual-stream graph convolutional neural network models were trained and tested in a Python 3.6 / TensorFlow 1.13.1 environment.
Finally, it should be noted that the above embodiments only illustrate, and do not limit, the technical solution of this patent. Although the patent has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope, and all such changes fall within the scope of the claims of this patent.