CN113768474B - Anesthesia depth monitoring method and system based on graph convolution neural network - Google Patents

Anesthesia depth monitoring method and system based on graph convolution neural network

Info

Publication number: CN113768474B (application CN202111346082.1A; also published as CN113768474A)
Authority: CN (China)
Prior art keywords: graph, anesthesia, neural network, convolutional neural, data
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 马力, 刘泉, 艾青松, 陈昆, 谢田立, 肖智文, 明法畅, 邹家喻, 徐子严
Current and original assignee: Wuhan University of Technology WUT
Application filed by Wuhan University of Technology WUT, priority to CN202111346082.1A

Classifications

    • A61B5/4821: Determining level or depth of anaesthesia
    • A61B5/369: Electroencephalography [EEG]
    • A61B5/37: Intracranial electroencephalography [IC-EEG], e.g. electrocorticography [ECoG]
    • A61B5/7225: Details of analogue processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data involving training the classification device
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06F2218/08: Feature extraction (pattern recognition specially adapted for signal processing)
    • G06F2218/12: Classification; Matching

Abstract

The invention discloses an anesthesia depth monitoring method and system based on a graph convolutional neural network. The system comprises: a data preprocessing module, used to preprocess the electrocorticography (ECoG) signals; a functional network construction module, used to compute the phase lag index PLI of the sample data, with one adjacency matrix computed per sample, to obtain network topology graph samples for the different anesthesia stages; a graph conversion module, used to perform the dual-graph conversion, turning each graph sample into a weighted graph built from the phase lag index and at the same time building a new graph from the node features; and a dual-stream graph convolutional neural network module, used to store the two models of the dual-stream graph convolutional neural network, where model 1 extracts edge-weight information and model 2 extracts node-feature information, and the per-class prediction probabilities output by the two models are summed to obtain the prediction result. The invention finds new features that distinguish different anesthesia states; the classification accuracy for the awake, moderate-anesthesia, and deep-anesthesia states reaches 95.4%, so the different states of anesthesia can be monitored well.

Description

A method and system for anesthesia depth monitoring based on a graph convolutional neural network

Technical Field

The invention relates to the fields of biomedical signal processing and deep learning, and in particular to a method and system for anesthesia depth monitoring based on a graph convolutional neural network.

Background Art

In surgery under general anesthesia, the anesthesiologist needs to monitor the patient's anesthesia state in real time. Anesthesia monitors help the anesthesiologist track the patient's depth of anesthesia and avoid accidental intraoperative awareness. If anesthesia is too deep, the patient may have difficulty waking up after the operation and may even suffer adverse neurological sequelae; if anesthesia is too light, the patient may wake up during the operation, which can leave lasting psychological trauma. It is therefore very important to monitor the patient's depth of anesthesia in real time during surgery.

At present, the clinical techniques commonly used to monitor the depth of anesthesia include the EEG bispectral index (BIS), auditory evoked potentials (AEP), and anesthesia entropy. All of these techniques monitor the depth of anesthesia by processing EEG signals, which record the surface signals of the cerebral cortex and have the advantages of being non-invasive, harmless, and easy to acquire. The currently popular depth-of-anesthesia monitoring techniques still have shortcomings: for example, BIS is not effective for isoflurane-induced anesthesia, varies considerably across individuals, and its algorithm is not public. It is therefore necessary to explore more stable algorithms for monitoring the depth of anesthesia.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art by constructing a brain functional network from EEG signals and, in combination with a dual-stream graph convolutional neural network, providing a method and system for anesthesia depth monitoring based on a graph convolutional neural network.

To achieve the above purpose, the anesthesia depth monitoring method based on a graph convolutional neural network designed by the present invention is characterized in that it comprises the following steps:

1) Acquire electrocorticography (ECoG) signals from several channels and preprocess the raw signals;

2) Extract several time segments of the different anesthesia stages as data samples, compute the phase lag index PLI, and compute one adjacency matrix per sample to obtain network topology graph samples for the different anesthesia stages, where the anesthesia stages include the awake stage, the moderate-anesthesia stage, and the deep-anesthesia stage;

3) Convert the adjacency matrix of the network topology graph into a dual graph. The converted dual graphs all have identical edge connections, and the edge-weight information is preserved in the node features of the dual graph; the graph sample before conversion is a weighted graph built from the phase lag index. At the same time, retain the node features and build a fully connected matrix to represent the topology over these node features;

4) Construct a dual-stream graph convolutional neural network, split the graph samples into two streams of graph data, and find the two shared adjacency matrices of the two streams; in the two streams, one stream is the weighted graph built from the phase lag index and the other stream is a fully connected graph that retains the original node features;

5) Feed the two streams of graph data into the two models of the dual-stream graph convolutional neural network, perform graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and output the predicted values for each anesthesia stage through fully connected layers;

6) Add up, class by class, the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and output the class with the largest predicted value as the anesthesia-stage prediction result.

Preferably, the electrocorticography signals in step 1) are 16-channel ECoG signals covering the frontal-parietal region of the subject's brain; the preprocessing includes 0.1-100 Hz band-pass filtering, 50 Hz notch filtering, and resampling to 200 Hz.

Preferably, the phase lag index PLI is computed as follows:

Let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$. The instantaneous phase $\phi_i(t)$ is obtained from the analytic signal computed with the Hilbert transform:

$$z_i(t) = x_i(t) + j\,\tilde{x}_i(t) = A_i(t)\,e^{j\phi_i(t)}, \quad i = 1, 2$$

where $\tilde{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and $j$ is the imaginary unit:

$$\tilde{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,d\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time, and $\tau$ is the integration variable.

The relative phase locking between the two channels is computed as

$$z_1(t)\,z_2^{*}(t) = A_1(t)\,A_2(t)\,e^{j\left(\phi_1(t)-\phi_2(t)\right)} = A_1(t)\,A_2(t)\,e^{j\,\Delta\phi(t)}$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$.

The PLI value is then computed as

$$\mathrm{PLI} = \left|\left\langle \operatorname{sign}\big(\Delta\phi(t)\big)\right\rangle\right|$$

PLI ranges from 0 to 1, where 0 indicates no phase locking between the two channels and 1 indicates perfect phase coupling between them.
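As an illustration of the formulas above, a minimal sketch of how the PLI between two channels could be computed with SciPy's Hilbert transform is given below; the function name and the synthetic example signals are illustrative and not part of the patent.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x1, x2):
    """Phase lag index between two 1-D signals of equal length.

    Returns a value in [0, 1]: 0 means no phase locking,
    1 means perfect phase coupling.
    """
    z1 = hilbert(x1)                     # analytic signal z1(t) = x1(t) + j*x1_hilbert(t)
    z2 = hilbert(x2)
    dphi = np.angle(z1 * np.conj(z2))    # relative phase difference phi1(t) - phi2(t)
    return np.abs(np.mean(np.sign(dphi)))

# Example with two synthetic 1 s segments sampled at 200 Hz
t = np.arange(200) / 200.0
x1 = np.sin(2 * np.pi * 10 * t)
x2 = np.sin(2 * np.pi * 10 * t + 0.5)    # constant phase offset, so PLI is close to 1
print(phase_lag_index(x1, x2))
```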

Preferably, the dual-stream graph convolutional neural network in step 4) is built with the spectral-domain graph convolution method GCN, which extends graph convolution to the frequency domain of the graph through the Fourier transform and filters the signal with a filter.

Preferably, both models of the dual-stream graph convolutional neural network in step 5) use the spectral graph convolution method that approximates the convolution kernel with Chebyshev polynomials, together with the graph coarsening and fast pooling method based on the Graclus multilevel clustering algorithm.

Preferably, the Graclus multilevel clustering algorithm uses a greedy algorithm to compute successively coarser versions of the graph while minimizing the spectral clustering objective.

Preferably, in step 2) the mean absolute value of the signal amplitude of each channel in each time segment is computed and used as the node features.

Preferably, the data samples in step 2) are randomly split into a training set, a validation set, and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the hyperparameters of the model and make a preliminary assessment of its capability, and the test set is used to evaluate the generalization ability of the final model.
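A minimal sketch of the 8:1:1 random split described above, using NumPy; the function and variable names are illustrative.

```python
import numpy as np

def split_811(n_samples, seed=0):
    """Return index arrays for a random 8:1:1 train/validation/test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_811(15000)
```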

The invention also proposes an anesthesia depth monitoring system based on a graph convolutional neural network. The system comprises a data preprocessing module, a functional network construction module, a graph conversion module, and a dual-stream graph convolutional neural network module.

The data preprocessing module is used to preprocess the electrocorticography signals.

The functional network construction module is used to cut the sample data into several time segments belonging to the different anesthesia stages, compute the phase lag index PLI, and compute one adjacency matrix per sample to obtain network topology graph samples for the different anesthesia stages, where the anesthesia stages include the awake stage, the moderate-anesthesia stage, and the deep-anesthesia stage.

The graph conversion module is used to convert the adjacency matrix of each network topology graph sample into a dual graph, where the converted dual graphs all have identical edge connections and the edge-weight information is preserved in the node features of the dual graph, so that the converted graph sample is a weighted graph built from the phase lag index; in addition, the original node features are retained and used to build a new fully connected graph.

The dual-stream graph convolutional neural network module is used to store the two models of the dual-stream graph convolutional neural network, where model 1 extracts edge-weight information and model 2 extracts node-feature information; the per-class prediction probabilities output by the two models are summed to obtain the prediction result.

The invention further proposes a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the above anesthesia depth monitoring method based on a graph convolutional neural network is implemented.

The beneficial effects of the present invention include:

1) The traditional spectral graph convolution method can achieve excellent graph classification performance on the basis of a single adjacency matrix; it is sensitive to node-feature information, but it cannot handle graphs with different network topologies. The present invention adopts a dual-graph conversion, which neatly turns the weighted graphs obtained from the phase lag index into graphs that share the same adjacency matrix and, more importantly, turns the edge-weight features that graph convolution is insensitive to into node features, so that the well-performing traditional spectral graph convolution method can be applied directly.

2) A graph comprises both its edge weights and its node features. The edge weights of the graph are converted through the dual graph into the node-feature input of the first model. For the node features of the original graph, the invention proposes a second graph convolution model: after the edge weights have been extracted, the original node information is retained, and the relationships among the remaining nodes are regarded as equal, so the second graph convolution model uses a fully connected matrix with self-connections removed as its adjacency matrix.

3) The invention designs a dual-stream graph convolutional neural network structure in which both streams use the spectral graph convolution approach to graph classification: one stream extracts the edge-weight information and the other stream extracts the node-feature information. After the two models have been trained, their predicted probabilities are summed to predict the test set.

4) The invention finds new features that distinguish different anesthesia states and applies the combination of brain networks and graph convolutional neural networks to anesthesia depth monitoring. The classification accuracy for the awake, moderate-anesthesia, and deep-anesthesia states reaches 95.4%, so the different states of anesthesia can be monitored well, which provides a new method for clinical anesthesia monitoring.

The idea proposed by the present invention is applicable not only to this anesthesia dataset but also to EEG signal classification in other scenarios.

Description of the Drawings

Figure 1 is a structural block diagram of the system of the present invention.

Figure 2 shows the 16 selected channels covering the prefrontal-parietal region of the macaque.

Figure 3 is an example of computing the adjacency matrix from the phase lag index.

Figure 4 shows the prefrontal-parietal network topologies of the macaque in different states (from left to right: awake, moderate anesthesia, deep anesthesia).

Figure 5 shows the prefrontal-parietal adjacency matrices of the macaque in different states (from left to right: awake, moderate anesthesia, deep anesthesia).

Figure 6 is an example of dual-graph conversion.

Figure 7 shows the adjacency matrices of the two models.

Figure 8 shows the confusion matrices of model 1, model 2, and the dual-stream model on the test set.

Figure 9 shows the ROC curves of model 1, model 2, and the dual-stream model on the test set.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

The anesthesia depth monitoring method based on a graph convolutional neural network proposed by the present invention comprises the following steps:

1) Acquire electrocorticography (ECoG) signals from several channels and preprocess the raw signals;

2) Extract several time segments of the different anesthesia stages as data samples, compute the phase lag index PLI, and compute one adjacency matrix per sample to obtain network topology graph samples for the different anesthesia stages; in addition, compute the mean of the absolute amplitude values of each time segment and use it as the node features of the graph sample. The anesthesia stages include the awake stage, the moderate-anesthesia stage, and the deep-anesthesia stage;

3) Convert the adjacency matrix of the network topology graph into a dual graph. The converted dual graphs all have identical edge connections, and the edge-weight information is preserved in the node features of the dual graph; the graph sample before conversion is a weighted graph built from the phase lag index. At the same time, retain the node features and build a fully connected matrix to represent the topology over these node features;

4) Construct a dual-stream graph convolutional neural network, split the graph samples into two streams of graph data, and find the two shared adjacency matrices of the two streams; in the two streams, one stream is the weighted graph built from the phase lag index and the other stream is a fully connected graph that retains the original node features;

5) Feed the two streams of graph data into the two models of the dual-stream graph convolutional neural network, perform graph coarsening and fast pooling to reduce the data dimensionality and aggregate similar nodes, and output the predicted values for each anesthesia stage through fully connected layers;

6) Add up, class by class, the predicted values for the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and output the class with the largest predicted value as the anesthesia-stage prediction result.

The implementation of each step is described in detail below.

To accomplish the stated purpose of the invention, the present invention uses the macaque anesthesia experiment data from the public database Neurotycho (http://neurotycho.org/) to study the depth of anesthesia of macaques under ketamine-medetomidine induction. The experimental data are the 16-channel ECoG signals covering the prefrontal-parietal region of the brain from 5 anesthesia experiments on 2 macaques. ECoG signals are electrocorticography signals; like EEG signals, they record the electrical activity of the region between pairs of electrodes, but ECoG is an invasive brain-computer interface and offers higher spatial resolution and higher signal quality than EEG. The experiments include a pre-anesthesia awake stage, an anesthesia induction stage, an anesthesia maintenance stage, an anesthesia recovery stage, and a post-anesthesia awake stage. After time segments of the same length (1 s) are extracted in the different stages, a brain topology network is constructed with a functional connectivity method (the phase lag index), and the mean absolute signal amplitude of each channel in each time segment is computed as the node features, giving graph samples for three stages (awake, moderate anesthesia, deep anesthesia). The data are randomly split into a training set, a validation set, and a test set at a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the model hyperparameters and make a preliminary assessment of its capability, and the test set is used to evaluate the generalization ability of the final model.

First, the macaque anesthesia experiments are described in detail. Each anesthesia experiment comprises three phases.

11) Awake phase:

(a) AwakeEyeOpened-START/END: the macaque rests with its eyes open.

(b) AwakeEyeClose-START/END: with its eyes covered, the macaque rests with its eyes closed.

12) Anesthesia phase:

(a) AnesticDrugInjection: intramuscular injection of ketamine-medetomidine.

(b) Anesthetized-START/END: the macaque is in a state of loss of consciousness (LOC). The macaque is considered to have entered the LOC state when it no longer responds to manipulation of its hand or to its nostrils or philtrum being touched with a cotton swab. In addition, LOC can be confirmed by observing slow-wave oscillations in the neural signals.

13) Recovery phase:

(a) AntagonistInjection: atipamezole is injected so that the monkey recovers from the anesthetized state.

(b) RecoveryEyeClosed-START/END: the point at which the slow-wave oscillations in the neural signals disappear is taken as the start of eyes-closed recovery; in this state the macaque rests calmly with its eyes closed.

(c) RecoveryEyeOpened-START/END: after the eye mask is removed, the monkey sits calmly with its eyes open.

The above is the task design of the anesthesia experiments.

The experimental data are 5 experiments on 2 macaques; the signals cover all stages before, during, and after anesthesia, with a sampling rate of 1 kHz. The 16-channel ECoG signals covering the prefrontal-parietal region of the macaque are selected; as shown in Figure 2, the black dots mark the signal channels selected by the present invention.

Step 2) includes extracting the data samples, computing the PLI, and computing the node features of the network topology graphs.

21) Extracting the data samples:

After the data have been preprocessed, multiple 1 s segments are extracted uniformly from each stage of each experiment: 1000 segments of 1 s are extracted per experiment for each of the awake, moderate-anesthesia, and deep-anesthesia stages. Because the amount of moderate-anesthesia data is small, a sliding window with a step of 0.1 s is used for that stage.

(a) The awake-stage data are extracted from the pre-anesthesia awake stage and the post-anesthesia awake stage, where the post-anesthesia awake stage is the late recovery period, to ensure that the macaque is in the awake state.

(b) The moderate-anesthesia data are taken from the middle of the anesthesia induction period (from anesthetic injection to reaching LOC).

(c) The deep-anesthesia data are taken from the anesthesia maintenance period.

This yields 5 × 3 × 1000 = 15,000 data samples, where 5 is the number of experiments, 3 is the number of stages, and 1000 is the number of samples per stage of each experiment. After resampling, each sample contains 16 channels and 200 sampling points, and each sample corresponds to one network topology graph.
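The segment extraction can be sketched as follows: 1 s windows (200 samples after resampling to 200 Hz) are cut out of a continuous multichannel recording with a chosen step, where the 0.1 s step corresponds to the sliding window used for the moderate-anesthesia stage; all names and the synthetic data are illustrative.

```python
import numpy as np

def extract_segments(signal, fs=200, win_s=1.0, step_s=0.1, n_segments=1000):
    """Cut (n_channels, n_samples) data into 1 s windows with a given step."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    starts = np.arange(0, signal.shape[1] - win + 1, step)[:n_segments]
    return np.stack([signal[:, s:s + win] for s in starts])   # (n_segments, 16, 200)

# e.g. one stage of one experiment: 16 channels, 5 minutes at 200 Hz
stage_data = np.random.randn(16, 5 * 60 * 200)
segments = extract_segments(stage_data)
print(segments.shape)   # (1000, 16, 200)
```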

22) Computing the PLI to obtain the adjacency matrices:

The correlation between channels is computed with the phase lag index (PLI), which is calculated as follows:

Let the signal sequences of the two channels be $x_1(t)$ and $x_2(t)$. The instantaneous phase $\phi_i(t)$ is obtained from the analytic signal computed with the Hilbert transform:

$$z_i(t) = x_i(t) + j\,\tilde{x}_i(t) = A_i(t)\,e^{j\phi_i(t)}, \quad i = 1, 2$$

where $\tilde{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and $j$ is the imaginary unit, computed as

$$\tilde{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty}\frac{x_i(\tau)}{t-\tau}\,d\tau$$

where P.V. denotes the Cauchy principal value, $t$ is time, and $\tau$ is the integration variable. After the phase of each channel signal has been computed, the relative phase locking between the two channels is computed as

$$z_1(t)\,z_2^{*}(t) = A_1(t)\,A_2(t)\,e^{j\left(\phi_1(t)-\phi_2(t)\right)} = A_1(t)\,A_2(t)\,e^{j\,\Delta\phi(t)}$$

where $z_2^{*}(t)$ is the complex conjugate of $z_2(t)$.

PLI ranges from 0 to 1, where 0 indicates no phase locking between the two channels and 1 indicates perfect phase coupling between them. The PLI value is computed as

$$\mathrm{PLI} = \left|\left\langle \operatorname{sign}\big(\Delta\phi(t)\big)\right\rangle\right|$$

After the data samples have been obtained, the phase correlation between channels is computed with the phase lag index formula, and each sample yields a 16×16 adjacency matrix whose entries lie between 0 and 1. Figure 3 gives an example of computing the adjacency matrix from the phase lag index: from the 1 s signals (200 points) of node 1 and node 2, a correlation value $\mathrm{PLI}_{12}$ is computed with the phase lag index formula given in the summary of the invention and placed at position (1,2) of the adjacency matrix, while the correlation value of node 2 with node 1 satisfies $\mathrm{PLI}_{21} = \mathrm{PLI}_{12}$, so the resulting adjacency matrix is a real symmetric matrix. Likewise, the correlation value of node 3 with node 15 corresponds to position (3,15) of the adjacency matrix and that of node 15 with node 3 to position (15,3), with $\mathrm{PLI}_{3,15} = \mathrm{PLI}_{15,3}$, and so on until the entire 16×16 adjacency matrix has been computed.

Figure 4 shows the prefrontal-parietal network structure of macaque Chibi at the three different stages; Figure 5 shows the corresponding adjacency matrices of macaque Chibi's topological network at the three stages (to show the distinction between the stages more clearly, the adjacency matrices drawn there include self-connections).

23) For each segment, the absolute values of the signal amplitude of each channel are computed and then averaged; the result is used as the node features of each network topology graph, with one feature value per node.
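Combining steps 22) and 23), the following sketch shows how each 1 s segment could be turned into a 16×16 PLI adjacency matrix together with a 16-dimensional node-feature vector (mean absolute amplitude per channel); the function name is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def segment_to_graph(segment):
    """segment: (16, 200) array -> (16x16 PLI adjacency matrix, 16 node features)."""
    n_ch = segment.shape[0]
    phases = np.angle(hilbert(segment, axis=1))        # instantaneous phase per channel
    adj = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            dphi = phases[i] - phases[j]
            dphi = np.angle(np.exp(1j * dphi))         # wrap the phase difference to (-pi, pi]
            pli = np.abs(np.mean(np.sign(dphi)))
            adj[i, j] = adj[j, i] = pli                # real symmetric, values in [0, 1]
    node_features = np.mean(np.abs(segment), axis=1)   # mean absolute amplitude per channel
    return adj, node_features
```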

An example of the dual-graph conversion of step 3) is shown in Figure 6. The idea of dual-graph conversion is to turn edges into nodes and nodes into edges: if two edges share a node, an edge is added between the corresponding nodes of the resulting dual graph.

As shown in Figure 6, the original graph contains the edges 01, 02, 03, 12, 13, and 23, which become the nodes of the graph on the right, while their edge weights become node features. In addition, edges 01 and 02 share node 0, edges 01 and 03 share node 0, edges 02 and 03 share node 0, and so on; for every pair of edges with a shared node, a connection with value 1 is added in the new graph.

Based on this idea of dual-graph conversion, the present invention converts the adjacency matrices (16×16) obtained from the phase lag index into dual graphs (120×120), where 120 = (16×16-16)/2. The converted dual graphs all have the same edge connections, namely the adjacency matrix shown in Figure 7 (left), and the original edge weights are now expressed in the node features of the new graph.

On the other hand, the node features computed earlier from the amplitude information are also retained and serve as the input of the other graph convolutional neural network model, GCN model 2, which filters these retained node features on the basis of a 16×16 adjacency matrix. Since the edge information between nodes has already been extracted and fed to GCN model 1, the relationships among the nodes carrying the retained node features can be regarded as equal, so a fully connected matrix with self-connections removed is used as its adjacency matrix.
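The two streams described above can be sketched as follows: the 120 upper-triangular PLI values of the 16×16 weighted adjacency matrix become the node features of a fixed 120-node dual graph whose nodes are connected whenever the corresponding original edges share an endpoint, while the second stream uses a 16×16 all-ones adjacency matrix with the self-connections removed; the function and variable names are illustrative.

```python
import numpy as np
from itertools import combinations

N_CH = 16
EDGES = list(combinations(range(N_CH), 2))        # 120 = (16*16 - 16) / 2 edges

def dual_graph_adjacency():
    """Shared 120x120 adjacency of the dual graph: dual nodes are original edges,
    connected (weight 1) when the two edges share an original node."""
    n = len(EDGES)
    A = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            if set(EDGES[a]) & set(EDGES[b]):      # the two edges share an endpoint
                A[a, b] = A[b, a] = 1.0
    return A

def dual_node_features(pli_adj):
    """Node features of the dual graph: the PLI edge weights of the original graph."""
    return np.array([pli_adj[i, j] for i, j in EDGES])    # shape (120,)

def full_adjacency():
    """Shared 16x16 fully connected adjacency (self-connections removed) for stream 2."""
    return np.ones((N_CH, N_CH)) - np.eye(N_CH)
```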

Step 4) Constructing the dual-stream graph convolutional neural network:

The dual-stream graph convolutional neural network comprises two models. GCN model 1 (hereinafter model 1) uses a six-layer graph convolution-pooling structure to extract the edge-weight information, i.e., the graph data of size 120×120; GCN model 2 (hereinafter model 2) uses a four-layer graph convolution-pooling structure to extract the node-feature information, i.e., the 16×16 graph data.

In computer vision, CNNs can effectively extract features from images, whose pixels are arranged on a regular grid, i.e., Euclidean data. However, much of the data in scientific research has a non-Euclidean structure, such as social networks and protein structures. To apply machine learning methods to such non-Euclidean data, GCNs have become a focus of research.

The initial difficulty in applying convolution to graph-structured data is parameter sharing: an image with regularly arranged elements satisfies translation invariance, so a convolution kernel of the same size can be defined over the whole image, whereas the nodes of a graph have different numbers of neighbors, so a convolution kernel of one fixed size cannot be used for the convolution operation.

Graph convolutional neural networks include spectral methods and spatial methods. Facing the parameter-sharing problem, spectral methods define convolution in the spectral domain rather than in the node domain, because the node domain does not satisfy translation invariance and kernels of a fixed size cannot be defined there; implementing convolution in the spectral domain and then transforming back to the spatial domain solves the parameter-sharing problem. Spatial methods define convolution directly in the node domain: to solve the parameter-sharing problem, they first determine the neighbors of the target node, arrange these neighbors in order, and select a fixed number of neighbors for each node, which also achieves parameter sharing but follows a different idea from the spectral methods. Both kinds of methods perform well on graph-related tasks. The present invention uses the spectral graph convolution method to realize the graph classification of the two models.

The present invention adopts GCN, a method that defines convolution directly in the spectral domain and classifies graphs on the basis of a single adjacency matrix.

Spectral graph convolution extends graph convolution to the frequency domain of the graph through the Fourier transform.

For an input signal $x \in \mathbb{R}^{N}$ (one scalar per node), consider a filter $g_\theta = \operatorname{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^{N}$ in the Fourier domain:

$$g_\theta \star x = U\,g_\theta\,U^{T}x$$

where $U$ is the matrix of eigenvectors of the graph Laplacian $L$. The normalized Laplacian matrix is

$$L = I_N - D^{-1/2}\,A\,D^{-1/2} = U\,\Lambda\,U^{T}$$

where $A$ is the adjacency matrix, $D$ is the degree matrix, $\Lambda$ is the diagonal matrix of eigenvalues of the Laplacian $L$, and $U^{T}x$ is the graph Fourier transform of $x$.

To reduce the computational cost, $g_\theta(\Lambda)$ is approximated by a truncated expansion in Chebyshev polynomials up to order $K$, giving the improved convolution kernel

$$g_{\theta'}(\Lambda) \approx \sum_{k=0}^{K}\theta'_k\,T_k(\tilde{\Lambda})$$

where $\tilde{\Lambda} = \frac{2}{\lambda_{max}}\Lambda - I_N$, $\lambda_{max}$ is the largest eigenvalue of the Laplacian $L$, and $\theta'_k$ are the Chebyshev coefficients.

The signal $x$ is then filtered with the filter $g_{\theta'}$:

$$g_{\theta'} \star x \approx \sum_{k=0}^{K}\theta'_k\,T_k(\tilde{L})\,x$$

where $T_k(\tilde{L})$ is the Chebyshev polynomial of order $k$ evaluated at the scaled Laplacian $\tilde{L} = \frac{2}{\lambda_{max}}L - I_N$. The Chebyshev polynomials satisfy the recurrence $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$, which can be used to compute $T_k(\tilde{L})\,x$.

After the data preprocessing, the extraction of segments from the different stages, and the construction of the functional networks, 30,000 graph samples covering the different stages are obtained. The resulting graphs are weighted graphs built from the phase lag index; unlike conventional binary graphs, weighted graphs describe the brain network topology in more detail. In addition, to achieve higher classification accuracy, the present invention also takes the averaged absolute values of the channel signal amplitudes as the node features of the graphs.

After the graph samples have been built from the topology and the node information, they are split into two streams of graph data, and the two shared adjacency matrices of the two types of graph data are found; these adjacency matrices are fed into the graph convolutional neural network as the topology of the graphs and are used to compute the graph Laplacian. The graph data of the first stream are obtained by the dual-graph conversion, which turns the edge weights into node features that graph convolution is sensitive to and yields an adjacency matrix of size 120×120; the other stream consists of the retained original node features, and since the original edge weights have been taken out, the relationships between nodes are regarded as equal, so a 16×16 fully connected adjacency matrix (with self-connections removed) is constructed.

In step 5), the two types of graph data obtained above are fed into the two models of the dual-stream graph convolutional neural network described in the summary of the invention. The feature x input to the first-stream graph convolutional network is one-dimensional, i.e., 120×1, and its input adjacency matrix, shown in Figure 7 (left), has size 120×120; the feature x input to the second-stream graph convolutional network is also one-dimensional, i.e., 16×1, and its input adjacency matrix, shown in Figure 7 (right), has size 16×16. The two streams use the same convolution and pooling methods, namely the spectral graph convolution that approximates the convolution kernel with Chebyshev polynomials and the graph coarsening and fast pooling based on the Graclus multilevel clustering algorithm; only the convolution-pooling structures differ, because the sizes of the input data features differ.

In step 6), the two models of the dual-stream graph convolutional neural network output their prediction probabilities separately. In each of the two models, a softmax classification layer outputs the predicted probability of each class. The trained model 1 and model 2 each predict on the test set: model 1 outputs the predicted probabilities $p_1^{awake}$, $p_1^{moderate}$, and $p_1^{deep}$ for the awake, moderate-anesthesia, and deep-anesthesia states, and model 2 outputs the corresponding predicted probabilities $p_2^{awake}$, $p_2^{moderate}$, and $p_2^{deep}$. The per-class predicted probabilities of the two models are added to obtain the predicted probabilities of the dual-stream graph convolutional neural network for the awake, moderate-anesthesia, and deep-anesthesia states: $p_1^{awake}+p_2^{awake}$, $p_1^{moderate}+p_2^{moderate}$, and $p_1^{deep}+p_2^{deep}$; the class corresponding to the maximum of these values is taken as the prediction.
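The fusion rule of step 6) can be sketched as follows: the per-class softmax probabilities of the two models are summed and the class with the largest summed probability is returned; the class labels and variable names are illustrative.

```python
import numpy as np

CLASSES = ["awake", "moderate anesthesia", "deep anesthesia"]

def fuse_predictions(p_model1, p_model2):
    """p_model1, p_model2: (n_samples, 3) softmax outputs of the two streams."""
    p_sum = p_model1 + p_model2                   # per-class sum of the two streams
    return [CLASSES[k] for k in np.argmax(p_sum, axis=1)]

# Example with two hypothetical test samples
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
print(fuse_predictions(p1, p2))   # ['awake', 'deep anesthesia']
```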

Based on the above method, the anesthesia depth monitoring system based on a graph convolutional neural network proposed by the present invention is shown in Figure 1 and comprises a data preprocessing module, a functional network construction module, a graph conversion module, and a dual-stream graph convolutional neural network module.

Data preprocessing module: used to preprocess the electrocorticography signals by filtering and downsampling the raw data; filtering removes noise and power-line interference, and downsampling reduces the amount of data to be processed. The preprocessing includes 0.5-100 Hz band-pass filtering, 50 Hz notch filtering, and resampling to 200 Hz. All of the above preprocessing operations are performed in MATLAB 2016b.
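The preprocessing of this module is done in MATLAB 2016b in the patent; purely as an illustration, an equivalent pipeline is sketched below in Python with SciPy, where the filter orders and the zero-phase filtering are assumptions not taken from the patent.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, resample

def preprocess(raw, fs=1000, fs_new=200):
    """raw: (n_channels, n_samples) ECoG at 1 kHz -> band-pass, notch, resample."""
    b, a = butter(4, [0.5, 100.0], btype="bandpass", fs=fs)    # 0.5-100 Hz band-pass
    x = filtfilt(b, a, raw, axis=1)
    bn, an = iirnotch(50.0, Q=30.0, fs=fs)                     # 50 Hz notch filter
    x = filtfilt(bn, an, x, axis=1)
    n_new = int(x.shape[1] * fs_new / fs)                      # resample to 200 Hz
    return resample(x, n_new, axis=1)

raw = np.random.randn(16, 10 * 1000)    # 10 s of 16-channel data at 1 kHz
print(preprocess(raw).shape)            # (16, 2000)
```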

Functional network construction module: used to cut the sample data into several time segments belonging to the different anesthesia stages, compute the correlations between channels with the phase lag index method, build the adjacency matrices to obtain the network topology graphs of the key brain regions, and compute the mean absolute signal amplitude as the graph node features.

Graph conversion module: used to convert the adjacency matrix of each network topology graph sample into a dual graph, turning the edge weights into node features that graph convolution is sensitive to, so that the different graphs share one identical adjacency matrix and the spectral graph convolution classification method can be applied directly; at the same time, the original node features are retained and a new fully connected adjacency matrix is built as the other stream of graph data.

Dual-stream graph convolutional neural network module: used to store the two models of the dual-stream graph convolutional neural network, where model 1 extracts edge-weight information and model 2 extracts node-feature information, and the per-class prediction probabilities output by the two models are summed to obtain the prediction result. The module is designed with a dual-stream structure whose two streams learn the two types of graph data produced by the graph conversion module; spectral graph convolution is used to extract graph features, graph coarsening and fast pooling aggregate similar nodes and reduce the amount of computation, and finally a softmax layer outputs the predicted probabilities of the different anesthesia states. The predicted probabilities of the two models are added to fuse the models, and the summed probabilities are used to predict the test set.

Table 1 lists the parameters of each layer of model 1 of the dual-stream graph convolutional neural network module, and Table 2 lists the parameters of each layer of model 2, where O denotes the number of classes of the anesthesia depth classification task and $F_1, F_2, \ldots$ denote the number of filters of each graph convolution layer.

[Table 1: layer parameters of model 1 of the dual-stream graph convolutional neural network module]

[Table 2: layer parameters of model 2 of the dual-stream graph convolutional neural network module]

The structure of the dual-stream graph convolutional neural network built in the present invention is described as follows.

In the GCN models of the present invention, the dimension of the graph is unchanged after a graph convolution layer, whereas a max pooling layer halves the dimension of the graph; this means that an N×N Laplacian matrix becomes N/2×N/2 after a max pooling layer. For the 120×120 adjacency matrix, a six-level pooling structure of 64-32-16-8-4-2-1 or a three-level pooling structure of 120-60-30-15 can be used, and the present invention chooses the former. For the 16×16 adjacency matrix, the present invention chooses the four-level pooling structure 16-8-4-2-1.

The input adjacency matrix gives a description of the graph structure. A coarsening operation on the adjacency matrix produces multilevel coarsened matrices, and according to these multilevel coarsened matrices the raw data are rearranged and fast pooling is applied: the raw data are reorganized into 3D data by the rearrangement relations returned by the coarsening, and these data are fed into the network for convolution.
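A much simplified stand-in for the Graclus-based coarsening and fast pooling described above is sketched below: nodes are greedily matched in pairs (here simply by the strongest remaining connection, whereas Graclus uses a normalized-cut-style criterion), unmatched nodes are kept as singletons, and pooling then takes the maximum of the signal inside each pair, which illustrates why each pooling level roughly halves the graph dimension; all names are illustrative.

```python
import numpy as np

def greedy_coarsen(A):
    """Greedily pair each node with its strongest unmatched neighbour.
    Returns the clusters (pairs or singletons) and the coarsened adjacency matrix."""
    n = A.shape[0]
    matched = np.zeros(n, dtype=bool)
    clusters = []
    for i in np.argsort(-A.sum(axis=1)):           # visit strongly connected nodes first
        if matched[i]:
            continue
        weights = np.where(matched, -np.inf, A[i])
        weights[i] = -np.inf                       # exclude self and matched nodes
        j = int(np.argmax(weights))
        if np.isfinite(weights[j]):
            clusters.append((int(i), j))
            matched[i] = matched[j] = True
        else:                                      # no unmatched neighbour left
            clusters.append((int(i),))
            matched[i] = True
    m = len(clusters)
    A_coarse = np.zeros((m, m))
    for a, ca in enumerate(clusters):
        for b, cb in enumerate(clusters):
            A_coarse[a, b] = A[np.ix_(ca, cb)].sum()
    np.fill_diagonal(A_coarse, 0)
    return clusters, A_coarse

def max_pool(x, clusters):
    """Pool a node signal by taking the maximum inside each cluster."""
    return np.array([x[list(c)].max() for c in clusters])

A = np.random.rand(16, 16)
A = (A + A.T) / 2
np.fill_diagonal(A, 0)
clusters, A_coarse = greedy_coarsen(A)             # 16 nodes -> about 8 clusters
x = np.random.randn(16)
print(len(clusters), max_pool(x, clusters).shape)
```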

The graph convolution layers learn the features of the graph data, and the pooling layers reduce the data dimensionality and aggregate similar nodes. After the two models have been trained, the trained models are used to test the test set and obtain the predicted classes.

Table 3 gives the prediction accuracy of model 1, model 2, and the combined dual-stream model on the test set. It can be seen that model 1 and model 2 each already achieve quite good accuracy, and that combining the two models yields an even better prediction, which shows that the two models have learned the discriminative information of different features.

[Table 3: prediction accuracy of model 1, model 2, and the dual-stream model on the test set]

图8为测试集的混淆矩阵,图9为测试集的ROC曲线图,包括对模型1,模型2及双流模型的评估,测试集评估最终模型的泛化能力,本发明得到测试集的三分类精度达到95.4%。FIG. 8 is the confusion matrix of the test set, and FIG. 9 is the ROC curve diagram of the test set, including the evaluation of Model 1, Model 2 and the two-stream model, the test set evaluates the generalization ability of the final model, and the present invention obtains three classifications of the test set The accuracy reaches 95.4%.

The training and testing of the dual-stream graph convolutional neural network model were both carried out in a Python 3.6 / TensorFlow 1.13.1 environment.

Finally, it should be noted that the above specific embodiments are intended only to illustrate, not to limit, the technical solution of this patent. Although the patent has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solution without departing from its spirit and scope, and all such modifications and substitutions fall within the scope of the claims of this patent.

Claims (9)

1. An anesthesia depth monitoring system based on a graph convolutional neural network, characterized in that the system comprises a data preprocessing module, a functional network construction module, a graph conversion module and a dual-stream graph convolutional neural network module;
the data preprocessing module is used to preprocess electrocorticography (ECoG) signals;
the functional network construction module is used to cut the sample data into a number of time segments belonging to different anesthesia stages, compute the phase lag index (PLI), compute one adjacency matrix per sample, and obtain network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
the graph conversion module is used to convert the adjacency matrix of each network topology graph sample into a dual graph; the converted dual graphs all share the same edge connections, the edge weight information is retained as node features of the dual graph, and the converted graph sample is thus a weighted graph constructed from the phase lag index;
the dual-stream graph convolutional neural network module is used to store the two models of the dual-stream graph convolutional neural network, model 1 extracting edge weight information and model 2 extracting node feature information; the class-wise prediction probabilities output by the two models are added to obtain the prediction result.

2. A computer-readable storage medium storing a computer program, characterized in that, when executed by a processor, the computer program implements an anesthesia depth monitoring method based on a graph convolutional neural network, the method comprising the steps of:
1) acquiring electrocorticography signals from a number of channels and preprocessing the raw signals;
2) intercepting a number of time segments from different anesthesia stages as data samples, computing the phase lag index (PLI), computing one adjacency matrix per sample, and obtaining network topology graph samples for the different anesthesia stages, the anesthesia stages comprising an awake stage, a moderate anesthesia stage and a deep anesthesia stage;
3) converting the adjacency matrix of the network topology graph into a dual graph, the converted dual graphs all sharing the same edge connections and the edge weight information being retained as node features of the dual graph, so that the converted graph sample is a weighted graph constructed from the phase lag index; at the same time the original node features are retained and a fully connected matrix is constructed to represent the topology of these node features;
4) constructing a dual-stream graph convolutional neural network, dividing the graph samples into two streams of graph data, and finding the two common adjacency matrices of the two streams; in the two streams of graph data, one stream is the weighted graph constructed from the phase lag index and the other stream is the fully connected adjacency matrix retaining the original node features;
5) feeding the two streams of graph data into the two models of the dual-stream graph convolutional neural network respectively, performing graph coarsening and fast pooling to reduce the data dimension and aggregate similar nodes, and outputting the predicted value of each anesthesia stage through a fully connected layer;
6) adding, class by class, the predicted values of the different anesthesia stages output by the two models of the dual-stream graph convolutional neural network, and outputting the class with the largest summed predicted value as the anesthesia stage prediction result.

3. The computer-readable storage medium according to claim 2, characterized in that in step 1) the electrocorticography signals are 16-channel ECoG signals recorded over the frontal-parietal region of the subject's brain, and the preprocessing comprises 0.1–100 Hz filtering, 50 Hz notch filtering and 200 Hz resampling.

4. The computer-readable storage medium according to claim 2, characterized in that the phase lag index PLI is computed as follows: let the signal sequences of the two channels be
x_1(t) and x_2(t), and compute the instantaneous phases with the Hilbert transform by forming the analytic signals

$$z_i(t) = x_i(t) + j\,\tilde{x}_i(t), \qquad i = 1, 2,$$

where $\tilde{x}_i(t)$ denotes the Hilbert transform of $x_i(t)$ and j is the imaginary unit:

$$\tilde{x}_i(t) = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{+\infty} \frac{x_i(\tau)}{t-\tau}\, d\tau,$$

where P.V. denotes the Cauchy principal value, t is time and τ is the integration variable;

the relative phase locking between the two channels is computed as

$$C(t) = \frac{z_1(t)\, z_2^{*}(t)}{\lvert z_1(t)\rvert\,\lvert z_2(t)\rvert},$$

where z_2*(t) is the complex conjugate of z_2(t);

the PLI value is then computed as

$$\mathrm{PLI} = \bigl\lvert \bigl\langle \operatorname{sign}\bigl(\operatorname{Im}\, C(t)\bigr) \bigr\rangle \bigr\rvert,$$

where ⟨·⟩ denotes the time average; the PLI ranges from 0 to 1, with 0 indicating no phase locking between the two channels and 1 indicating perfect phase coupling between the two channels (an illustrative computation sketch follows the claims).
5. The computer-readable storage medium according to claim 2, characterized in that in step 4) the dual-stream graph convolutional neural network is constructed with the spectral-domain graph convolution method (GCN), in which the graph Fourier transform extends convolution into the frequency domain of the graph and filters are used to filter the signal.

6. The computer-readable storage medium according to claim 2, characterized in that in step 5) both models of the dual-stream graph convolutional neural network adopt the spectral graph convolution method in which the convolution kernel is approximated by Chebyshev polynomials, together with the graph coarsening and fast pooling method based on the Graclus multi-level clustering algorithm.

7. The computer-readable storage medium according to claim 6, characterized in that the Graclus multi-level clustering algorithm uses a greedy algorithm to compute successively coarser versions of the graph so that a spectral clustering objective is minimized.

8. The computer-readable storage medium according to claim 2, characterized in that in step 2) the mean absolute value of the signal amplitude of each channel within each time segment is computed as the node feature.

9. The computer-readable storage medium according to claim 2, characterized in that the data samples in step 2) are randomly divided into a training set, a validation set and a test set in a ratio of 8:1:1; the training set is used to train the graph neural network model, the validation set is used to tune the hyperparameters of the model and make a preliminary assessment of its capability, and the test set is used to evaluate the generalization ability of the final model.
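For reference, the PLI computation described in claim 4 can be illustrated with the following minimal Python sketch; it uses SciPy's Hilbert transform and the sign-of-phase-difference form of the PLI (equivalent to the sign of the imaginary part of the relative phase-locking term C(t)), and the example signals are hypothetical, not data from the patent:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x1: np.ndarray, x2: np.ndarray) -> float:
    """PLI between two channel signals: |mean(sign(sin(phi1 - phi2)))|,
    with instantaneous phases taken from the Hilbert analytic signals.
    sign(sin(phi1 - phi2)) equals sign(Im(z1 * conj(z2))), matching claim 4."""
    phi1 = np.angle(hilbert(x1))
    phi2 = np.angle(hilbert(x2))
    return float(np.abs(np.mean(np.sign(np.sin(phi1 - phi2)))))

# Hypothetical two-channel example: a constant phase offset gives a PLI close to 1
t = np.linspace(0, 1, 200, endpoint=False)    # e.g. 1 s at 200 Hz
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)
print(phase_lag_index(a, b))
```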
CN202111346082.1A 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network Active CN113768474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111346082.1A CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111346082.1A CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Publications (2)

Publication Number Publication Date
CN113768474A CN113768474A (en) 2021-12-10
CN113768474B true CN113768474B (en) 2022-03-18

Family

ID=78873958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111346082.1A Active CN113768474B (en) 2021-11-15 2021-11-15 Anesthesia depth monitoring method and system based on graph convolution neural network

Country Status (1)

Country Link
CN (1) CN113768474B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114557708B (en) * 2022-02-21 2024-08-20 天津大学 Somatosensory stimulation consciousness detection device and method based on brain electricity dual-feature fusion
CN114662530A (en) * 2022-02-23 2022-06-24 北京航空航天大学杭州创新研究院 Sleep stage staging method based on time sequence signal convolution and multi-signal fusion
CN114931385A (en) * 2022-05-12 2022-08-23 西安邮电大学 An EEG channel selection method for fatigue driving based on PLI-Relief

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19829018B4 (en) * 1998-06-30 2005-01-05 Markus Lendl Method for setting a dose rate selectively variable metering device for anesthetic and anesthetic system thereto
US7373198B2 (en) * 2002-07-12 2008-05-13 Bionova Technologies Inc. Method and apparatus for the estimation of anesthetic depth using wavelet analysis of the electroencephalogram
WO2017001495A1 (en) * 2015-06-29 2017-01-05 Koninklijke Philips N.V. Optimal drug dosing based on current anesthesia practice
CN110680285A (en) * 2019-10-29 2020-01-14 张萍萍 Anesthesia degree monitoring device based on neural network
CN111091712A (en) * 2019-12-25 2020-05-01 浙江大学 A Traffic Flow Prediction Method Based on Recurrent Attention Dual Graph Convolutional Networks

Also Published As

Publication number Publication date
CN113768474A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN113768474B (en) Anesthesia depth monitoring method and system based on graph convolution neural network
CN110399857A (en) A EEG Emotion Recognition Method Based on Graph Convolutional Neural Network
CN114052735A (en) Electroencephalogram emotion recognition method and system based on depth field self-adaption
CN106503799A (en) Deep learning model and the application in brain status monitoring based on multiple dimensioned network
CN112869711A (en) Automatic sleep staging and migration method based on deep neural network
CN111544017A (en) Fatigue detection method, device and storage medium based on GPDC graph convolutional neural network
Kong et al. Causal graph convolutional neural network for emotion recognition
CN113128552A (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
CN110584596B (en) Sleep stage classification method based on dual-input convolutional neural network and application thereof
CN109299647B (en) Vehicle control-oriented multitask motor imagery electroencephalogram feature extraction and mode recognition method
CN115251909B (en) Method and device for evaluating hearing by electroencephalogram signals based on space-time convolutional neural network
CN108280414A (en) A kind of recognition methods of the Mental imagery EEG signals based on energy feature
CN115581467A (en) A recognition method of SSVEP based on time, frequency and time-frequency domain analysis and deep learning
CN117195099A (en) An EEG signal emotion recognition algorithm integrating multi-scale features
CN113128384A (en) Brain-computer interface software key technical method of stroke rehabilitation system based on deep learning
CN111513717A (en) A method for extracting the functional state of the brain
CN116439672A (en) Multi-resolution sleep stage classification method based on dynamic self-adaptive kernel graph neural network
CN113397562A (en) Sleep spindle wave detection method based on deep learning
CN115770044B (en) Emotion recognition method and device based on electroencephalogram phase amplitude coupling network
Wu et al. A multi-stream deep learning model for EEG-based depression identification
Shi et al. A brain topography graph embedded convolutional neural network for EEG-based motor imagery classification
CN117438068A (en) A diagnostic method for autism spectrum disorder combined with weighted learning network
CN114767130A (en) Multi-modal feature fusion electroencephalogram emotion recognition method based on multi-scale imaging
CN113317803B (en) Neural disease feature extraction method based on graph theory and machine learning
CN117158912B (en) Sleep stage detection system based on graph attention mechanism and space-time graph convolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant