CN109222972B - A deep learning-based fMRI whole-brain data classification method - Google Patents

A deep learning-based fMRI whole-brain data classification method

Info

Publication number
CN109222972B
Authority
CN
China
Prior art keywords
layer
fmri
channel
neural network
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811054390.5A
Other languages
Chinese (zh)
Other versions
CN109222972A (en)
Inventor
胡金龙
邝岳臻
董守斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811054390.5A priority Critical patent/CN109222972B/en
Publication of CN109222972A publication Critical patent/CN109222972A/en
Application granted granted Critical
Publication of CN109222972B publication Critical patent/CN109222972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/05 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B 5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033 - Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B 5/004 - Features or image-related aspects of imaging apparatus adapted for image acquisition of a particular organ or body part
    • A61B 5/0042 - Features or image-related aspects of imaging apparatus adapted for image acquisition of the brain
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 - Classification of physiological signals or data involving training the classification device
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2576/00 - Medical imaging apparatus involving image processing or analysis
    • A61B 2576/02 - Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A61B 2576/026 - Medical imaging apparatus involving image processing or analysis specially adapted for the brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Psychiatry (AREA)
  • Neurology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based fMRI whole-brain data classification method, comprising: (1) acquiring fMRI data, preprocessing it, and obtaining the corresponding labels; (2) aggregating the fMRI data; (3) slicing the averaged three-dimensional image along the orthogonal x, y, and z axes; (4) converting each of the three resulting sets of two-dimensional images into one frame of multi-channel two-dimensional image; (5) constructing a hybrid multi-channel convolutional neural network model for fMRI data classification; (6) processing the training participants' fMRI data through the preceding steps and training the model on the resulting images and labels to obtain the parameters of the hybrid convolutional neural network model for fMRI data classification; (7) processing the fMRI data to be classified and feeding the resulting three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification. The invention effectively improves the accuracy of fMRI data classification while reducing the computational cost of model training and classification.

Figure 201811054390

Description

A deep learning-based fMRI whole-brain data classification method

Technical Field

The invention relates to the field of data classification, and in particular to a deep learning-based fMRI whole-brain data classification method.

Background Art

Functional magnetic resonance imaging (fMRI) is a non-invasive means of measuring brain functional activity. fMRI data reflect the blood-oxygen level of the human brain, and fMRI has been widely applied in cognitive science, developmental science, the study of mental disorders, and other fields.

Deep learning is a family of machine-learning methods for representation learning of data. Deep learning models such as deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs) have been successfully applied in computer vision, speech recognition, natural language processing, and other fields. Deep learning models have also been used to classify fMRI whole-brain data, but for such complex and dynamic data, how to use deep learning to improve classification accuracy while keeping the computational load small remains an urgent problem.

Summary of the Invention

The purpose of the present invention is to provide a deep learning-based fMRI whole-brain data classification method. Compared with the prior art, the invention learns fMRI whole-brain feature information more effectively while requiring less computation for model training.

The object of the present invention is achieved through the following technical solution:

A deep learning-based fMRI whole-brain data classification method, comprising the following steps:

(1) Acquire the fMRI experimental data of the trial participants, preprocess the data, and obtain the labels corresponding to the fMRI data;

(2) Aggregate the fMRI whole-brain data of each trial participant;

(3) Slice the aggregated average three-dimensional image along the orthogonal x, y, and z axes to obtain three sets of two-dimensional images;

(4) Convert each of the three sets of two-dimensional images into one frame of multi-channel two-dimensional image;

(5) Construct a hybrid multi-channel convolutional neural network model for fMRI whole-brain data classification;

(6) Process the fMRI data of the participants reserved for model training through steps (1)-(4), feed the resulting three frames of multi-channel two-dimensional images and their classification labels into the hybrid convolutional neural network for training, and obtain the network parameters, yielding the trained hybrid convolutional neural network model for fMRI whole-brain data classification;

(7) Process the fMRI data to be classified through steps (1)-(4), and input the resulting three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification.

Specifically, the preprocessing in step (1) includes head-motion correction, slice-timing correction, spatial normalization, and spatial smoothing. A label refers to an attribute of a trial participant, or to a behavioral attribute of the participant during the experiment (e.g., a particular movement performed by the participant).

Specifically, in step (2), if the fMRI whole-brain data are resting-state fMRI data, the voxels at corresponding positions of the N acquired three-dimensional image frames (dimX × dimY × dimZ) are arithmetically averaged to obtain one frame of average three-dimensional image.

Specifically, in step (2), if the fMRI whole-brain data are task-state fMRI data, the percent signal change (PSC) method is applied to the N three-dimensional image frames acquired during the trial to compute, for each voxel, its average change relative to the resting period, converting them into one frame of average three-dimensional image.

Further, the average PSC of each voxel is computed as:

p = \frac{1}{N} \sum_{i=1}^{N} \frac{y_i - \bar{y}}{\bar{y}} \times 100\%

where N is the number of three-dimensional image frames acquired during the trial, y_i is the value of the voxel in the i-th frame, \bar{y} is the mean value of the voxel during the resting period (a rest phase in which the participant receives no experimental stimulus), and p is the resulting average change value of the voxel.

The size of the three-dimensional image is dimX along the x axis, dimY along the y axis, and dimZ along the z axis; the N three-dimensional image frames within one trial share the same label.
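The aggregation step can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the patent; the function name `aggregate_volumes` and the array layout (frames stacked on the first axis) are assumptions.

```python
import numpy as np

def aggregate_volumes(frames, rest_mean=None):
    """Aggregate N fMRI frames of shape (N, dimX, dimY, dimZ) into one volume.

    Resting-state data (rest_mean is None): arithmetic mean over the frames.
    Task-state data: average percent signal change (PSC) of each voxel
    relative to its resting-period mean rest_mean.
    """
    frames = np.asarray(frames, dtype=float)
    if rest_mean is None:
        return frames.mean(axis=0)          # resting-state: plain voxel-wise average
    rest_mean = np.asarray(rest_mean, dtype=float)
    # task-state: mean over frames of (y_i - ybar) / ybar, expressed in percent
    return ((frames - rest_mean) / rest_mean).mean(axis=0) * 100.0
```

For a resting-state scan, `aggregate_volumes(frames)` returns the voxel-wise mean; for a task scan, the resting-period mean volume is passed as `rest_mean`.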

Specifically, the slicing of the average three-dimensional image in step (3) proceeds as follows: slice at each unit length along the x axis to obtain dimX two-dimensional images in the y-z plane, each of size dimY × dimZ; slice at each unit length along the y axis to obtain dimY two-dimensional images in the x-z plane, each of size dimX × dimZ; slice at each unit length along the z axis to obtain dimZ two-dimensional images in the x-y plane, each of size dimX × dimY. Grouping the images of the same plane together yields three sets of two-dimensional images.

Further, step (4) specifically comprises: following the notion of channels in a convolutional neural network, and treating the two-dimensional image at each slice position as one channel, the dimX images in the y-z plane are converted into one frame of two-dimensional image with dimX channels that can be fed into a convolutional neural network; likewise, the dimY images in the x-z plane are converted into one frame with dimY channels, and the dimZ images in the x-y plane into one frame with dimZ channels.
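Steps (3) and (4) amount to axis permutations of the volume: slicing along an axis and stacking the slices as channels is the same as moving that axis to the channel position. A minimal NumPy sketch (the function name and the channels-last layout are assumptions, not from the patent):

```python
import numpy as np

def volume_to_multichannel_images(vol):
    """Turn a (dimX, dimY, dimZ) volume into three channels-last 2D images."""
    yz = np.transpose(vol, (1, 2, 0))  # dimX slices in the y-z plane -> (dimY, dimZ, dimX)
    xz = np.transpose(vol, (0, 2, 1))  # dimY slices in the x-z plane -> (dimX, dimZ, dimY)
    xy = vol                           # dimZ slices in the x-y plane -> (dimX, dimY, dimZ)
    return yz, xz, xy

vol = np.zeros((91, 109, 91))          # the volume size used in the embodiment below
yz, xz, xy = volume_to_multichannel_images(vol)
print(yz.shape, xz.shape, xy.shape)    # (109, 91, 91) (91, 91, 109) (91, 109, 91)
```

Because `np.transpose` returns a view, no voxel data are copied; each of the three "images" is just a reindexed view of the same volume.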

Specifically, from input to output, the hybrid multi-channel convolutional neural network model comprises three parallel multi-channel two-dimensional convolutional neural networks followed by a fully connected neural network. The input of each two-dimensional convolutional neural network corresponds to one type of multi-channel two-dimensional image; the outputs of the three networks are concatenated into a one-dimensional feature vector, which is fed into the fully connected neural network, and the model finally outputs the predicted probability of each classification label.

Further, each multi-channel two-dimensional convolutional neural network comprises, in order, an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer, and a flattening layer (Flatten). The first convolutional layer has 32 kernels of size 3 × 3; the second has 64 kernels of size 3 × 3. Both convolutional layers use the LeakyReLU activation function. Both pooling layers use max pooling with a 2 × 2 window. Both Dropout layers retain, with probability 0.25, the result passed from the preceding layer. The flattening layer flattens the convolutional output into a one-dimensional result. The one-dimensional outputs of the three networks are concatenated into a one-dimensional feature vector by a merge layer (Merge) and fed into the fully connected neural network.

Further, the fully connected neural network comprises, in order, a first fully connected layer (Dense_1), a normalization layer (BatchNormalization), a Dropout layer, and a second fully connected layer (Dense_2). The first fully connected layer has 625 neurons; the number of neurons in the second is determined by the number of classes in the classification task. The first fully connected layer uses the LeakyReLU activation function; the second uses the Softmax function. The normalization layer re-normalizes the output of the preceding layer so that its mean is close to 0 and its standard deviation close to 1. The Dropout layer retains, with probability 0.5, the result passed from the preceding layer. The output of the fully connected neural network is a set of probability values, one per classification label.
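The layer sequence described above maps directly onto the Keras functional API. The sketch below is an illustrative reconstruction, not the patent's code: it assumes TensorFlow/Keras, channels-last inputs, and that the stated Dropout probabilities denote Keras-style drop rates.

```python
from tensorflow.keras import layers, models

def conv_branch(input_shape):
    """One multi-channel 2D CNN branch: Conv-Pool-Dropout twice, then Flatten."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, (3, 3))(inp)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Conv2D(64, (3, 3))(x)
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Dropout(0.25)(x)
    return inp, layers.Flatten()(x)

def build_hybrid_model(num_classes, input_shapes):
    """Three parallel branches merged and fed to the fully connected head."""
    inputs, features = zip(*(conv_branch(s) for s in input_shapes))
    merged = layers.Concatenate()(list(features))     # the Merge layer
    x = layers.Dense(625)(merged)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs=list(inputs), outputs=out)
```

For the embodiment below, `input_shapes` would be `((109, 91, 91), (91, 91, 109), (91, 109, 91))` and `num_classes` would be 5.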

Compared with the prior art, the present invention has the following beneficial effects:

The invention extracts features with multi-channel two-dimensional convolutions on three orthogonal planes. Given the high dimensionality of fMRI data, fast multi-channel two-dimensional convolutions on the three orthogonal planes allow the model to learn sufficient features while avoiding computationally expensive three-dimensional convolutions, reducing the amount of computation and improving both the classification accuracy and the classification speed on fMRI whole-brain data.

Brief Description of the Drawings

Fig. 1 is a flowchart of the deep learning-based fMRI whole-brain data classification method;

Fig. 2 is a schematic diagram of the structure of the hybrid convolutional neural network.

Detailed Description

The present invention is described in further detail below with reference to the embodiment and the accompanying drawings, but the implementations of the present invention are not limited thereto.

Embodiment

In this embodiment, a motor task from task-based fMRI is selected, and fMRI data of five actions (moving the right-hand fingers, moving the left-hand fingers, squeezing the right-foot toes, squeezing the left-foot toes, and moving the tongue) are classified.

Fig. 1 shows the flowchart of the deep learning-based fMRI whole-brain data classification method; the specific steps are:

(1) Acquire the fMRI experimental data of the trial participants, preprocess the data, and obtain the labels corresponding to the fMRI data;

The preprocessing includes head-motion correction, slice-timing correction, spatial normalization, and spatial smoothing;

The labels refer to the action categories corresponding to the fMRI data: moving the right-hand fingers, moving the left-hand fingers, squeezing the right-foot toes, squeezing the left-foot toes, and moving the tongue.

(2) Aggregate the fMRI whole-brain data of each trial participant;

The fMRI whole-brain data in this embodiment are task-state fMRI data; therefore, the percent signal change (PSC) method is applied to the N three-dimensional image frames acquired during the trial to compute, for each voxel, its average change relative to the resting period, converting them into one frame of average three-dimensional image.

The average PSC of each voxel is computed as:

p = \frac{1}{N} \sum_{i=1}^{N} \frac{y_i - \bar{y}}{\bar{y}} \times 100\%

where N is the number of three-dimensional image frames acquired during the trial, y_i is the value of the voxel in the i-th frame, \bar{y} is the mean value of the voxel during the resting period (a rest phase in which the participant receives no experimental stimulus), and p is the resulting average change value of the voxel.

The size of the three-dimensional image is 91 along the x axis, 109 along the y axis, and 91 along the z axis; the N three-dimensional image frames within one action trial share the same action category label.

(3) Slice the aggregated average three-dimensional image along the orthogonal x, y, and z axes to obtain three sets of two-dimensional images;

The specific slicing process is: slice along the x axis to obtain 91 two-dimensional images in the y-z plane, each of size 109 × 91; slice along the y axis to obtain 109 two-dimensional images in the x-z plane, each of size 91 × 91; slice along the z axis to obtain 91 two-dimensional images in the x-y plane, each of size 91 × 109. A total of three sets of two-dimensional images are thus obtained.

(4) Convert each of the three sets of two-dimensional images into one frame of multi-channel two-dimensional image;

The specific conversion is: the 91 two-dimensional images in the y-z plane are converted into one frame of two-dimensional image with 91 channels; the 109 images in the x-z plane into one frame with 109 channels; and the 91 images in the x-y plane into one frame with 91 channels.

(5) Construct the hybrid multi-channel convolutional neural network model for fMRI whole-brain data classification;

Specifically, the structure of the hybrid multi-channel convolutional neural network model is shown in Fig. 2: from input to output, it comprises three parallel multi-channel two-dimensional convolutional neural networks followed by a fully connected neural network. The input of each two-dimensional convolutional neural network corresponds to one type of multi-channel two-dimensional image; the outputs of the three networks are concatenated into a one-dimensional feature vector, which is fed into the fully connected neural network, and the model finally outputs the predicted probability of each classification label.

Each multi-channel two-dimensional convolutional neural network comprises, in order, an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer, and a flattening layer (Flatten). The first convolutional layer has 32 kernels of size 3 × 3; the second has 64 kernels of size 3 × 3. Both convolutional layers use the LeakyReLU activation function. Both pooling layers use max pooling with a 2 × 2 window. Both Dropout layers retain, with probability 0.25, the result passed from the preceding layer. The flattening layer flattens the convolutional output into a one-dimensional result. The one-dimensional outputs of the three networks are concatenated into a one-dimensional feature vector by a merge layer (Merge) and fed into the fully connected neural network.

The fully connected neural network comprises, in order, a first fully connected layer (Dense_1), a normalization layer (BatchNormalization), a Dropout layer, and a second fully connected layer (Dense_2). The first fully connected layer has 625 neurons; the number of neurons in the second is determined by the number of classes in the classification task. The first fully connected layer uses the LeakyReLU activation function; the second uses the Softmax function. The normalization layer re-normalizes the output of the preceding layer so that its mean is close to 0 and its standard deviation close to 1. The Dropout layer retains, with probability 0.5, the result passed from the preceding layer. The output of the fully connected neural network is a set of probability values, one per classification label.

(6) Process the fMRI data of the participants reserved for model training through steps (1)-(4), feed the resulting three frames of multi-channel two-dimensional images and their classification labels into the hybrid convolutional neural network for training, and obtain the network parameters, yielding the trained hybrid convolutional neural network model for fMRI whole-brain data classification;

(7) Process the obtained fMRI data through steps (1)-(4), and input the resulting three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification.

The above embodiment is a preferred implementation of the present invention, but the implementations of the present invention are not limited to it. Any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall fall within the protection scope of the present invention.

Claims (9)

1. A deep learning-based fMRI whole-brain data classification method, characterized in that the method comprises the following steps:
(1) acquiring fMRI data of trial participants, preprocessing the fMRI data, and obtaining the labels corresponding to the fMRI data;
(2) aggregating the fMRI data of each trial participant;
(3) slicing the aggregated average three-dimensional image along the orthogonal x, y, and z axes to obtain three sets of two-dimensional images;
(4) converting each of the three sets of two-dimensional images into one frame of multi-channel two-dimensional image;
(5) constructing a hybrid multi-channel convolutional neural network model for fMRI data classification;
(6) processing the fMRI data of the participants used for model training through steps (1)-(4), inputting the resulting three frames of multi-channel two-dimensional images and their labels into the hybrid multi-channel convolutional neural network model for training, and obtaining the parameters of the model, yielding the hybrid multi-channel convolutional neural network model for fMRI data classification;
(7) processing the fMRI data of the participants to be classified through steps (1)-(4), and inputting the resulting three frames of multi-channel two-dimensional images into the hybrid multi-channel convolutional neural network model trained in step (6) for classification.
2.根据权利要求1所述的一种基于深度学习的fMRI全脑数据分类方法,其特征在于,所述步骤(1)中的预处理包括头部移动校正、时间层校正、空间标准化和空间平滑;所述标签是指试验参与者的属性。2. A deep learning-based fMRI whole-brain data classification method according to claim 1, wherein the preprocessing in the step (1) comprises head movement correction, temporal layer correction, spatial standardization and spatial Smooth; the labels refer to attributes of trial participants. 3.根据权利要求1所述的一种基于深度学习的fMRI全脑数据分类方法,其特征在于,在步骤(2)中,如果fMRI数据为静息态fMRI数据,则对获得的N帧三维图像(dimX×dimY×dimZ)对应位置的体素点进行算术平均,得到一帧平均三维图像;3. a kind of fMRI whole brain data classification method based on deep learning according to claim 1, is characterized in that, in step (2), if fMRI data is resting state fMRI data, then to the N frame three-dimensional obtained The voxel points at the corresponding positions of the image (dimX×dimY×dimZ) are arithmetically averaged to obtain an average three-dimensional image; 如果fMRI全脑数据是任务态fMRI数据,则对试验过程内的N帧三维图像采用信号变化百分比方法,来计算每个体素点在试验过程中相对静息时刻的平均变化值,转换成一帧平均三维图像。If the fMRI whole brain data is task-state fMRI data, the signal change percentage method is used for N frames of 3D images during the experiment to calculate the average change value of each voxel point relative to the resting moment during the experiment, and convert it into a frame average 3D image. 4.根据权利要求3所述的一种基于深度学习的fMRI全脑数据分类方法,其特征在于,每个体素点的平均信号变化百分比的计算公式为:4. a kind of fMRI whole-brain data classification method based on deep learning according to claim 3, is characterized in that, the calculation formula of the average signal change percentage of each voxel point is:
p = (1/N) × Σ_{i=1}^{N} (y_i − ȳ)/ȳ × 100%

where N is the number of frames of three-dimensional images during the trial, y_i is the value of the voxel in the i-th frame, ȳ is the mean value of the voxel during the resting period, the resting period being chosen as the rest phase in which the trial participant receives no experimental stimulus, and p is the resulting average change of that voxel;
the size of the three-dimensional image is dimX along the x-axis, dimY along the y-axis and dimZ along the z-axis; the N frames of three-dimensional images within the trial share the same label.
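The percent-signal-change aggregation of claims 3-4 can be sketched in NumPy; the separate rest-phase frame array and the toy dimensions are assumptions for illustration:

```python
import numpy as np

def percent_signal_change(task_frames, rest_frames):
    """Average percent signal change per voxel, p = (1/N) * sum_i (y_i - ybar)/ybar * 100,
    where ybar is the voxel's mean over the no-stimulus rest phase."""
    y_bar = rest_frames.mean(axis=0)                        # voxel-wise resting mean
    return ((task_frames - y_bar) / y_bar).mean(axis=0) * 100.0

# Toy check (invented numbers): a voxel whose signal doubles during the task
# relative to rest has an average change of 100 %.
rest = np.ones((5, 2, 2, 2))          # rest frames, all 1.0
task = 2.0 * np.ones((8, 2, 2, 2))    # N task frames, all 2.0
p = percent_signal_change(task, rest)
print(p[0, 0, 0])  # 100.0
```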
5. The deep learning-based fMRI whole-brain data classification method according to claim 1, characterized in that the specific operation of slicing the averaged three-dimensional image in step (3) is: slicing at each unit length along the x-axis to obtain dimX two-dimensional images in the y-z plane, each of size dimY × dimZ; slicing at each unit length along the y-axis to obtain dimY two-dimensional images in the x-z plane, each of size dimX × dimZ; slicing at each unit length along the z-axis to obtain dimZ two-dimensional images in the x-y plane, each of size dimX × dimY; taking the two-dimensional images in the same plane as one group, three groups of two-dimensional images are finally obtained.
6. The deep learning-based fMRI whole-brain data classification method according to claim 1, characterized in that step (4) is specifically: according to the concept of channels in convolutional neural networks, for the dimX two-dimensional images in the y-z plane, treating the two-dimensional image at each slice position as one channel and converting them into one frame of two-dimensional image with dimX channels that can be fed into a convolutional neural network; for the dimY two-dimensional images in the x-z plane, treating the two-dimensional image at each slice position as one channel and converting them into one frame of two-dimensional image with dimY channels; for the dimZ two-dimensional images in the x-y plane, likewise treating the two-dimensional image at each slice position as one channel and converting them into one frame of two-dimensional image with dimZ channels.
7. The deep learning-based fMRI whole-brain data classification method according to claim 1, characterized in that the hybrid multi-channel convolutional neural network model comprises, from input to output, three parallel multi-channel two-dimensional convolutional neural networks and one fully connected neural network; the input of each multi-channel two-dimensional convolutional neural network corresponds to one kind of multi-channel two-dimensional image, the outputs of the three multi-channel two-dimensional convolutional neural networks are concatenated in series into a one-dimensional feature and fed into the fully connected neural network, which finally outputs the predicted probability of each classification label.
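The slicing of claim 5 and the channel conversion of claim 6 amount to treating each axis of the volume as a channel axis; a minimal NumPy sketch with an assumed illustrative volume size (dimX, dimY, dimZ) = (4, 5, 6):

```python
import numpy as np

# Assumed illustrative volume size: (dimX, dimY, dimZ) = (4, 5, 6).
vol = np.random.rand(4, 5, 6)    # one averaged three-dimensional image

# Steps (3)-(4): slicing along an axis and stacking the slices as channels is,
# in a channels-first layout, just a transpose that moves that axis to the front.
img_x = vol                      # dimX channels, each a dimY x dimZ slice (y-z plane)
img_y = vol.transpose(1, 0, 2)   # dimY channels, each a dimX x dimZ slice (x-z plane)
img_z = vol.transpose(2, 0, 1)   # dimZ channels, each a dimX x dimY slice (x-y plane)

print(img_x.shape, img_y.shape, img_z.shape)  # (4, 5, 6) (5, 4, 6) (6, 4, 5)
```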
8. The deep learning-based fMRI whole-brain data classification method according to claim 7, characterized in that the multi-channel two-dimensional convolutional neural network comprises, in order, an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer and a flattening layer (Flatten); the first convolutional layer has 32 convolution kernels of size 3×3; the second convolutional layer has 64 convolution kernels of size 3×3; both the first and the second convolutional layer use the LeakyReLU function as activation function; both the first and the second pooling layer use the max-pooling operation with a 2×2 pooling window; both the first and the second Dropout layer retain the results passed from the previous layer with a probability of 0.25; the flattening layer flattens the result of the second convolutional layer into a one-dimensional result; the one-dimensional results output by the three multi-channel two-dimensional convolutional neural networks are concatenated into a one-dimensional feature through a merge layer (Merge) and fed into the fully connected neural network.
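Under the layer settings of claim 8, the length of the flattened one-dimensional result of a single branch can be worked out by hand; the sketch below assumes unpadded ('valid') 3×3 convolutions and stride-2 pooling, which the claim leaves open, and an illustrative 64×64 slice size:

```python
def branch_feature_length(h, w, second_layer_filters=64):
    """Flattened output length of one convolutional branch from claim 8,
    assuming unpadded ('valid') 3x3 convolutions and 2x2 max-pooling with
    stride 2 -- padding and stride are assumptions, not stated in the claim."""
    def conv3x3(h, w):   # a valid 3x3 convolution shrinks each side by 2
        return h - 2, w - 2
    def pool2x2(h, w):   # 2x2 max-pooling halves each side (floor division)
        return h // 2, w // 2

    h, w = conv3x3(h, w)   # Conv2d_1: 32 kernels, 3x3
    h, w = pool2x2(h, w)   # MaxPooling2d_1
    h, w = conv3x3(h, w)   # Conv2d_2: 64 kernels, 3x3
    h, w = pool2x2(h, w)   # MaxPooling2d_2
    return h * w * second_layer_filters  # Flatten

# For an illustrative 64x64 slice: 64 -> 62 -> 31 -> 29 -> 14 per side.
print(branch_feature_length(64, 64))  # 12544 (= 14 * 14 * 64)
```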
9. The deep learning-based fMRI whole-brain data classification method according to claim 7, characterized in that the fully connected neural network comprises, in order, a first fully connected layer (Dense_1), a normalization layer (BatchNormalization), a Dropout layer and a second fully connected layer (Dense_2); the first fully connected layer has 625 neurons; the number of neurons in the second fully connected layer is determined by the number of classes of the classification task; the first fully connected layer uses the LeakyReLU function as activation function; the second fully connected layer uses the Softmax function as activation function; the normalization layer re-normalizes the results passed from the previous layer so that their mean is close to 0 and their standard deviation is close to 1; the Dropout layer retains the results passed from the previous layer with a probability of 0.5; the output of the fully connected neural network is multiple probability values, representing the predicted probability of each classification label.
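The Softmax output of the second fully connected layer in claim 9 can be illustrated with a small NumPy example (the three-label scores below are invented for illustration):

```python
import numpy as np

def softmax(z):
    """Softmax activation of the second fully connected layer (Dense_2):
    maps raw scores to one probability per classification label."""
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # invented logits for a 3-class task
probs = softmax(scores)
print(probs.sum())    # close to 1.0 -- the label probabilities sum to one
print(probs.argmax())  # the largest score receives the highest probability
```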
CN201811054390.5A 2018-09-11 2018-09-11 A deep learning-based fMRI whole-brain data classification method Active CN109222972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811054390.5A CN109222972B (en) 2018-09-11 2018-09-11 A deep learning-based fMRI whole-brain data classification method

Publications (2)

Publication Number Publication Date
CN109222972A CN109222972A (en) 2019-01-18
CN109222972B true CN109222972B (en) 2020-09-22

Family

ID=65067767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811054390.5A Active CN109222972B (en) 2018-09-11 2018-09-11 A deep learning-based fMRI whole-brain data classification method

Country Status (1)

Country Link
CN (1) CN109222972B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816037B (en) * 2019-01-31 2021-05-25 北京字节跳动网络技术有限公司 Method and device for extracting feature map of image
CN110246566A (en) * 2019-04-24 2019-09-17 中南大学湘雅二医院 Method, system and storage medium are determined based on the conduct disorder of convolutional neural networks
CN110192860B (en) * 2019-05-06 2022-10-11 复旦大学 Brain imaging intelligent test analysis method and system for network information cognition
CN110197729A (en) * 2019-05-20 2019-09-03 华南理工大学 Tranquillization state fMRI data classification method and device based on deep learning
CN110322969A (en) * 2019-07-03 2019-10-11 北京工业大学 A kind of fMRI data classification method based on width study
CN110604572A (en) * 2019-10-08 2019-12-24 江苏海洋大学 Brain Activity State Recognition Method Based on Human Brain Feature Atlas
CN110916661B (en) * 2019-11-21 2021-06-08 大连理工大学 ICA-CNN classified fMRI intracerebral data time pre-filtering and amplifying method
CN110870770B (en) * 2019-11-21 2021-05-11 大连理工大学 A smooth augmentation method for fMRI spatial activation maps for ICA-CNN classification
CN111046918B (en) * 2019-11-21 2022-09-20 大连理工大学 ICA-CNN classified fMRI data space pre-smoothing and broadening method
US20210174939A1 (en) * 2019-12-09 2021-06-10 Tencent America LLC Deep learning system for detecting acute intracranial hemorrhage in non-contrast head ct images
CN110992351B (en) * 2019-12-12 2022-08-16 南京邮电大学 sMRI image classification method and device based on multi-input convolution neural network
CN111709787B (en) * 2020-06-18 2023-08-22 抖音视界有限公司 Method, device, electronic equipment and medium for generating user retention time
CN111728590A (en) * 2020-06-30 2020-10-02 中国人民解放军国防科技大学 Individual cognitive ability prediction method and system based on dynamic functional connectivity
CN113096096B (en) * 2021-04-13 2023-04-18 中山市华南理工大学现代产业技术研究院 Microscopic image bone marrow cell counting method and system fusing morphological characteristics
CN113313673B (en) * 2021-05-08 2022-05-20 华中科技大学 TB-level cranial nerve fiber data reduction method and system based on deep learning
CN113313232B (en) * 2021-05-19 2023-02-14 华南理工大学 A Classification Method for Functional Brain Networks Based on Pre-training and Graph Neural Networks
CN119205731B (en) * 2024-11-22 2025-03-04 江西财经大学 Human brain cognitive text description generation method and system for natural image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067395A (en) * 2017-04-26 2017-08-18 中国人民解放军总医院 A kind of nuclear magnetic resonance image processing unit and method based on convolutional neural networks
CN107145727A (en) * 2017-04-26 2017-09-08 中国人民解放军总医院 A medical image processing device and method using convolutional neural network
CN107424145A (en) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 The dividing method of nuclear magnetic resonance image based on three-dimensional full convolutional neural networks
CN107563434A (en) * 2017-08-30 2018-01-09 山东大学 A kind of brain MRI image sorting technique based on Three dimensional convolution neutral net, device
CN107767378A (en) * 2017-11-13 2018-03-06 浙江中医药大学 The multi-modal Magnetic Resonance Image Segmentation methods of GBM based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Modeling Task fMRI Data Via Deep Convolutional Autoencoder; Heng Huang et al.; IEEE Transactions on Medical Imaging; June 2018; Vol. 37, No. 7; 1551-1561 *
State-space model with deep learning for functional dynamics estimation in resting-state fMRI; Heung-Il Suk et al.; NeuroImage; January 2016; Vol. 129; 292-307 *
fMRI data classification method based on convolutional neural networks; Zhang Zhaochen, Ji Junzhong; Pattern Recognition and Artificial Intelligence; June 2017; Vol. 30, No. 6; 549-558 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant