CN109222972B - fMRI whole brain data classification method based on deep learning - Google Patents

fMRI whole brain data classification method based on deep learning

Info

Publication number
CN109222972B
CN109222972B (application number CN201811054390.5A)
Authority
CN
China
Prior art keywords
layer
fmri
dimensional
neural network
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811054390.5A
Other languages
Chinese (zh)
Other versions
CN109222972A (en)
Inventor
胡金龙
邝岳臻
董守斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811054390.5A priority Critical patent/CN109222972B/en
Publication of CN109222972A publication Critical patent/CN109222972A/en
Application granted granted Critical
Publication of CN109222972B publication Critical patent/CN109222972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • A61B2576/026 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part for the brain

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Neurology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an fMRI whole brain data classification method based on deep learning, which comprises the following steps: (1) acquiring fMRI data, preprocessing the data, and obtaining the corresponding labels; (2) aggregating the fMRI data; (3) slicing the average three-dimensional image in the orthogonal x, y and z axis directions respectively; (4) converting each of the three groups of two-dimensional images into one frame of a multi-channel two-dimensional image; (5) constructing a hybrid multi-channel convolutional neural network model for fMRI data classification; (6) processing the training fMRI data through the preceding steps, training the model with the resulting multi-channel images and their labels as input data, and keeping the learned parameters as the hybrid convolutional neural network model for fMRI data classification; (7) processing the fMRI data to be classified in the same way and inputting the resulting three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification. The method can effectively improve the accuracy of fMRI data classification and reduce the amount of computation required for model training and classification.

Description

fMRI whole brain data classification method based on deep learning
Technical Field
The invention relates to the field of data classification, in particular to an fMRI whole brain data classification method based on deep learning.
Background
Functional magnetic resonance imaging (fMRI) is a non-invasive measurement of brain functional activity. fMRI data reflect the blood oxygen content of the human brain, and fMRI is widely used in fields such as cognitive science, developmental science, and mental illness.
Deep learning is a family of representation-learning methods in machine learning, and deep learning models such as the deep neural network (DNN), the convolutional neural network (CNN), and the recurrent neural network (RNN) have been successfully applied to computer vision, speech recognition, natural language processing, and other fields. Deep learning models have been used to classify fMRI whole brain data, but for fMRI whole brain data with complex dynamics, improving classification accuracy with deep learning while keeping the amount of computation small remains an urgent problem to be solved.
Disclosure of Invention
The invention aims to provide an fMRI whole brain data classification method based on deep learning. Compared with the prior art, the method learns whole-brain feature information from fMRI data more effectively while training the model with a smaller amount of computation.
The purpose of the invention can be realized by the following technical scheme:
an fMRI whole brain data classification method based on deep learning specifically comprises the following steps:
(1) obtaining fMRI test data of a test participant, preprocessing the fMRI test data, and simultaneously obtaining a label corresponding to the fMRI data;
(2) aggregating fMRI whole brain data for each test participant;
(3) slicing the average three-dimensional image obtained after aggregation in the orthogonal x, y and z axis directions respectively to obtain three groups of two-dimensional images;
(4) converting the obtained three groups of two-dimensional images into a frame of multi-channel two-dimensional image respectively;
(5) constructing a mixed multi-channel convolution neural network model for fMRI whole brain data classification;
(6) processing the fMRI data of the participants in the model-training partition through steps (1) to (4), inputting the obtained three frames of multi-channel two-dimensional images together with their classification labels as input data into the hybrid convolutional neural network for model training to obtain the parameters of the hybrid convolutional neural network, and using these parameters as the hybrid convolutional neural network model for fMRI whole brain data classification;
(7) sequentially carrying out the processing of steps (1) to (4) on the fMRI data to be classified, and inputting the obtained three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification.
Specifically, the preprocessing in step (1) includes head motion correction, slice timing (temporal layer) correction, spatial normalization, spatial smoothing, and the like; the label refers to an attribute of the test participant, or a behavioral attribute of the participant during the test (for example, a specific action performed by the participant).
Specifically, in step (2), if the fMRI whole brain data is resting-state fMRI data, the voxels at corresponding positions in the N acquired three-dimensional images (dimX × dimY × dimZ) are arithmetically averaged to obtain one frame of average three-dimensional image.
Specifically, in step (2), if the fMRI whole brain data is task-state fMRI data, a percent signal change (PSC) method is applied to the N frames of three-dimensional images acquired during the test to calculate, for each voxel, the average change relative to the resting baseline, and the result is converted into one frame of average three-dimensional image.
Further, the average PSC per voxel point is calculated by:
p = (1/N) Σ_{i=1}^{N} [(y_i − ȳ) / ȳ] × 100%

where N represents the number of frames of three-dimensional images acquired during the test, y_i represents the value of the voxel point in the i-th frame image, ȳ represents the average value of that voxel point over the resting period (the resting period is taken from the rest stage in which the test participant receives no test stimulation), and p represents the calculated average change value of the voxel point.
The size of the three-dimensional image is dimX along the x axis, dimY along the y axis, and dimZ along the z axis; the N frames of three-dimensional images acquired during the test all share the same label.
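For illustration only, the following NumPy sketch shows one possible implementation of the aggregation in step (2); the array layout (frames first), the helper name aggregate_volume, the boolean rest-frame mask, and the small epsilon guard are assumptions made for this example and are not part of the patent text.

```python
import numpy as np

def aggregate_volume(frames, rest_mask=None):
    """Aggregate N fMRI frames of shape (N, dimX, dimY, dimZ) into one volume.

    rest_mask is None  -> resting-state data: voxel-wise arithmetic mean.
    rest_mask given    -> task-state data: average percent signal change (PSC)
                          of the task frames relative to the resting baseline.
    """
    frames = np.asarray(frames, dtype=np.float64)
    if rest_mask is None:
        return frames.mean(axis=0)                      # resting-state average

    rest_mask = np.asarray(rest_mask, dtype=bool)
    baseline = frames[rest_mask].mean(axis=0)           # y-bar per voxel (rest frames)
    task = frames[~rest_mask]                           # y_i, task frames only
    eps = 1e-8                                          # guard against division by zero
    psc = (task - baseline) / (baseline + eps) * 100.0  # percent signal change per frame
    return psc.mean(axis=0)                             # average PSC per voxel
```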
Specifically, the average three-dimensional image is sliced in step (3) as follows: slicing at every unit length along the x axis gives dimX two-dimensional images on the y-z plane, each of size dimY × dimZ; slicing at every unit length along the y axis gives dimY two-dimensional images on the x-z plane, each of size dimX × dimZ; and slicing at every unit length along the z axis gives dimZ two-dimensional images on the x-y plane, each of size dimX × dimY. The two-dimensional images lying in the same plane are taken as one group, so that three groups of two-dimensional images are finally obtained.
Further, step (4) is specifically as follows: following the notion of channels in a convolutional neural network, the dimX two-dimensional images on the y-z plane are treated as one channel per slice position and converted into a single two-dimensional image with dimX channels that can be input to a convolutional neural network; likewise, the dimY two-dimensional images on the x-z plane are converted into one frame of a two-dimensional image with dimY channels, and the dimZ two-dimensional images on the x-y plane into one frame of a two-dimensional image with dimZ channels.
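As a minimal sketch of steps (3) and (4) (the function name and the channels-last layout are assumptions, not specified by the patent), the three groups of slices can be obtained and stacked as channels with plain NumPy axis operations:

```python
import numpy as np

def volume_to_multichannel_images(volume):
    """Turn one average volume of shape (dimX, dimY, dimZ) into three
    multi-channel 2D images, one per orthogonal slicing direction."""
    volume = np.asarray(volume)
    # x-direction slices: dimX images of size dimY x dimZ -> (dimY, dimZ, dimX)
    yz_image = np.moveaxis(volume, 0, -1)
    # y-direction slices: dimY images of size dimX x dimZ -> (dimX, dimZ, dimY)
    xz_image = np.moveaxis(volume, 1, -1)
    # z-direction slices: dimZ images of size dimX x dimY -> (dimX, dimY, dimZ)
    xy_image = volume
    return yz_image, xz_image, xy_image
```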
Specifically, the hybrid multi-channel convolutional neural network model comprises, from input to output, three parallel multi-channel two-dimensional convolutional neural networks followed by a fully-connected neural network. The input of each two-dimensional convolutional neural network corresponds to one multi-channel two-dimensional image; the outputs of the three networks are concatenated into a one-dimensional feature vector and fed into the fully-connected neural network, which finally outputs the predicted probability of each classification label.
Further, the multi-channel two-dimensional convolutional neural network comprises, in order, an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer and a flattening layer (Flatten). The first convolutional layer has 32 convolution kernels of size 3 × 3; the second convolutional layer has 64 convolution kernels of size 3 × 3. Both convolutional layers use the LeakyReLU function as the activation function. Both pooling layers use max pooling with a 2 × 2 pooling window. Both Dropout layers retain the result passed from the previous layer with a probability of 0.25. The flattening layer flattens the convolutional output into a one-dimensional result. The one-dimensional results output by the three multi-channel two-dimensional convolutional neural networks are concatenated into a one-dimensional feature vector by a fusion layer (Merge) and input to the fully-connected neural network.
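A hedged Keras-style sketch of one such multi-channel two-dimensional convolutional branch follows; the use of tensorflow.keras, the functional API, and the exact layer naming scheme are assumptions for illustration, since the patent only names the layer types and hyper-parameters. The Dropout(0.25) call below follows the usual Keras reading in which 0.25 is the fraction of units dropped.

```python
from tensorflow.keras import layers

def build_branch(input_shape, name):
    """One multi-channel 2D convolutional branch: Input -> Conv2d_1 ->
    MaxPooling2d_1 -> Dropout -> Conv2d_2 -> MaxPooling2d_2 -> Dropout -> Flatten."""
    inp = layers.Input(shape=input_shape, name=f"{name}_input")
    x = layers.Conv2D(32, (3, 3), name=f"{name}_conv2d_1")(inp)        # 32 kernels, 3x3
    x = layers.LeakyReLU()(x)                                           # LeakyReLU activation
    x = layers.MaxPooling2D((2, 2), name=f"{name}_maxpooling2d_1")(x)   # 2x2 max pooling
    x = layers.Dropout(0.25)(x)
    x = layers.Conv2D(64, (3, 3), name=f"{name}_conv2d_2")(x)          # 64 kernels, 3x3
    x = layers.LeakyReLU()(x)
    x = layers.MaxPooling2D((2, 2), name=f"{name}_maxpooling2d_2")(x)
    x = layers.Dropout(0.25)(x)
    out = layers.Flatten(name=f"{name}_flatten")(x)                     # one-dimensional output
    return inp, out
```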
Further, the fully-connected neural network comprises, in order, a first fully-connected layer (Dense_1), a batch normalization layer (BatchNormalization), a Dropout layer and a second fully-connected layer (Dense_2). The first fully-connected layer has 625 neurons; the number of neurons in the second fully-connected layer is determined by the number of classes of the classification task. The first fully-connected layer uses the LeakyReLU function as the activation function; the second fully-connected layer uses the Softmax function. The batch normalization layer re-normalizes the result passed from the previous layer so that its mean is close to 0 and its standard deviation is close to 1. The Dropout layer retains the result passed from the previous layer with a probability of 0.5. The output of the fully-connected neural network is a set of probability values representing the predicted probability of each classification label.
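Continuing the sketch above, the three branches can be fused and followed by the fully-connected head roughly as below; this is an illustrative assumption rather than the patent's reference implementation, and build_branch is the helper defined in the previous sketch.

```python
from tensorflow.keras import layers, models

def build_hybrid_model(yz_shape, xz_shape, xy_shape, num_classes):
    """Three parallel multi-channel 2D CNN branches fused into a fully-connected head."""
    in_yz, feat_yz = build_branch(yz_shape, "yz")
    in_xz, feat_xz = build_branch(xz_shape, "xz")
    in_xy, feat_xy = build_branch(xy_shape, "xy")

    # Fusion (Merge) layer: concatenate the three one-dimensional feature vectors.
    merged = layers.concatenate([feat_yz, feat_xz, feat_xy], name="merge")

    x = layers.Dense(625, name="dense_1")(merged)          # first fully-connected layer
    x = layers.LeakyReLU()(x)                               # LeakyReLU activation
    x = layers.BatchNormalization()(x)                      # re-normalize: mean ~0, std ~1
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax", name="dense_2")(x)

    return models.Model(inputs=[in_yz, in_xz, in_xy], outputs=out)
```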
Compared with the prior art, the invention has the following beneficial effects:
the present invention utilizes multi-channel two-dimensional convolution to extract features in three orthogonal planes. Aiming at the characteristics of fMRI high-dimensional data, the model can learn sufficient characteristics by using fast multi-channel two-dimensional convolution on three orthogonal planes, meanwhile, the three-dimensional convolution with large calculation amount is avoided, the calculation amount is reduced, and the classification accuracy and the classification speed of fMRI whole brain data are improved.
Drawings
FIG. 1 is a detailed flowchart of the deep learning-based fMRI whole brain data classification method;
FIG. 2 is a schematic structural diagram of the hybrid convolutional neural network.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
In this embodiment, a motor task from task-state fMRI is selected, and fMRI data of five actions, namely moving the right-hand fingers, moving the left-hand fingers, squeezing the right toes, squeezing the left toes, and moving the tongue, are classified.
FIG. 1 is a flowchart of the deep learning-based fMRI whole brain data classification method, which includes the following steps:
(1) obtaining fMRI test data of a test participant, preprocessing the fMRI test data, and simultaneously obtaining a label corresponding to the fMRI data;
the preprocessing comprises head movement correction, time layer correction, space standardization, space smoothing and the like;
the labels refer to action categories corresponding to fMRI data, and are respectively: move right hand fingers, move left hand fingers, squeeze right toes, squeeze left toes, move the tongue.
(2) Aggregating fMRI whole brain data for each test participant;
the fMRI whole brain data in this embodiment is task-state fMRI data, and therefore, a signal change Percentage (PSC) method is used for N frames of three-dimensional images in the test process to calculate an average change value of each voxel point at a relative resting time in the test process, and the average change value is converted into one frame of average three-dimensional image.
The average PSC per voxel point is calculated as:
p = (1/N) Σ_{i=1}^{N} [(y_i − ȳ) / ȳ] × 100%

where N represents the number of frames of three-dimensional images acquired during the test, y_i represents the value of the voxel point in the i-th frame image, ȳ represents the average value of that voxel point over the resting period (the resting period is taken from the rest stage in which the test participant receives no test stimulation), and p represents the calculated average change value of the voxel point.
The size of the three-dimensional image is 91 along the x axis, 109 along the y axis, and 91 along the z axis; the N frames of three-dimensional images acquired during an action share the same action category label.
(3) Slicing the average three-dimensional image obtained after aggregation in the orthogonal x, y and z axis directions respectively to obtain three groups of two-dimensional images;
the specific process of slicing the average three-dimensional image is as follows: slicing along the x-axis direction to obtain 91 two-dimensional images on the y-z plane, wherein the size of each two-dimensional image is 109 multiplied by 91; slicing along the y-axis direction to obtain 109 two-dimensional images on an x-z plane, wherein the size of each image is 91 x 91; the slices were taken along the z-axis to yield 91 two-dimensional images on the x-y plane, each 91 x 109 in size. Finally, three groups of two-dimensional images are obtained.
(4) Converting the obtained three groups of two-dimensional images into a frame of multi-channel two-dimensional image respectively;
the specific conversion process is as follows: converting 91 two-dimensional images on a y-z plane into a two-dimensional image with 91 channels in one frame; converting 109 two-dimensional images on an x-z plane into a two-dimensional image with 109 channels in one frame; 91 two-dimensional images on the x-y plane are converted into a two-dimensional image with 91 channels in one frame.
(5) Constructing a mixed multi-channel convolution neural network model for fMRI whole brain data classification;
specifically, the structure of the hybrid multi-channel convolutional neural network model is shown in FIG. 2. From input to output, it comprises three parallel multi-channel two-dimensional convolutional neural networks followed by a fully-connected neural network. The input of each two-dimensional convolutional neural network corresponds to one multi-channel two-dimensional image; the outputs of the three networks are concatenated into a one-dimensional feature vector and fed into the fully-connected neural network, which finally outputs the predicted probability of each classification label.
The multi-channel two-dimensional convolutional neural network comprises, in order, an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer and a flattening layer (Flatten). The first convolutional layer has 32 convolution kernels of size 3 × 3; the second convolutional layer has 64 convolution kernels of size 3 × 3. Both convolutional layers use the LeakyReLU function as the activation function. Both pooling layers use max pooling with a 2 × 2 pooling window. Both Dropout layers retain the result passed from the previous layer with a probability of 0.25. The flattening layer flattens the convolutional output into a one-dimensional result. The one-dimensional results output by the three multi-channel two-dimensional convolutional neural networks are concatenated into a one-dimensional feature vector by a fusion layer (Merge) and input to the fully-connected neural network.
The fully-connected neural network comprises, in order, a first fully-connected layer (Dense_1), a batch normalization layer (BatchNormalization), a Dropout layer and a second fully-connected layer (Dense_2). The first fully-connected layer has 625 neurons; the number of neurons in the second fully-connected layer is determined by the number of classes of the classification task. The first fully-connected layer uses the LeakyReLU function as the activation function; the second fully-connected layer uses the Softmax function. The batch normalization layer re-normalizes the result passed from the previous layer so that its mean is close to 0 and its standard deviation is close to 1. The Dropout layer retains the result passed from the previous layer with a probability of 0.5. The output of the fully-connected neural network is a set of probability values representing the predicted probability of each classification label.
(6) Processing the fMRI data of the participants in the model-training partition through steps (1) to (4), inputting the obtained three frames of multi-channel two-dimensional images and their classification labels as input data into the hybrid convolutional neural network for model training to obtain the parameters of the hybrid convolutional neural network, and using these parameters as the hybrid convolutional neural network model for fMRI whole brain data classification;
(7) Sequentially carrying out the processing of steps (1) to (4) on the fMRI data to be classified, and inputting the obtained three frames of multi-channel two-dimensional images into the trained hybrid convolutional neural network model for classification.
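A compilable end-to-end sketch of steps (6) and (7) follows; it reuses build_hybrid_model from the earlier sketch, and the optimizer, loss, batch size, epoch count, and the randomly generated placeholder arrays are all illustrative assumptions, since the patent does not specify training hyper-parameters.

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# Placeholder data standing in for the preprocessed training set
# (embodiment sizes: x = 91, y = 109, z = 91; five action classes).
n = 20
X_yz = np.random.rand(n, 109, 91, 91).astype("float32")
X_xz = np.random.rand(n, 91, 91, 109).astype("float32")
X_xy = np.random.rand(n, 91, 109, 91).astype("float32")
y = to_categorical(np.random.randint(0, 5, size=n), num_classes=5)

model = build_hybrid_model((109, 91, 91), (91, 91, 109), (91, 109, 91), num_classes=5)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Step (6): train on the three multi-channel images and their labels.
model.fit([X_yz, X_xz, X_xy], y, batch_size=4, epochs=2, validation_split=0.1)

# Step (7): classify new, identically preprocessed data.
probs = model.predict([X_yz[:2], X_xz[:2], X_xy[:2]])   # per-class probabilities
predicted_action = probs.argmax(axis=1)                  # index of the most likely action
```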
The above embodiment is a preferred embodiment of the present invention, but the present invention is not limited thereto; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and falls within the protection scope of the present invention.

Claims (9)

1. An fMRI whole brain data classification method based on deep learning is characterized by comprising the following specific steps:
(1) obtaining fMRI data of a test participant, preprocessing the fMRI data, and simultaneously obtaining a label corresponding to the fMRI data;
(2) aggregating fMRI data for each test participant;
(3) slicing the average three-dimensional image obtained after aggregation in the orthogonal x, y and z axis directions respectively to obtain three groups of two-dimensional images;
(4) converting the obtained three groups of two-dimensional images into a frame of multi-channel two-dimensional image respectively;
(5) constructing a mixed multi-channel convolution neural network model for fMRI data classification;
(6) processing fMRI data of participants for a model training part in the steps (1) to (4), inputting the obtained three-frame multi-channel two-dimensional image and labels thereof as input data into a mixed multi-channel convolutional neural network model for model training to obtain parameters of the mixed multi-channel convolutional neural network model, and using the parameters of the mixed multi-channel convolutional neural network model for fMRI data classification;
(7) sequentially carrying out the processing of steps (1) to (4) on the obtained fMRI data of the participants to be classified, and inputting the obtained three-frame multi-channel two-dimensional images into the mixed multi-channel convolutional neural network model trained in step (6) for classification.
2. The deep learning-based fMRI whole brain data classification method according to claim 1, wherein the preprocessing in step (1) comprises head movement correction, temporal layer correction, spatial normalization and spatial smoothing; the label refers to the attributes of the test participants.
3. The method for classifying fMRI whole brain data based on deep learning according to claim 1, wherein in the step (2), if the fMRI data is resting-state fMRI data, the voxel points at the corresponding positions of the obtained N frames of three-dimensional images (dimX × dimY × dimZ) are arithmetically averaged to obtain one frame of average three-dimensional image;
if the fMRI whole brain data is task state fMRI data, calculating the average change value of each voxel point in the test process relative to the rest time by adopting a signal change percentage method for N frames of three-dimensional images in the test process, and converting the average change value into one frame of average three-dimensional image.
4. The deep learning-based fMRI whole brain data classification method according to claim 3, wherein the calculation formula of the average signal change percentage of each voxel point is:
p = (1/N) Σ_{i=1}^{N} [(y_i − ȳ) / ȳ] × 100%

wherein N represents the number of frames of three-dimensional images acquired during the test, y_i represents the value of the voxel point in the i-th frame image, ȳ represents the average value of that voxel point over the resting period (the resting period is taken from the rest stage in which the test participant receives no test stimulation), and p represents the calculated average change value of the voxel point;
wherein the size of the three-dimensional image is that the x axis is dimX, the y axis is dimY, and the z axis is dimZ; the N frames of three-dimensional images in the test process have the same label.
5. The deep learning-based fMRI whole brain data classification method according to claim 1, wherein the operations of slicing the average three-dimensional image in the step (3) are as follows: slicing each unit length on the x axis along the x axis direction to obtain dimX two-dimensional images on a y-z plane, wherein the size of each image is dimY multiplied by dimZ; slicing each unit length on the y axis along the y axis direction to obtain dimY two-dimensional images on an x-z plane, wherein the size of each image is dimX multiplied by dimZ; slicing each unit length on the z axis along the z axis direction to obtain dimZ two-dimensional images on an x-y plane, wherein the size of each image is dimX multiplied by dimY; and taking the two-dimensional images on the same plane as a group, and finally obtaining three groups of two-dimensional images.
6. The deep learning-based fMRI whole brain data classification method according to claim 1, wherein the step (4) is specifically as follows: according to the concept of a channel in a convolutional neural network, regarding a two-dimensional image on a dimX y-z plane, taking the two-dimensional image at each slice position as a channel, and converting the two-dimensional image into a frame of two-dimensional image which can be input into the convolutional neural network and has dimX channels; regarding the two-dimensional images on the dimY x-z plane, the two-dimensional image at each slice position is taken as a channel and converted into a frame of two-dimensional image with dimY channels, which can be input into a convolutional neural network; for the two-dimensional image on the dimZ x-y plane, the two-dimensional image at each slice position is also taken as a channel and converted into a frame of two-dimensional image with dimZ channels which can be input into the convolutional neural network.
7. The fMRI whole brain data classification method based on deep learning of claim 1, wherein the hybrid multi-channel convolutional neural network model comprises three multi-channel two-dimensional convolutional neural networks connected in parallel and a fully-connected neural network in sequence from input to output; the input of each multi-channel two-dimensional convolutional neural network corresponds to a multi-channel two-dimensional image, the outputs of the three multi-channel two-dimensional convolutional neural networks are spliced into one-dimensional characteristics in a series connection mode and input to the fully-connected neural network, and finally the probability value of each classification label is output and predicted.
8. The deep learning-based fMRI whole brain data classification method according to claim 7, wherein the multi-channel two-dimensional convolutional neural network comprises an input layer (Input), a first convolutional layer (Conv2d_1), a first pooling layer (MaxPooling2d_1), a first Dropout layer, a second convolutional layer (Conv2d_2), a second pooling layer (MaxPooling2d_2), a second Dropout layer and a flattening layer (Flatten) in sequence; the number of convolution kernels of the first convolutional layer is 32 and the convolution kernel size is 3 × 3; the number of convolution kernels of the second convolutional layer is 64 and the convolution kernel size is 3 × 3; the first convolutional layer and the second convolutional layer both use the LeakyReLU function as the activation function; the first pooling layer and the second pooling layer both use max pooling with a pooling window of size 2 × 2; the first Dropout layer and the second Dropout layer retain the result passed from the previous layer with a probability of 0.25; the flattening layer flattens the result of the second convolutional layer into a one-dimensional result; the one-dimensional results output by the three multi-channel two-dimensional convolutional neural networks are concatenated into a one-dimensional feature vector by a fusion layer (Merge) and input to the fully-connected neural network.
9. The deep learning-based fMRI whole brain data classification method according to claim 7, wherein the fully-connected neural network comprises a first fully-connected layer (Dense_1), a batch normalization layer (BatchNormalization), a Dropout layer, and a second fully-connected layer (Dense_2) in sequence; the number of neurons in the first fully-connected layer is 625; the number of neurons in the second fully-connected layer is determined according to the number of categories of the classification task; the first fully-connected layer uses the LeakyReLU function as the activation function; the second fully-connected layer uses the Softmax function as the activation function; the batch normalization layer re-normalizes the result passed from the previous layer so that its mean is close to 0 and its standard deviation is close to 1; the Dropout layer retains the result passed from the previous layer with a probability of 0.5; the output of the fully-connected neural network is a set of probability values representing the predicted probability of each classification label.
CN201811054390.5A 2018-09-11 2018-09-11 fMRI whole brain data classification method based on deep learning Active CN109222972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811054390.5A CN109222972B (en) 2018-09-11 2018-09-11 fMRI whole brain data classification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811054390.5A CN109222972B (en) 2018-09-11 2018-09-11 fMRI whole brain data classification method based on deep learning

Publications (2)

Publication Number Publication Date
CN109222972A CN109222972A (en) 2019-01-18
CN109222972B true CN109222972B (en) 2020-09-22

Family

ID=65067767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811054390.5A Active CN109222972B (en) 2018-09-11 2018-09-11 fMRI whole brain data classification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109222972B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816037B (en) * 2019-01-31 2021-05-25 北京字节跳动网络技术有限公司 Method and device for extracting feature map of image
CN110246566A (en) * 2019-04-24 2019-09-17 中南大学湘雅二医院 Method, system and storage medium are determined based on the conduct disorder of convolutional neural networks
CN110192860B (en) * 2019-05-06 2022-10-11 复旦大学 Brain imaging intelligent test analysis method and system for network information cognition
CN110197729A (en) * 2019-05-20 2019-09-03 华南理工大学 Tranquillization state fMRI data classification method and device based on deep learning
CN110322969A (en) * 2019-07-03 2019-10-11 北京工业大学 A kind of fMRI data classification method based on width study
CN110604572A (en) * 2019-10-08 2019-12-24 江苏海洋大学 Brain activity state identification method based on human brain characteristic map
CN110916661B (en) * 2019-11-21 2021-06-08 大连理工大学 ICA-CNN classified fMRI intracerebral data time pre-filtering and amplifying method
CN111046918B (en) * 2019-11-21 2022-09-20 大连理工大学 ICA-CNN classified fMRI data space pre-smoothing and broadening method
CN110870770B (en) * 2019-11-21 2021-05-11 大连理工大学 ICA-CNN classified fMRI space activation map smoothing and broadening method
US20210174939A1 (en) * 2019-12-09 2021-06-10 Tencent America LLC Deep learning system for detecting acute intracranial hemorrhage in non-contrast head ct images
CN110992351B (en) * 2019-12-12 2022-08-16 南京邮电大学 sMRI image classification method and device based on multi-input convolution neural network
CN111709787B (en) * 2020-06-18 2023-08-22 抖音视界有限公司 Method, device, electronic equipment and medium for generating user retention time
CN111728590A (en) * 2020-06-30 2020-10-02 中国人民解放军国防科技大学 Individual cognitive ability prediction method and system based on dynamic function connection
CN113096096B (en) * 2021-04-13 2023-04-18 中山市华南理工大学现代产业技术研究院 Microscopic image bone marrow cell counting method and system fusing morphological characteristics
CN113313673B (en) * 2021-05-08 2022-05-20 华中科技大学 TB-level cranial nerve fiber data reduction method and system based on deep learning
CN113313232B (en) * 2021-05-19 2023-02-14 华南理工大学 Functional brain network classification method based on pre-training and graph neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067395A (en) * 2017-04-26 2017-08-18 中国人民解放军总医院 A kind of nuclear magnetic resonance image processing unit and method based on convolutional neural networks
CN107145727A (en) * 2017-04-26 2017-09-08 中国人民解放军总医院 The medical image processing devices and method of a kind of utilization convolutional neural networks
CN107424145A (en) * 2017-06-08 2017-12-01 广州中国科学院软件应用技术研究所 The dividing method of nuclear magnetic resonance image based on three-dimensional full convolutional neural networks
CN107563434A (en) * 2017-08-30 2018-01-09 山东大学 A kind of brain MRI image sorting technique based on Three dimensional convolution neutral net, device
CN107767378A (en) * 2017-11-13 2018-03-06 浙江中医药大学 The multi-modal Magnetic Resonance Image Segmentation methods of GBM based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Heng Huang et al., "Modeling Task fMRI Data Via Deep Convolutional Autoencoder", IEEE Transactions on Medical Imaging, vol. 37, no. 7, pp. 1551-1561, 2018 *
Heung-Il Suk et al., "State-space model with deep learning for functional dynamics estimation in resting-state fMRI", NeuroImage, vol. 129, pp. 292-307, 2016 *
Zhang Zhaochen, Ji Junzhong, "fMRI data classification method based on convolutional neural networks" (基于卷积神经网络的fMRI数据分类方法), Pattern Recognition and Artificial Intelligence (模式识别与人工智能), vol. 30, no. 6, pp. 549-558, 2017 *

Also Published As

Publication number Publication date
CN109222972A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109222972B (en) fMRI whole brain data classification method based on deep learning
CN109886273B (en) CMR image segmentation and classification system
CN110349652B (en) Medical data analysis system fusing structured image data
CN110363760B (en) Computer system for recognizing medical images
CN111832416A (en) Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network
CN110070107A (en) Object identification method and device
CN111242933B (en) Retinal image artery and vein classification device, apparatus, and storage medium
CN112450881B (en) Multi-modal sleep staging method based on time sequence relevance driving
CN110033023A (en) It is a kind of based on the image processing method and system of drawing this identification
Singh et al. Deep learning and machine learning based facial emotion detection using CNN
CN112446891A (en) Medical image segmentation method based on U-Net network brain glioma
CN109118487B (en) Bone age assessment method based on non-subsampled contourlet transform and convolutional neural network
CN112043260B (en) Electrocardiogram classification method based on local mode transformation
CN111126350B (en) Method and device for generating heart beat classification result
CN110929762A (en) Method and system for detecting body language and analyzing behavior based on deep learning
CN111681247B (en) Lung lobe lung segment segmentation model training method and device
CN113569891A (en) Training data processing device, electronic equipment and storage medium of neural network model
CN112037179A (en) Method, system and equipment for generating brain disease diagnosis model
CN113133769A (en) Equipment control method, device and terminal based on motor imagery electroencephalogram signals
CN112508902A (en) White matter high signal grading method, electronic device and storage medium
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN111540467A (en) Schizophrenia classification identification method, operation control device and medical equipment
CN112420170B (en) Method for improving image classification accuracy of computer aided diagnosis system
Zhao et al. The end-to-end fetal head circumference detection and estimation in ultrasound images
CN116311472B (en) Micro-expression recognition method and device based on multi-level graph convolution network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant