CN111951228B - Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model - Google Patents

Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model Download PDF

Info

Publication number
CN111951228B
Authority
CN
China
Prior art keywords
layer
tensor
data
module
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010710691.XA
Other languages
Chinese (zh)
Other versions
CN111951228A (en)
Inventor
李蓉
李济邑
王冲
张镭耀
王鸿宇
邹婷
陈华富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010710691.XA priority Critical patent/CN111951228B/en
Publication of CN111951228A publication Critical patent/CN111951228A/en
Application granted granted Critical
Publication of CN111951228B publication Critical patent/CN111951228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses an epileptogenic focus positioning system that fuses a gradient activation mapping model with a deep learning network. It belongs to the technical field of biomedical image pattern recognition and in particular relates to pattern recognition of magnetic resonance image data. The system processes magnetic resonance data with a convolutional neural network and a gradient activation mapping algorithm, achieving intelligent identification of epileptic patients and localization of the epileptogenic focus with high accuracy. The invention provides a new, effective, intelligent method for localizing the epileptogenic focus; by fusing a convolutional neural network with a gradient activation mapping algorithm for epileptogenic focus localization for the first time, it can serve as an auxiliary and supplementary tool for clinicians localizing epileptic lesions.

Description

Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model
Technical Field
The invention belongs to the technical field of biomedical image pattern recognition, and in particular relates to the construction of a deep-learning classification network and a gradient activation mapping framework based on magnetic resonance images.
Background
Epilepsy, a common neurological disease, is receiving increasing attention from clinicians. It is a transient brain-dysfunction syndrome caused by abnormal discharge from an epileptogenic focus. China has roughly nine million epilepsy patients, of whom about 20% have intractable epilepsy. Because the seizure mechanism is unclear, seizure symptoms are complex and varied, seizure timing is unpredictable, and drug treatment is often ineffective, patients carry a heavy physiological and psychological burden. Surgical treatment can effectively relieve seizure symptoms and control seizures, offering epileptic patients a better option. The key to surgical treatment is accurate localization of the epileptogenic focus: localization accuracy is an essential prerequisite for effectively resecting the lesion and controlling seizures while avoiding, as far as possible, damage to important functional brain regions.
At present, clinical diagnosis of the epileptogenic focus relies mainly on electrophysiology, imaging examinations, nuclear medicine, and similar means, among which magnetic resonance imaging plays an important role in localizing epilepsy caused by abnormal brain tissue. In image-based lesion search, however, clinicians currently judge lesions by eye and by experience, which is of limited use when the lesion is too small or otherwise invisible to the naked eye. A method that intelligently identifies and effectively classifies image data is therefore needed to find these hidden lesions.
Disclosure of Invention
The invention discloses an epileptogenic focus positioning system that integrates a gradient activation mapping model with a deep learning network, aiming to overcome the limitations of lesion diagnosis by clinicians relying on the naked eye and experience.
Building on previous research, the method combines a convolutional neural network model and a gradient activation mapping model from deep learning to analyze magnetic resonance images and localize the epileptogenic focus. The technical scheme is an epileptogenic focus positioning system fusing gradient activation mapping and a deep learning model, comprising an image preprocessing module, a neural network classification module, and a gradient positioning module; the input data of the system are 3D T1 structural magnetic resonance image data;
the input data input preprocessing module is used for performing origin correction, data size normalization and gray matter, white matter and cerebrospinal fluid segmentation on the input data;
the neural network module is a neural network trained in advance, the output data of the preprocessing module is input to the neural network classification module, and the neural network classification module classifies the input data into epileptics and normal persons;
the neural network module structure is as follows:
a first layer: a convolutional layer, in which the input is a 1 × 91 × 109 × 91 tensor, 8 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is an 8 × 91 × 109 × 91 tensor;
a second layer: a max-pooling layer, in which the input is an 8 × 91 × 109 × 91 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is an 8 × 46 × 55 × 46 tensor;
a third layer: a convolutional layer, in which the input is an 8 × 46 × 55 × 46 tensor, 16 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 16 × 46 × 55 × 46 tensor;
a fourth layer: a max-pooling layer, in which the input is a 16 × 46 × 55 × 46 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 16 × 23 × 28 × 23 tensor;
a fifth layer: a convolutional layer, in which the input is a 16 × 23 × 28 × 23 tensor, 32 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 32 × 23 × 28 × 23 tensor;
a sixth layer: a max-pooling layer, in which the input is a 32 × 23 × 28 × 23 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 32 × 12 × 14 × 12 tensor;
a seventh layer: a convolutional layer, in which the input is a 32 × 12 × 14 × 12 tensor, 64 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 64 × 12 × 14 × 12 tensor;
an eighth layer: a max-pooling layer, in which the input is a 64 × 12 × 14 × 12 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 64 × 6 × 7 × 6 tensor;
a ninth layer: a convolutional layer, in which the input is a 64 × 6 × 7 × 6 tensor, 128 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 128 × 6 × 7 × 6 tensor;
a tenth layer: a max-pooling layer, in which the input is a 128 × 6 × 7 × 6 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 128 × 3 × 4 × 3 tensor;
an eleventh layer: a fully connected layer, in which the input is a 128 × 3 × 4 × 3 tensor, the activation function is ReLU, and the output is a 1 × 512 vector;
a twelfth layer: a fully connected layer, in which the input is a 1 × 512 vector, the activation function is ReLU, and the output is a 1 × 2 vector;
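For concreteness, a minimal sketch of this twelve-layer structure in PyTorch is shown below. The class and variable names are illustrative, not taken from the patent; `padding=1` and `ceil_mode=True` are assumptions needed to reproduce the stated tensor sizes (e.g. 91 → 46 → 23 → 12 → 6 → 3 across the five pooling layers).

```python
import torch
import torch.nn as nn

class EpilepsyNet(nn.Module):
    """Sketch of the 12-layer classification network described above."""
    def __init__(self):
        super().__init__()
        channels = [1, 8, 16, 32, 64, 128]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Conv3d(c_in, c_out, kernel_size=3, stride=1, padding=1),  # 3x3x3 kernels, stride [1,1,1]
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=2, ceil_mode=True),  # 2x2x2 pooling; ceil_mode keeps 91->46, 109->55, ...
            ]
        self.features = nn.Sequential(*blocks)        # layers 1-10 (conv/pool pairs)
        self.fc1 = nn.Linear(128 * 3 * 4 * 3, 512)    # eleventh layer
        self.fc2 = nn.Linear(512, 2)                  # twelfth layer: epileptic vs. normal

    def forward(self, x):               # x: (N, 1, 91, 109, 91)
        f = self.features(x)            # (N, 128, 3, 4, 3)
        h = torch.relu(self.fc1(f.flatten(1)))
        return self.fc2(h)              # (N, 2) class scores
```

A forward pass on a dummy input, e.g. `EpilepsyNet()(torch.zeros(1, 1, 91, 109, 91))`, returns a 1 × 2 score vector, matching the twelfth-layer output described above.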
the gradient positioning module sequentially comprises: the input of the gradient module is output data y of the eleventh layer of the neural network module when the epileptic patient is judged to be cc
Gradient module pair ycPerforming feature mapping A with convolutional layerskGradient finding
Figure BDA0002596429690000021
Then output to the weight module;
the weight module is based on the gradient of the input
Figure BDA0002596429690000022
Calculating weights of neuronal importance
Figure BDA0002596429690000023
Then output to a positioning module, wherein
Figure BDA0002596429690000024
The calculation method comprises the following steps:
Figure BDA0002596429690000031
wherein Z represents the total number of voxels in the kth characteristic diagram of the convolutional layer, and p, q and r respectively represent the length, width and height of the kth characteristic diagram;
the positioning module is used for weighting according to the input
Figure BDA0002596429690000032
Performing weighted combination on the activation maps, and obtaining a heat map representing the weight magnitude through a ReLU function
Figure BDA0002596429690000033
Figure BDA0002596429690000034
Heatmap
Figure BDA0002596429690000035
And the part with the middle weight higher than the threshold value is regarded as a potential focus.
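The gradient, weight, and positioning modules together realize a 3D variant of Grad-CAM. A minimal sketch is shown below, assuming the `EpilepsyNet` sketch above; the function name, the hook-based gradient capture, and the choice to hook the convolution module itself (rather than its ReLU output) are illustrative, not specified by the patent.

```python
import torch
import torch.nn.functional as F

def grad_cam_3d(model, x, conv_layer, target_class=1):
    """Compute the heat map L^c for one input volume x of shape (1, 1, 91, 109, 91).
    conv_layer is the nn.Conv3d whose feature maps A^k are explained (here the
    ninth-layer convolution, whose output is 128 x 6 x 7 x 6)."""
    activations, gradients = [], []

    def forward_hook(module, inputs, output):
        activations.append(output)                                   # A^k
        output.register_hook(lambda grad: gradients.append(grad))    # dy^c / dA^k on backward

    handle = conv_layer.register_forward_hook(forward_hook)
    scores = model(x)                          # (1, 2) class scores
    handle.remove()

    y_c = scores[0, target_class]              # y^c for the epileptic class c
    model.zero_grad()
    y_c.backward()

    A = activations[0]                         # (1, K, p, q, r), K = 128
    dy_dA = gradients[0]                       # same shape as A
    alpha = dy_dA.mean(dim=(2, 3, 4))          # alpha_k^c = (1/Z) * sum over all voxels (p, q, r)
    cam = F.relu((alpha[:, :, None, None, None] * A).sum(dim=1))   # ReLU(sum_k alpha_k^c A^k)
    return cam                                 # (1, p, q, r) heat map
```

With the `EpilepsyNet` sketch, `conv_layer` would be `model.features[12]` (the fifth `Conv3d`); voxels of the returned map above a chosen threshold correspond to the potential focus described above.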
By applying a ReLU to the linear combination of activation maps and, in keeping with the characteristics of magnetic resonance images, combining a three-dimensional convolutional model with a gradient model, the method accomplishes target identification and target localization simultaneously, so that the target region is localized while the classification accuracy is improved.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a detailed block diagram of a model constructed according to the present invention.
FIG. 3 is a diagram of the classification results of the present invention.
Fig. 4 is a map of the location of epileptic lesions in accordance with the present invention.
Detailed Description
The following detailed description of specific embodiments of the present invention is provided in connection with the accompanying drawings and examples, which are intended to illustrate the invention and not to limit the scope of the invention.
Step A: magnetic resonance data processing
The data comprise 3D T1 images of 133 subjects in total: 74 epileptic patients and 59 normal controls. In addition, postoperative 3D T1 data were acquired for 42 of the 74 epileptic patients. All 3D T1 data were first origin-corrected and segmented into gray matter, white matter, and cerebrospinal fluid with the CAT12 software, then normalized to the same brain template space with a voxel size of 2 mm, yielding data in a 91 × 109 × 91 three-dimensional matrix.
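As a point of reference, a preprocessed volume can be checked in Python as below. The filename is hypothetical and the `mwp1` prefix is only an assumption about CAT12's naming of normalized gray-matter segments; CAT12 itself runs under SPM/MATLAB and is not shown here.

```python
import numpy as np
import nibabel as nib   # standard NIfTI reader

# Hypothetical path to a CAT12-normalized gray-matter segment of one subject.
img = nib.load("mri/mwp1_subject001.nii")
gm = img.get_fdata().astype(np.float32)

print(img.header.get_zooms())   # expected voxel size: (2.0, 2.0, 2.0)
print(gm.shape)                 # expected matrix size: (91, 109, 91)
```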
Step B: Model data construction
First, the model data consist of 133 data pairs <X_i, Y_i>, where X_i is the gray-matter three-dimensional matrix of the i-th subject and Y_i is the corresponding class label, a 2-dimensional vector in which (1,0) denotes a normal subject and (0,1) denotes an epileptic patient.
Second, the model uses 10-fold cross-validation to make maximal use of the data and to validate the generalization of the model. Training is carried out 10 times; each time, 9 folds are selected as the training set to train a classification model, and the remaining fold serves as the test set.
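A minimal sketch of the fold assignment is given below. `StratifiedKFold` is an assumption (the text only specifies 10-fold cross-validation; plain `KFold` would match it equally well), and integer labels 0/1 stand in for the one-hot vectors (1,0)/(0,1) used above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 133 subjects: 59 normal controls (label 0) and 74 epileptic patients (label 1).
y = np.array([0] * 59 + [1] * 74)
subjects = np.arange(len(y))

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(subjects, y)):
    # 9 folds train one classification model; the held-out fold is its test set.
    print(f"fold {fold}: {len(train_idx)} training / {len(test_idx)} test subjects")
```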
Step C: Training the classification and lesion-localization model
First, the model parameters are set: batch size batch_size = 32, number of iterations epoch = 100, learning rate = 0.0001, and a convolutional neural network with 5 convolutional layers. The choice of parameters strongly affects both the result of the algorithm and the amount of computation, so the parameter values are adjusted several times during training to obtain the best classification performance.
Second, the training-set data are fed into the three-dimensional convolutional neural network (the specific network structure is shown in Fig. 2); the model parameters are updated with the Adam optimizer to reduce the training loss, and training stops when the set number of iterations is reached.
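A minimal training-loop sketch with the Step C settings (batch size 32, 100 epochs, learning rate 0.0001, Adam) follows; the cross-entropy loss and integer labels are assumptions, since the patent does not name the loss function.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_fold(model, X_train, y_train, device="cuda"):
    """X_train: (N, 1, 91, 109, 91) float tensor; y_train: (N,) integer labels (dtype long)."""
    model = model.to(device)
    loader = DataLoader(TensorDataset(X_train, y_train), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # learning rate 0.0001
    criterion = torch.nn.CrossEntropyLoss()                     # assumed loss function

    for epoch in range(100):                    # epoch = 100
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)     # (N, 2) scores vs. integer labels
            loss.backward()
            optimizer.step()                    # Adam update reduces the training loss
    return model
```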
Step D: Testing the classification and lesion-localization model
First, the gray-matter three-dimensional matrix of each subject in the test set is fed into the model built in Step C to obtain a predicted class label, which is compared with the true label to check whether they match. For data predicted as epileptic, the last convolutional layer conv_5 in Fig. 2 is extracted and the gradient activation mapping operation is performed between it and the fully connected layer fc_2 in Fig. 2, yielding a tensor of size 128 × 6 × 7 × 6 that represents the contribution weights of different voxels in the 128 channels to the classification. The three-dimensional matrices of the 128 channels are summed to obtain a 6 × 7 × 6 three-dimensional weight map, which is upsampled to 91 × 109 × 91 to give the predicted hotspot map for localizing the subject's lesion (a code sketch of this upsampling and thresholding step follows).
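A sketch of this last step, reusing the `grad_cam_3d` function above, is given below; `torch.nn.functional.interpolate` with trilinear mode stands in for the unspecified upsampling method, and the 95th-percentile threshold is purely illustrative, since the patent leaves the threshold choice open.

```python
import torch
import torch.nn.functional as F

def lesion_heatmap(model, volume, conv_layer):
    """volume: (1, 1, 91, 109, 91) preprocessed gray-matter tensor of one test subject."""
    cam = grad_cam_3d(model, volume, conv_layer, target_class=1)   # (1, 6, 7, 6) weight map
    cam = F.interpolate(cam.unsqueeze(0), size=(91, 109, 91),
                        mode="trilinear", align_corners=False)     # upsample to input space
    heatmap = cam.squeeze().detach()                               # (91, 109, 91)
    threshold = torch.quantile(heatmap, 0.95)                      # illustrative threshold
    potential_lesion = heatmap > threshold                         # voxels flagged as potential focus
    return heatmap, potential_lesion
```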
Second, the previous step is repeated for each of the ten folds; the accuracy of each model is computed (Fig. 3, left) and the ROC curve of each model is drawn (Fig. 3, right). The lesion-localization hotspot maps (Fig. 4) of the 42 epileptic patients with postoperative data are compared with their preoperative magnetic resonance data (Fig. 4, left) and postoperative magnetic resonance data (Fig. 4, right); in the hotspot map, redder colors indicate a higher likelihood of being the lesion.
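Per-fold accuracy and ROC curves of the kind shown in Fig. 3 can be computed as in the sketch below; the variable names are illustrative, with `y_score` taken as the model's predicted probability for the epileptic class.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_curve, auc

def evaluate_fold(y_true, y_score):
    """y_true: integer labels of the held-out fold; y_score: predicted probability of epilepsy."""
    y_pred = (np.asarray(y_score) >= 0.5).astype(int)
    acc = accuracy_score(y_true, y_pred)            # per-fold accuracy (Fig. 3, left)
    fpr, tpr, _ = roc_curve(y_true, y_score)        # per-fold ROC curve (Fig. 3, right)
    return acc, fpr, tpr, auc(fpr, tpr)
```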
In conclusion, the method of the invention can predict the potential lesion of an epileptic patient from the patient's preoperative data and can serve as an auxiliary and supplementary means for clinicians in localizing the lesion.
The specific parameters of the three-dimensional convolutional neural network structure are as follows:
a first layer: a convolutional layer, in which the input is a 1 × 91 × 109 × 91 tensor, 8 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is an 8 × 91 × 109 × 91 tensor;
a second layer: a max-pooling layer, in which the input is an 8 × 91 × 109 × 91 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is an 8 × 46 × 55 × 46 tensor;
a third layer: a convolutional layer, in which the input is an 8 × 46 × 55 × 46 tensor, 16 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 16 × 46 × 55 × 46 tensor;
a fourth layer: a max-pooling layer, in which the input is a 16 × 46 × 55 × 46 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 16 × 23 × 28 × 23 tensor;
a fifth layer: a convolutional layer, in which the input is a 16 × 23 × 28 × 23 tensor, 32 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 32 × 23 × 28 × 23 tensor;
a sixth layer: a max-pooling layer, in which the input is a 32 × 23 × 28 × 23 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 32 × 12 × 14 × 12 tensor;
a seventh layer: a convolutional layer, in which the input is a 32 × 12 × 14 × 12 tensor, 64 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 64 × 12 × 14 × 12 tensor;
an eighth layer: a max-pooling layer, in which the input is a 64 × 12 × 14 × 12 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 64 × 6 × 7 × 6 tensor;
a ninth layer: a convolutional layer, in which the input is a 64 × 6 × 7 × 6 tensor, 128 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 128 × 6 × 7 × 6 tensor;
a tenth layer: a max-pooling layer, in which the input is a 128 × 6 × 7 × 6 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 128 × 3 × 4 × 3 tensor;
an eleventh layer: a fully connected layer, in which the input is a 128 × 3 × 4 × 3 tensor, the activation function is ReLU, and the output is a 1 × 512 vector;
a twelfth layer: a fully connected layer, in which the input is a 1 × 512 vector, the activation function is ReLU, and the output is a 1 × 2 vector.

Claims (1)

1. An epileptogenic focus positioning system that fuses gradient activation mapping and a deep learning model, the system comprising: an image preprocessing module, a neural network classification module, and a gradient positioning module, wherein the input data of the system are 3D T1 structural magnetic resonance image data;
the input data are fed into the preprocessing module, which performs origin correction, data size normalization, and gray matter, white matter and cerebrospinal fluid segmentation on the input data;
the neural network classification module is a neural network trained in advance; the output data of the preprocessing module are input to the neural network classification module, which classifies the input data into epileptic patients and normal subjects;
the structure of the neural network classification module is as follows:
a first layer: a convolutional layer, in which the input is a 1 × 91 × 109 × 91 tensor, 8 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is an 8 × 91 × 109 × 91 tensor;
a second layer: a max-pooling layer, in which the input is an 8 × 91 × 109 × 91 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is an 8 × 46 × 55 × 46 tensor;
a third layer: a convolutional layer, in which the input is an 8 × 46 × 55 × 46 tensor, 16 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 16 × 46 × 55 × 46 tensor;
a fourth layer: a max-pooling layer, in which the input is a 16 × 46 × 55 × 46 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 16 × 23 × 28 × 23 tensor;
a fifth layer: a convolutional layer, in which the input is a 16 × 23 × 28 × 23 tensor, 32 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 32 × 23 × 28 × 23 tensor;
a sixth layer: a max-pooling layer, in which the input is a 32 × 23 × 28 × 23 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 32 × 12 × 14 × 12 tensor;
a seventh layer: a convolutional layer, in which the input is a 32 × 12 × 14 × 12 tensor, 64 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 64 × 12 × 14 × 12 tensor;
an eighth layer: a max-pooling layer, in which the input is a 64 × 12 × 14 × 12 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 64 × 6 × 7 × 6 tensor;
a ninth layer: a convolutional layer, in which the input is a 64 × 6 × 7 × 6 tensor, 128 convolution kernels of size 3 × 3 × 3 are used with stride [1,1,1] and a ReLU activation function, and the output is a 128 × 6 × 7 × 6 tensor;
a tenth layer: a max-pooling layer, in which the input is a 128 × 6 × 7 × 6 tensor, a 2 × 2 × 2 pooling kernel is used, and the output is a 128 × 3 × 4 × 3 tensor;
an eleventh layer: a fully connected layer, in which the input is a 128 × 3 × 4 × 3 tensor, the activation function is ReLU, and the output is a 1 × 512 vector;
a twelfth layer: a fully connected layer, in which the input is a 1 × 512 vector, the activation function is ReLU, and the output is a 1 × 2 vector;
the gradient positioning module sequentially comprises: the input of the gradient module is output data y of the eleventh layer of the neural network module when the epileptic patient is judged to be cc
Gradient module pair ycPerforming feature mapping A with convolutional layerskGradient finding
Figure FDA0002596429680000021
Then output to the weight module;
the weight module is based on the gradient of the input
Figure FDA0002596429680000022
Calculating weights of neuronal importance
Figure FDA0002596429680000023
Then output to a positioning module, wherein
Figure FDA0002596429680000024
The calculation method comprises the following steps:
Figure FDA0002596429680000025
wherein Z represents the total number of voxels in the kth characteristic diagram of the convolutional layer, and p, q and r respectively represent the length, width and height of the kth characteristic diagram;
the positioning module is used for weighting according to the input
Figure FDA0002596429680000026
Performing weighted combination on the activation maps, and obtaining a heat map representing the weight magnitude through a ReLU function
Figure FDA0002596429680000027
Figure FDA0002596429680000028
Heatmap
Figure FDA0002596429680000029
And the part with the middle weight higher than the threshold value is regarded as a potential focus.
CN202010710691.XA 2020-07-22 2020-07-22 Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model Active CN111951228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010710691.XA CN111951228B (en) 2020-07-22 2020-07-22 Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010710691.XA CN111951228B (en) 2020-07-22 2020-07-22 Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model

Publications (2)

Publication Number Publication Date
CN111951228A CN111951228A (en) 2020-11-17
CN111951228B true CN111951228B (en) 2022-03-15

Family

ID=73341446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010710691.XA Active CN111951228B (en) 2020-07-22 2020-07-22 Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model

Country Status (1)

Country Link
CN (1) CN111951228B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022241710A1 (en) * 2021-05-20 2022-11-24 中国科学院深圳先进技术研究院 Epileptic seizure area positioning system and device, and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680082A (en) * 2017-09-11 2018-02-09 宁夏医科大学 Lung tumor identification method based on depth convolutional neural networks and global characteristics
CN110390351A (en) * 2019-06-24 2019-10-29 浙江大学 A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning
CN111340142A (en) * 2020-05-14 2020-06-26 南京慧脑云计算有限公司 Epilepsia magnetoencephalogram spike automatic detection method and tracing positioning system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020136571A1 (en) * 2018-12-26 2020-07-02 Analytics For Life Inc. Methods and systems to configure and use neural networks in characterizing physiological systems

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680082A (en) * 2017-09-11 2018-02-09 宁夏医科大学 Lung tumor identification method based on depth convolutional neural networks and global characteristics
CN110390351A (en) * 2019-06-24 2019-10-29 浙江大学 A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning
CN111340142A (en) * 2020-05-14 2020-06-26 南京慧脑云计算有限公司 Epilepsia magnetoencephalogram spike automatic detection method and tracing positioning system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gradient algorithm for independent component analysis and its applications; Chen Huafu, Yao Dezhong; Journal of Signal Processing (信号处理); 2001-12-31; Vol. 17, No. 6; full text *
Automatic detection algorithm for high-frequency oscillation signals and its application in localizing the epileptogenic zone; Zhang Xinyue; China Master's Theses Full-text Database (Medicine & Health Sciences); 2020-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN111951228A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN107506797A (en) One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN112348785B (en) Epileptic focus positioning method and system
CN116523840B (en) Lung CT image detection system and method based on deep learning
CN110444294B (en) Auxiliary analysis method and equipment for prostate cancer based on perception neural network
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN116597214A (en) Alzheimer&#39;s disease classification method and system based on multi-mode hypergraph attention network
Han et al. Fundus retinal vessels image segmentation method based on improved U-Net
CN112750137A (en) Liver tumor segmentation method and system based on deep learning
CN116309615A (en) Multi-mode MRI brain tumor image segmentation method
Wu et al. Identification of invisible ischemic stroke in noncontrast CT based on novel two‐stage convolutional neural network model
Jayachandran et al. Multi-dimensional cascades neural network models for the segmentation of retinal vessels in colour fundus images
CN111951228B (en) Epileptogenic focus positioning system integrating gradient activation mapping and deep learning model
Jain et al. Early detection of brain tumor and survival prediction using deep learning and an ensemble learning from radiomics images
CN114926396A (en) Mental disorder magnetic resonance image preliminary screening model construction method
Bhat et al. Identification of intracranial hemorrhage using ResNeXt model
Al Jannat et al. Detection of multiple sclerosis using deep learning
Mirchandani et al. Comparing the Architecture and Performance of AlexNet Faster R-CNN and YOLOv4 in the Multiclass Classification of Alzheimer Brain MRI Scans
CN114418999B (en) Retinopathy detection system based on lesion attention pyramid convolution neural network
Amin et al. Automated psoriasis detection using deep learning
KR102373992B1 (en) Method and apparatut for alzheimer&#39;s disease classification using texture features
Atmakuri et al. Reliable image metrics-based brain tumor analysis using sensor deep learning technologies
Shourie et al. Multi-class Classification of Skin Diseases using Pre-trained DenseNet Architecture on Dermoscopy Images
Liu et al. Weakly-supervised automatic biomarkers detection and classification of retinal optical coherence tomography images
Devaraj et al. Early Detection Glaucoma and Stargardt's Disease Using Deep Learning Techniques.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant