CN114820524A - 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image - Google Patents

3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image

Info

Publication number
CN114820524A
CN114820524A (application CN202210457193.8A)
Authority
CN
China
Prior art keywords
feature
disease
alzheimer
convolution
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210457193.8A
Other languages
Chinese (zh)
Inventor
俞文心
刘明金
凌德玉
刘露
龚俊
何刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN202210457193.8A priority Critical patent/CN114820524A/en
Publication of CN114820524A publication Critical patent/CN114820524A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a 3D feature recognition method for Alzheimer's disease in MRI images, in which 3D features of Alzheimer's disease are extracted by an Alzheimer's disease 3D feature recognition model. The model comprises an input layer, a plurality of sequentially arranged ResNet blocks, an adaptive average pooling module and a fully connected module. The input layer receives 3D MRI images; a 3D asymmetric convolution block is embedded in each ResNet block to extract discriminative features from the 3D MRI image; an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts and perform multi-scale channel attention feature fusion; after feature extraction by the ResNet blocks, the adaptive average pooling module and the fully connected module produce the final recognition result. The invention can extract more discriminative features and achieve earlier recognition with higher accuracy.

Description

3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a 3D feature recognition method for Alzheimer's disease in an MRI image.
Background
Alzheimer's Disease (AD) is a neurodegenerative encephalopathy commonly seen in the elderly. It is a typical dementia, difficult to reverse and difficult to detect early. Its clinical manifestations include memory disorders, aphasia, cognitive disorders and behavioral disorders. Unfortunately, patients are often not clinically diagnosed until an advanced stage, so early diagnosis is essential for controlling and monitoring the condition in time. Physicians commonly diagnose patients using 3D brain Magnetic Resonance Imaging (MRI). However, since the 3D MRI structures of adjacent disease stages are very similar, multi-class diagnosis of Alzheimer's disease becomes very difficult. It is therefore necessary to improve feature extraction capability, extract more discriminative features from 3D MRI, and thereby support more accurate diagnosis. In addition, MRI exhibits not only global changes across the whole image but also local changes. It is therefore necessary to attend to changes in both the entire image and local regions, and to fuse features of different scales.
Mild Cognitive Impairment (MCI) is a transitional stage between normal age-related cognitive decline and Alzheimer's disease, and is also the earliest clinically detectable stage in the progression of Alzheimer's disease. MCI is further divided into Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI). If the disease stage could be identified more accurately, patients could be treated earlier, which would be far more beneficial.
Because the overall structure of 3D brain MRI is similar across subjects, scans from adjacent disease stages are nearly identical. Features in 3D MRI are therefore not easy to extract, and it is necessary to extract them step by step. Previous work commonly processed the data with 3D Convolutional Neural Networks (CNNs) and 2D CNNs in sequence, or sliced the 3D MRI into 2D images and then performed feature extraction with 2D CNNs. The conversion from 3D to 2D is accompanied by a loss of feature information.
Existing image-based feature recognition of Alzheimer's disease has poor accuracy and cannot achieve both early recognition and high accuracy. The underlying reason is that discriminative features that distinguish the stages are not extracted. Some technical approaches use more labeled training samples and prior knowledge to improve accuracy and early recognition. Labeling samples typically requires very rigorous gold standards for physician annotation, which complicates the study and makes labeling error-prone. Other approaches ensemble several models, exploiting the learning capabilities of different models to improve accuracy. However, training several models increases training time, and the learning capacity of the different models cannot be controlled, as each model may recognize the same stage differently. Still other techniques use deeper, more complex neural network models to improve recognition. When the network is too deep, it is often not fully trained, especially given that only a few hundred MRI scans are available.
Disclosure of Invention
In order to solve the problems, the invention provides a 3D feature recognition method for Alzheimer's disease in an MRI image, which can extract more discriminative features and achieve early recognition and higher accuracy.
In order to achieve this purpose, the invention adopts the following technical scheme: a method for 3D feature recognition of Alzheimer's disease in MRI images, comprising: inputting a 3D MRI image into an Alzheimer's disease 3D feature recognition model and extracting Alzheimer's disease 3D features;
the Alzheimer's disease 3D feature recognition model comprises an input layer, a plurality of sequentially arranged ResNet blocks, an adaptive average pooling module and a fully connected module; the 3D MRI image is received by the input layer; a 3D asymmetric convolution block is added in each ResNet block, and the 3D asymmetric convolution block extracts discriminative features from the 3D MRI image; an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts and perform multi-scale channel attention feature fusion; and after feature extraction by the ResNet blocks, the adaptive average pooling module and the fully connected module produce the final recognition result.
Further, the ResNet block includes a first convolution layer of 1 × 1, a second convolution layer of 3 × 3, a third convolution layer of 1 × 1, and a fourth convolution layer of 1 × 1, the first convolution layer is connected to the second convolution layer, the second convolution layer is connected to the 3D asymmetric convolution block, the 3D asymmetric convolution block is connected to the third convolution layer, and the third convolution layer is connected to the fourth convolution layer.
Further, the 3D asymmetric convolution block includes four parallel branches, which are a 3 × 3 × 3 convolution kernel, a 3 × 1 × 1 convolution kernel, a 1 × 3 × 1 convolution kernel, and a 1 × 1 × 3 convolution kernel, respectively; normalization is performed after each of the four branches, and the outputs of the four branches are then added as the output of the 3D asymmetric convolution block.
Further, an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts, comprising: selecting a point-wise convolution and using point-wise channel interactions at each spatial position as the local channel context aggregator.
Further, a local feature context L(X) ∈ R^{C×H×W×L} is computed, where C, H, W, L denote the number of channels, the height, the width and the length of L(X);
it is obtained by a bottleneck structure: L(X) = B(PWConv_2(δ(B(PWConv_1(X)))));
wherein the kernel sizes of PWConv_1 and PWConv_2 are C/r × C × 1 × 1 × 1 and C × C/r × 1 × 1 × 1, respectively, r is the channel reduction ratio, δ denotes the rectified linear unit, and B denotes batch normalization.
Further, a global feature context g(X) ∈ R^{C×1×1×1} is computed by a global average pooling operation:
g(X) = (1 / (H × W × L)) Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{k=1}^{L} X[:, i, j, k]
wherein X[:, i, j, k] represents the input image features at spatial position (i, j, k);
for the global feature context g(X) and the local feature context L(X), further processing is performed using a multi-scale channel attention mechanism, which fuses multi-scale context information along the channel dimension by changing the spatial pooling size.
Further, the attention weight generated by the multi-scale channel attention mechanism is M(X) ∈ R^{C×W×H×L}, computed as:
M(X) = σ(L(X) ⊕ g(X))
where σ is the Sigmoid function and ⊕ denotes broadcasting addition;
the attention feature of the multi-scale channel is fused with two features X, Y belongs to R C×H×W×L Assume Y is a signature generated by a larger receptive field;
in a short hop connection scenario: x is the feature obtained by identity mapping, Y is the residual feature learned in the ResNet block, and based on the multi-scale channel attention mechanism, after being processed by the multi-scale channel attention mechanism, the representation is as follows:
Figure BDA0003619188240000045
wherein, T represents the initial feature integration, and element-by-element summation is selected as the initial integral;
definition Z ∈ R C×H×W×L For attention to the fusion features output by the feature fusion adding module, the formula is as follows:
Figure BDA0003619188240000051
the beneficial effects of the technical scheme are as follows:
the invention prevents the loss of characteristic information and improves the extraction of more discriminative characteristics, thereby ensuring more accurate diagnosis of each stage of the Alzheimer disease and more timely early recognition of the Alzheimer disease. In addition, the characteristics of the Alzheimer disease image MRI are combined, the features with various semantics and scales are better fused in the attention mechanism, the fusion of the 3D features is improved, and the feature identification accuracy of the Alzheimer disease is further improved.
The 3D asymmetric convolution structure introduced by the invention improves the 3D feature extraction capability of the neural network and avoids loss of feature information: the 3D asymmetric convolution helps extract more discriminative features along the three spatial dimensions. Meanwhile, combining the multi-scale channel attention mechanism realizes feature fusion within the attention mechanism, fusing global and local features so that both the overall changes of the MRI and subtle local changes are detected, further improving recognition accuracy for all stages of Alzheimer's disease.
Drawings
FIG. 1 is a schematic flow chart of a 3D feature recognition method for Alzheimer's disease in an MRI image according to the present invention;
FIG. 2 is a schematic diagram of a multi-scale channel attention mechanism in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the attention feature fusion module in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the present invention provides a method for 3D feature recognition of Alzheimer's disease in an MRI image, comprising: inputting the 3D MRI image into the Alzheimer's disease 3D feature recognition model and extracting the Alzheimer's disease 3D features;
the Alzheimer's disease 3D feature recognition model comprises an input layer, a plurality of sequentially arranged ResNet blocks, an adaptive average pooling module and a fully connected module; the 3D MRI image is received by the input layer; a 3D asymmetric convolution block is added in each ResNet block, and the 3D asymmetric convolution block extracts discriminative features from the 3D MRI image; an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts and perform multi-scale channel attention feature fusion; and after feature extraction by the ResNet blocks, the adaptive average pooling module and the fully connected module produce the final recognition result.
As an optimization scheme of the above embodiment, the ResNet block includes a first convolution layer of 1 × 1, a second convolution layer of 3 × 3, a third convolution layer of 1 × 1, and a fourth convolution layer of 1 × 1, where the first convolution layer is connected to the second convolution layer, the second convolution layer is connected to the 3D asymmetric convolution block, the 3D asymmetric convolution block is connected to the third convolution layer, and the third convolution layer is connected to the fourth convolution layer.
Wherein the 3D asymmetric convolution block includes four parallel branches, namely a 3 × 3 × 3 convolution kernel, a 3 × 1 × 1 convolution kernel, a 1 × 3 × 1 convolution kernel, and a 1 × 1 × 3 convolution kernel; normalization is performed after each of the four branches, and the outputs of the four branches are then added to form the output of the 3D asymmetric convolution block.
By adding asymmetric convolution kernels, the 3D asymmetric convolution block enhances feature extraction along different directions, so that more discriminative features can be extracted from 3D MRI.
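A minimal PyTorch sketch of such a 3D asymmetric convolution block is given below; the class name, channel arguments and padding choices are illustrative assumptions rather than details taken from the patent text.

```python
# Illustrative sketch only: a 3D asymmetric convolution block with four parallel
# branches (3x3x3, 3x1x1, 1x3x1 and 1x1x3 kernels), each followed by batch
# normalization, whose outputs are summed. Names and padding are assumptions.
import torch
import torch.nn as nn


class AsymConvBlock3D(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # One branch per kernel shape; padding keeps the spatial size unchanged.
        specs = [
            ((3, 3, 3), (1, 1, 1)),
            ((3, 1, 1), (1, 0, 0)),
            ((1, 3, 1), (0, 1, 0)),
            ((1, 1, 3), (0, 0, 1)),
        ]
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(in_channels, out_channels, kernel_size=k,
                          padding=p, bias=False),
                nn.BatchNorm3d(out_channels),  # normalization after each branch
            )
            for k, p in specs
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The sum of the four branch outputs is the block output.
        return sum(branch(x) for branch in self.branches)


if __name__ == "__main__":
    block = AsymConvBlock3D(16, 16)
    print(block(torch.randn(1, 16, 32, 32, 32)).shape)  # (1, 16, 32, 32, 32)
```

In the ResNet block described above, this module would sit between the 3 × 3 convolution layer and the following 1 × 1 convolution layer.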
As an optimization of the above embodiments, the target size in 3D MRI varies: the whole MRI changes, for example whole-brain atrophy differs at different stages of the disease, and some local regions of the MRI also change, such as local hippocampal atrophy and ventricular enlargement. Therefore, to capture these changes in the brain, attention must be paid both to the entire image and to local regions, and features of different scales must be fused. The invention proposes multi-scale channel attention feature fusion (MS-CAFF), whose core idea is to vary the spatial pooling size.
An attention feature fusion module is added after each ResNet block to fuse local and global feature contexts, as follows: a point-wise convolution is selected, and point-wise channel interactions at each spatial position serve as the local channel context aggregator.
Wherein the local feature context L(X) ∈ R^{C×H×W×L} is computed, with C, H, W, L denoting the number of channels, the height, the width and the length of L(X);
it is obtained by a bottleneck structure: L(X) = B(PWConv_2(δ(B(PWConv_1(X)))));
wherein the kernel sizes of PWConv_1 and PWConv_2 are C/r × C × 1 × 1 × 1 and C × C/r × 1 × 1 × 1, respectively, r is the channel reduction ratio, δ denotes the rectified linear unit, and B denotes Batch Normalization (BN).
After the above processing, since L(X) has the same shape as the input feature, low-level fine details can be retained and emphasized.
Wherein the global feature context g(X) ∈ R^{C×1×1×1} is computed by a global average pooling operation:
g(X) = (1 / (H × W × L)) Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{k=1}^{L} X[:, i, j, k]
wherein X[:, i, j, k] represents the input image features at spatial position (i, j, k);
for the global feature context g (X) and the local feature context L (X), a multi-scale channel attention mechanism is used for further processing, and by changing the size of the space pool, the multi-scale channel attention mechanism fuses multi-scale context information along the channel dimension, so that global change and local change can be highlighted simultaneously, and a network can conveniently identify and detect a target under the scale change.
Wherein the attention weight generated by the multi-scale channel attention mechanism is M(X) ∈ R^{C×W×H×L}, computed as:
M(X) = σ(L(X) ⊕ g(X))
where σ is the Sigmoid function and ⊕ denotes broadcasting addition.
The detailed structure is shown in FIG. 2. In the ResNet neural network, the input feature is defined as X ∈ R^{C×H×W×L} and the output feature as X' ∈ R^{C×H×W×L}. The output feature is then given by equation (5):
X' = X ⊗ M(X)     (5)
where ⊗ denotes element-wise multiplication.
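A compact PyTorch sketch of this multi-scale channel attention module follows; the class name `MSCAM3D`, the default reduction ratio and the exact layer ordering are assumptions made for illustration, not specifics given in the patent.

```python
# Illustrative sketch of the multi-scale channel attention mechanism: a point-wise
# convolution bottleneck gives the local context L(X), global average pooling
# followed by the same bottleneck gives the global context g(X), the two are added
# by broadcasting and passed through a sigmoid to give M(X), and the input is
# re-weighted as X' = X * M(X). Names and the reduction ratio r are assumptions.
import torch
import torch.nn as nn


def pw_bottleneck(channels: int, r: int) -> nn.Sequential:
    """Point-wise Conv3d bottleneck: PWConv1 -> BN -> ReLU -> PWConv2 -> BN."""
    mid = max(channels // r, 1)
    return nn.Sequential(
        nn.Conv3d(channels, mid, kernel_size=1, bias=False),
        nn.BatchNorm3d(mid),
        nn.ReLU(inplace=True),
        nn.Conv3d(mid, channels, kernel_size=1, bias=False),
        nn.BatchNorm3d(channels),
    )


class MSCAM3D(nn.Module):
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.local_att = pw_bottleneck(channels, r)   # L(X): same shape as X
        self.global_att = nn.Sequential(              # g(X): C x 1 x 1 x 1
            nn.AdaptiveAvgPool3d(1),
            pw_bottleneck(channels, r),
        )

    def attention(self, x: torch.Tensor) -> torch.Tensor:
        # M(X) = sigmoid(L(X) (+) g(X)); (+) is broadcasting addition.
        return torch.sigmoid(self.local_att(x) + self.global_att(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # X' = X (*) M(X), element-wise multiplication as in equation (5).
        return x * self.attention(x)


if __name__ == "__main__":
    m = MSCAM3D(channels=16)
    print(m(torch.randn(2, 16, 8, 8, 8)).shape)  # (2, 16, 8, 8, 8)
```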
Multi-scale channel attention feature fusion combines two features X, Y ∈ R^{C×H×W×L}, where Y is assumed to be the feature map generated by the larger receptive field;
in a short skip-connection scenario, X is the feature obtained by identity mapping and Y is the residual feature learned in the ResNet block; based on the multi-scale channel attention mechanism, the two features are first combined by an initial feature integration T = X ⊎ Y, on which the attention weight M(X ⊎ Y) is computed;
wherein T represents the initial feature integration, and element-wise summation is selected as the initial integration;
the fusion feature output by the attention feature fusion module is defined as Z ∈ R^{C×H×W×L}:
Z = M(X ⊎ Y) ⊗ X + (1 − M(X ⊎ Y)) ⊗ Y.
in fig. 3, the dashed arrows represent 1-M (X ≠ Y), in particular the fusion weights M (X ≠ Y) and 1-M (X ═ Y) consist of real numbers between 0 and 1, so that the network performs soft selection or weighted averaging between X and Y.
In order to extract more discriminative features, the invention promotes feature extraction and feature fusion with asymmetric convolution and a multi-scale channel attention mechanism, respectively. Common practice is to process the data with 3D Convolutional Neural Networks (CNNs) and 2D CNNs in sequence, or to slice the 3D MRI into 2D images and then perform feature extraction with 2D CNNs; the conversion of MRI from 3D to 2D is accompanied by a loss of feature information. In contrast, directly extracting 3D features avoids this loss. Here, 3D asymmetric convolution helps extract more discriminative features along the three spatial dimensions: by adding asymmetric convolution kernels, feature extraction in different directions is enhanced and the 3D feature extraction capability of the neural network is improved. In addition, the invention exploits the characteristics of 3D MRI in Alzheimer's disease: the overall MRI structure is similar at the various stages of the disease, while local regions of the images differ, which is the fundamental reason why other technical methods are not very accurate. Identifying the different stages requires attention to changes across the whole MRI as well as to local changes within the image. The invention therefore proposes multi-scale channel attention feature fusion (MS-CAFF), which can simultaneously emphasize the more global distribution of the entire MRI and highlight smaller variations in local distributions. More importantly, to better fuse features with different semantics and scales, features of different scales are fused within the attention mechanism. This technique can extract more distinguishing features and, combined with the image characteristics of the disease, improves feature recognition of Alzheimer's disease.
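Putting the pieces together, a high-level skeleton of the recognition model described above might look like the sketch below; the number of ResNet blocks, channel width, stem configuration and number of output classes are assumptions, and `AsymConvBlock3D` and `AFF3D` refer to the modules sketched earlier.

```python
# High-level, illustrative skeleton: ResNet-style blocks with an embedded 3D
# asymmetric convolution block, each followed by attention feature fusion of the
# identity and residual branches, then adaptive average pooling and a fully
# connected classifier. Stage/channel/class counts are assumptions; AsymConvBlock3D
# and AFF3D are the classes sketched above.
import torch
import torch.nn as nn


class AsymBottleneck3D(nn.Module):
    """1x1x1 -> 3x3x3 -> asymmetric block -> 1x1x1 -> 1x1x1 residual body,
    fused with the identity branch through AFF instead of a plain addition."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 1, bias=False), nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1, bias=False), nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            AsymConvBlock3D(channels, channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 1, bias=False), nn.BatchNorm3d(channels), nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 1, bias=False), nn.BatchNorm3d(channels),
        )
        self.fuse = AFF3D(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # X = identity feature, Y = residual feature learned by the block body.
        return torch.relu(self.fuse(x, self.body(x)))


class ADRecognitionNet(nn.Module):
    def __init__(self, num_classes: int = 4, width: int = 32, num_blocks: int = 4):
        super().__init__()
        self.stem = nn.Sequential(                    # input layer for 1-channel 3D MRI
            nn.Conv3d(1, width, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm3d(width), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=3, stride=2, padding=1),
        )
        self.blocks = nn.Sequential(*[AsymBottleneck3D(width) for _ in range(num_blocks)])
        self.pool = nn.AdaptiveAvgPool3d(1)           # adaptive average pooling module
        self.fc = nn.Linear(width, num_classes)       # fully connected module

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.blocks(self.stem(x))
        x = torch.flatten(self.pool(x), 1)
        return self.fc(x)


if __name__ == "__main__":
    net = ADRecognitionNet()
    logits = net(torch.randn(2, 1, 96, 96, 96))       # batch of two 3D MRI volumes
    print(logits.shape)                               # (2, num_classes)
```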
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description only illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A 3D feature recognition method for Alzheimer's disease in an MRI image, comprising: inputting a 3D MRI image into an Alzheimer's disease 3D feature recognition model, and extracting Alzheimer's disease 3D features;
the Alzheimer's disease 3D feature recognition model comprises an input layer, a plurality of sequentially arranged ResNet blocks, an adaptive average pooling module and a fully connected module; the 3D MRI image is received by the input layer; a 3D asymmetric convolution block is added in each ResNet block, and the 3D asymmetric convolution block extracts discriminative features from the 3D MRI image; an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts and perform multi-scale channel attention feature fusion; and after feature extraction by the ResNet blocks, the adaptive average pooling module and the fully connected module produce the final recognition result.
2. The method according to claim 1, wherein the ResNet block includes a first convolution layer of 1 x 1, a second convolution layer of 3 x 3, a third convolution layer of 1 x 1, and a fourth convolution layer of 1 x 1, the first convolution layer is connected to the second convolution layer, the second convolution layer is connected to the 3D asymmetric convolution block, the 3D asymmetric convolution block is connected to the third convolution layer, and the third convolution layer is connected to the fourth convolution layer.
3. The 3D feature recognition method for Alzheimer's disease in an MRI image according to claim 1 or 2, wherein said 3D asymmetric convolution block comprises four parallel branches, which are a 3 × 3 × 3 convolution kernel, a 3 × 1 × 1 convolution kernel, a 1 × 3 × 1 convolution kernel and a 1 × 1 × 3 convolution kernel, respectively; normalization is performed after each of the four branches, and the outputs of the four branches are then added as the output of the 3D asymmetric convolution block.
4. The 3D feature recognition method for Alzheimer's disease in MRI images according to claim 1, wherein an attention feature fusion module is added after each ResNet block to fuse local and global feature contexts, comprising: selecting a point-wise convolution and using point-wise channel interactions at each spatial position as the local channel context aggregator.
5. The method for 3D feature recognition of Alzheimer's disease in MRI images as claimed in claim 1 or 4, characterized in that a local feature context L(X) ∈ R^{C×H×W×L} is calculated, where C, H, W, L denote the number of channels, the height, the width and the length of L(X);
it is obtained by a bottleneck structure: L(X) = B(PWConv_2(δ(B(PWConv_1(X)))));
wherein the kernel sizes of PWConv_1 and PWConv_2 are C/r × C × 1 × 1 × 1 and C × C/r × 1 × 1 × 1, respectively, r is the channel reduction ratio, δ denotes the rectified linear unit, and B denotes batch normalization.
6. The method of claim 5, wherein a global feature context g(X) ∈ R^{C×1×1×1} is computed by a global average pooling operation:
g(X) = (1 / (H × W × L)) Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{k=1}^{L} X[:, i, j, k]
wherein X[:, i, j, k] represents the input image features;
for the global feature context g(X) and the local feature context L(X), further processing is performed using a multi-scale channel attention mechanism that fuses multi-scale context information along the channel dimension by changing the spatial pooling size.
7. The method of claim 6, wherein the attention weight generated by the multi-scale channel attention mechanism is M(X) ∈ R^{C×W×H×L}, computed as:
M(X) = σ(L(X) ⊕ g(X))
where σ is the Sigmoid function and ⊕ denotes broadcasting addition;
multi-scale channel attention feature fusion combines two features X, Y ∈ R^{C×H×W×L}, where Y is assumed to be the feature map generated by the larger receptive field;
in a short skip-connection scenario, X is the feature obtained by identity mapping and Y is the residual feature learned in the ResNet block; based on the multi-scale channel attention mechanism, the two features are first combined by an initial feature integration T = X ⊎ Y, on which the attention weight M(X ⊎ Y) is computed, wherein T represents the initial feature integration and element-wise summation is selected as the initial integration;
the fusion feature output by the attention feature fusion module is defined as Z ∈ R^{C×H×W×L}:
Z = M(X ⊎ Y) ⊗ X + (1 − M(X ⊎ Y)) ⊗ Y.
CN202210457193.8A 2022-04-27 2022-04-27 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image Pending CN114820524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210457193.8A CN114820524A (en) 2022-04-27 2022-04-27 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210457193.8A CN114820524A (en) 2022-04-27 2022-04-27 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image

Publications (1)

Publication Number Publication Date
CN114820524A true CN114820524A (en) 2022-07-29

Family

ID=82509890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210457193.8A Pending CN114820524A (en) 2022-04-27 2022-04-27 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image

Country Status (1)

Country Link
CN (1) CN114820524A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375665A (en) * 2022-08-31 2022-11-22 河南大学 Early Alzheimer disease development prediction method based on deep learning strategy
CN115375665B (en) * 2022-08-31 2024-04-16 河南大学 Advanced learning strategy-based early Alzheimer disease development prediction method

Similar Documents

Publication Publication Date Title
Nawaz et al. A deep feature-based real-time system for Alzheimer disease stage detection
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
CN107506761B (en) Brain image segmentation method and system based on significance learning convolutional neural network
CN107492071A (en) Medical image processing method and equipment
CN111461232A (en) Nuclear magnetic resonance image classification method based on multi-strategy batch type active learning
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
Ding et al. FTransCNN: Fusing Transformer and a CNN based on fuzzy logic for uncertain medical image segmentation
Mahapatra et al. Weakly supervised semantic segmentation of Crohn's disease tissues from abdominal MRI
Sammouda Segmentation and analysis of CT chest images for early lung cancer detection
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN116030325A (en) Lung nodule CT image recognition method based on deep hybrid learning framework
Zhuang et al. Tumor classification in automated breast ultrasound (ABUS) based on a modified extracting feature network
Fu et al. Automatic detection of lung nodules using 3D deep convolutional neural networks
Rajamani et al. Attention-augmented U-Net (AA-U-Net) for semantic segmentation
CN114332572B (en) Method for extracting breast lesion ultrasonic image multi-scale fusion characteristic parameters based on saliency map-guided hierarchical dense characteristic fusion network
Lee et al. Tooth instance segmentation from cone-beam CT images through point-based detection and Gaussian disentanglement
Ameen et al. Explainable residual network for tuberculosis classification in the IoT era
CN114820524A (en) 3D feature recognition method for Alzheimer disease in MRI (magnetic resonance imaging) image
Shan et al. DSCA-Net: A depthwise separable convolutional neural network with attention mechanism for medical image segmentation
CN107590806B (en) Detection method and system based on brain medical imaging
Li et al. Attention-based and micro designed EfficientNetB2 for diagnosis of Alzheimer’s disease
CN117710760A (en) Method for detecting chest X-ray focus by using residual noted neural network
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Mirchandani et al. Comparing the Architecture and Performance of AlexNet Faster R-CNN and YOLOv4 in the Multiclass Classification of Alzheimer Brain MRI Scans
CN113989269B (en) Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination