CN112950639A - MRI medical image segmentation method based on SA-Net - Google Patents

MRI medical image segmentation method based on SA-Net

Info

Publication number
CN112950639A
Authority
CN
China
Prior art keywords
model
loss function
medical image
net
data
Prior art date
Legal status
Granted
Application number
CN202011624371.9A
Other languages
Chinese (zh)
Other versions
CN112950639B (en)
Inventor
Pan Xiaoguang
Zhang Haixuan
Liu Jianchao
Song Xiaochen
Wang Xiaohua
Current Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202011624371.9A
Publication of CN112950639A
Application granted
Publication of CN112950639B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion


Abstract

The invention belongs to the field of MRI medical image processing, and particularly relates to an MRI medical image segmentation method based on SA-Net, which comprises the following steps. BraTS 2020 data acquisition: acquire the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020. Data annotation: manually annotate the dataset according to a common annotation protocol. Data preprocessing: preprocess the MRI images. Segmentation model training: segment the MRI medical images with a variant of the U-Net model. Adjust the model parameters through a loss function to obtain the optimal model, completing construction of the segmentation model; when the loss function of the model no longer decreases, the model is saved. The invention evaluates model performance through 5-fold cross-validation and can make full use of full-scale information to segment MRI medical images. The invention is used for the segmentation of MRI medical images.

Description

MRI medical image segmentation method based on SA-Net
Technical Field
The invention belongs to the field of MRI medical image processing, and particularly relates to an MRI medical image segmentation method based on SA-Net.
Background
Automated segmentation of medical images, which extracts quantitative imaging biomarkers for accurate detection of lesion sites, is a key and most challenging step in diagnosis, prognosis, treatment planning and assessment. Multi-parametric magnetic resonance imaging (mpMRI), the primary imaging modality in cancer diagnosis and treatment, can capture a variety of different tissue properties. However, correctly interpreting mpMRI images is a challenging task, not only because mpMRI sequences produce large amounts of three- or four-dimensional image data, but also because of the inherent heterogeneity of MRI medical images. There is therefore a growing need for computerized analysis that helps clinicians interpret lesion sites in mpMRI images; in quantitative mpMRI analysis in particular, automatic segmentation of the lesion and its sub-regions is an essential step.
In current MRI medical image segmentation, the U-Net and its variants are the most common architectures for accurately segmenting lesion sites. However, although the encoding path produces feature maps at multiple scales, the existing U-Net architecture restricts feature fusion to features of the same scale. Research shows that low-scale (high-resolution) feature maps carry spatial detail, while high-scale feature maps carry semantic information such as target position; consequently, the scale-wise feature fusion in the current U-Net architecture cannot fully exploit full-scale information.
Disclosure of Invention
Aiming at the technical problem that scale-wise feature fusion cannot fully exploit full-scale information, the invention provides an SA-Net-based MRI medical image segmentation method that makes full use of full-scale information and is efficient and reliable.
In order to solve the technical problems, the invention adopts the technical scheme that:
An MRI medical image segmentation method based on SA-Net comprises the following steps:
S1, BraTS 2020 data acquisition: acquiring the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020;
S2, data annotation: manually annotating the dataset according to a common annotation protocol;
S3, data preprocessing: preprocessing the MRI images;
S4, segmentation model training: segmenting the MRI medical images with a variant of the U-Net model;
S5, adjusting the model parameters through a loss function to obtain the optimal model, completing construction of the segmentation model;
S6, saving the model once its loss function no longer decreases.
The data preprocessing in step S3 comprises the following steps:
S3.1, normalize each modality individually by subtracting the mean and dividing by the standard deviation of the entire image:

$$x' = \frac{x - \mu}{\sigma}$$

where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flip the input along the left/right, up/down and front/back axes with probability 0.5 for data augmentation, or randomly select a factor to adjust the contrast of each input channel, obtaining MRI medical images with different contrasts;
S3.3, resize the input images to a size suitable for the model before training on the dataset;
S3.4, evaluate model performance on the training dataset with 5-fold cross-validation, while adjusting the model parameters during training to find the parameter values that make the model optimal.
The segmentation model training in S4 proceeds as follows: SA-Net merges the outputs of the encoding blocks at different scales into a scale attention block, which learns to select features carrying full-scale information. The scale attention block is built on ResNet modules; each module consists of two convolution layers and a ReLU activation layer, and skip connections increase the depth and width of the model so that more complex feature information is extracted:

$$F(x) = H(x) - x$$

where H(x) is the output of the residual network and F(x) is the output of the convolution operations. A compression module is added to each residual block to form a ResSE block; through the ResSE blocks, the feature map dimension is halved stage by stage while the feature width is doubled. The compression (squeeze) module is

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j)$$

where H and W represent the height and width of the input, respectively, and $u_c$ denotes the c-th convolution kernel. The extraction (excitation) module is

$$s = F_{ex}(z, W) = g\left(W_2\, \sigma(W_1 z)\right)$$

where σ denotes the ReLU activation function, g denotes the sigmoid activation function, and $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ are the weight matrices of the two fully connected layers, with reduction ratio r. After the gating vector s is obtained, the final output is

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $\tilde{x}_c$ is the feature map of the c-th feature channel and $s_c$ is a scalar value in the gating vector s. Deep supervision is introduced at each intermediate scale layer of the decoding path: a 1×1 convolution reduces the feature width of each deeply supervised subnet, a trilinear upsampling layer then gives it the same spatial dimension as the output subnet, and finally a sigmoid function is applied to obtain dense predictions, the sigmoid function being

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
the method for adjusting the model parameters through the loss function in S5 includes: the network employs a blending loss function to reduce the gap between the segmented image and the annotated image:
Figure BDA0002877080660000029
Ibceand IiouRespectively representing a binary cross entropy loss function BCE and a cross-over ratio loss function IOU,
Figure BDA00028770806600000210
representing the hyperparameter of each loss function, the BCE loss function:
Figure BDA00028770806600000211
where GT (a, b) labels the expert of pixel (a, b) and SEG (a, b) labels the prediction probability of segmenting the lesion region, the IOU loss function:
Figure BDA0002877080660000031
wherein H, W represent the height and width of the input image, respectively.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of inputting a preprocessed data set into a built U-net variant network for training, storing the model after the model is lost and stabilized, and performing performance evaluation on the model through 5-fold cross validation, so that the full-scale information can be fully utilized for segmenting the MRI medical image.
Drawings
FIG. 1 is a flow chart of the main steps of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An SA-Net based MRI medical image segmentation method, as shown in fig. 1, comprises the following steps:
S1, BraTS 2020 data acquisition: the dataset provided by BraTS 2020 collects MRI scans from 19 institutions, acquired with different protocols, magnetic field strengths and manufacturers. For each patient, native T1-weighted, contrast-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery (FLAIR) images were acquired. These images underwent rigorous registration, skull stripping, and resampling to an isotropic resolution of 1 × 1 × 1 mm, with an image size of 240 × 240 × 155. Three tumor sub-regions are annotated: the enhancing tumor, the peritumoral edema, and the necrotic and other non-enhancing tumor core.
S2, data annotation: manual annotation was performed by 1-4 scorers following the same annotation protocol and was ultimately approved by experienced neuroradiologists.
S3, data preprocessing: preprocessing the MRI images;
S4, segmentation model training: segmenting the MRI medical images with a variant of the U-Net model;
S5, adjusting the model parameters through a loss function to obtain the optimal model, completing construction of the segmentation model;
S6, saving the model once its loss function no longer decreases.
The data preprocessing in S3 comprises the following steps:
Further, S3.1, each modality is normalized separately by subtracting the mean and dividing by the standard deviation of the entire image:

$$x' = \frac{x - \mu}{\sigma}$$

where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flip the input along the left/right, up/down and front/back axes with probability 0.5 for data augmentation, or randomly select a factor to adjust the contrast of each input channel, obtaining MRI medical images with different contrasts;
S3.3, since the acquired images have a size of 240 × 240 × 155, the input images must be resized to a size suitable for the model before training on the dataset in order to obtain the best training results. An input that is too large easily causes overfitting and exploding gradients, while an input that is too small makes it difficult for the model to extract feature information, resulting in low segmentation accuracy.
S3.4, evaluate model performance on the training dataset with 5-fold cross-validation, while adjusting the model parameters during training to find the parameter values that make the model optimal. For 5-fold cross-validation, the data are divided into 5 parts; in each round one part is used for testing and the rest for training, over 5 rounds in total, so that the performance of the segmentation model is evaluated as fully as possible. A Python sketch of this preprocessing pipeline follows.
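As an illustration of steps S3.1 to S3.4, here is a minimal Python sketch of the preprocessing pipeline using NumPy and scikit-learn. The function names, the contrast-factor range (0.8 to 1.2) and the per-axis flip probabilities are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np
from sklearn.model_selection import KFold

def normalize_modality(volume):
    """S3.1: z-score-normalize one modality over the entire image."""
    mu, sigma = volume.mean(), volume.std()
    return (volume - mu) / (sigma + 1e-8)   # epsilon guards against a zero std

def augment(volumes, rng, contrast_range=(0.8, 1.2)):
    """S3.2: with probability 0.5, randomly flip the spatial axes; otherwise
    scale every input channel by a random contrast factor.
    `volumes` has shape (C, D, H, W): channels, depth, height, width."""
    if rng.random() < 0.5:
        for axis in (1, 2, 3):              # front/back, up/down, left/right
            if rng.random() < 0.5:
                volumes = np.flip(volumes, axis=axis)
    else:
        factors = rng.uniform(*contrast_range, size=(volumes.shape[0], 1, 1, 1))
        volumes = volumes * factors         # per-channel contrast adjustment
    return np.ascontiguousarray(volumes)

# S3.4: 5-fold cross-validation split over the case list
# (BraTS 2020 provides 369 training cases).
cases = np.arange(369)
for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(cases)):
    pass  # train on cases[train_idx], evaluate on cases[test_idx]
```

In use, the four normalized modalities would be stacked into one (4, D, H, W) array and passed through `augment` with a generator such as `rng = np.random.default_rng(0)` for each training sample.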
Further, the segmentation model training in S4 proceeds as follows: SA-Net merges the outputs of the encoding blocks at different scales into a scale attention block, which learns to select features carrying full-scale information. The scale attention block is built on ResNet modules; each module consists of two convolution layers and a ReLU activation layer, and skip connections increase the depth and width of the model so that more complex feature information is extracted:

$$F(x) = H(x) - x$$

where H(x) is the output of the residual network and F(x) is the output of the convolution operations. A compression module is added to each residual block to form a ResSE block; through the ResSE blocks, the feature map dimension is halved stage by stage while the feature width is doubled. The compression (squeeze) module is

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j)$$

where H and W represent the height and width of the input, respectively, and $u_c$ denotes the c-th convolution kernel. The extraction (excitation) module is

$$s = F_{ex}(z, W) = g\left(W_2\, \sigma(W_1 z)\right)$$

where σ denotes the ReLU activation function, g denotes the sigmoid activation function, and $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ and $W_2 \in \mathbb{R}^{C \times \frac{C}{r}}$ are the weight matrices of the two fully connected layers, with reduction ratio r. After the gating vector s is obtained, the final output is

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $\tilde{x}_c$ is the feature map of the c-th feature channel and $s_c$ is a scalar value in the gating vector s. Deep supervision is introduced at each intermediate scale layer of the decoding path: a 1×1 convolution reduces the feature width of each deeply supervised subnet, a trilinear upsampling layer then gives it the same spatial dimension as the output subnet, and finally a sigmoid function is applied to obtain dense predictions, the sigmoid function being

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

A PyTorch sketch of the ResSE block and the deep-supervision head follows.
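The following is a minimal PyTorch sketch of the ResSE block and the deep-supervision head described above. The 3×3×3 kernels, the reduction ratio r = 8, the single-channel supervised output, and the class names are illustrative assumptions rather than details fixed by the patent.

```python
import torch
import torch.nn as nn

class ResSEBlock(nn.Module):
    """Residual block with a squeeze-and-excitation gate (ResSE): two 3D
    convolutions with ReLU, an SE gate, and a skip connection."""
    def __init__(self, channels, r=8):  # reduction ratio r is an assumption
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.squeeze = nn.AdaptiveAvgPool3d(1)          # z_c = F_sq(u_c)
        self.excite = nn.Sequential(                    # s = g(W2 sigma(W1 z))
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid())

    def forward(self, x):
        u = self.conv2(self.relu(self.conv1(x)))        # convolutional branch F(x)
        z = self.squeeze(u).flatten(1)                  # squeeze to shape (N, C)
        s = self.excite(z).view(z.size(0), z.size(1), 1, 1, 1)
        return self.relu(u * s + x)                     # s_c * u_c, plus the skip

class DeepSupervisionHead(nn.Module):
    """Deep supervision at an intermediate decoder scale: 1x1x1 convolution to
    one channel, trilinear upsampling to the output size, then sigmoid."""
    def __init__(self, in_channels, scale_factor):
        super().__init__()
        self.reduce = nn.Conv3d(in_channels, 1, kernel_size=1)
        self.up = nn.Upsample(scale_factor=scale_factor, mode="trilinear",
                              align_corners=False)

    def forward(self, x):
        return torch.sigmoid(self.up(self.reduce(x)))   # dense prediction map
```

Halving the feature-map dimension while doubling the feature width, as described above, would be handled by a strided or pooled variant of this block at each encoder stage.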
the scale attention block proposed in the decoding stage consists of full scale skip connections from the encoding path to the decoding path, where each decoding layer contains output feature maps from all encoding layers, capturing fine-grained detail and coarse-grained semantic information simultaneously at full scale. The feature map with more semantic feature information is obtained by performing feature mapping on the input of the coding path in different scales, converting the input of the coding path into a feature map with the same dimension, and performing full-scale feature fusion on the feature map and the feature map obtained by up-sampling in a decoding stage. Therefore, each feature graph in the decoding stage is obtained by fusing the feature output of each layer in the encoding stage and the feature output of the next layer obtained by up-sampling, low-level detail information and high-level semantic information are combined into a unified framework, and the problem that full-scale information may not be fully utilized by feature fusion based on scale is solved.
Further, the model parameters are adjusted through the loss function in S5 as follows: the network employs a hybrid loss function to reduce the gap between the segmented image and the annotated image:

$$L_{seg} = \lambda_1 I_{bce} + \lambda_2 I_{iou}$$

where $I_{bce}$ and $I_{iou}$ denote the binary cross-entropy (BCE) loss and the intersection-over-union (IOU) loss, respectively, and $\lambda_1$, $\lambda_2$ are the hyperparameters weighting each loss function. The BCE loss is

$$I_{bce} = -\sum_{(a,b)} \left[ GT(a,b) \log SEG(a,b) + \big(1 - GT(a,b)\big) \log \big(1 - SEG(a,b)\big) \right]$$

where GT(a, b) is the expert annotation of pixel (a, b) and SEG(a, b) is the predicted probability that the pixel belongs to the lesion region. The IOU loss is

$$I_{iou} = 1 - \frac{\sum_{a=1}^{H} \sum_{b=1}^{W} SEG(a,b)\, GT(a,b)}{\sum_{a=1}^{H} \sum_{b=1}^{W} \left[ SEG(a,b) + GT(a,b) - SEG(a,b)\, GT(a,b) \right]}$$

where H and W represent the height and width of the input image, respectively. A sketch of this hybrid loss follows.
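As a sketch of the hybrid loss, the following PyTorch function combines binary cross-entropy with a soft IOU term. The default weights of 1.0 and the epsilon for numerical stability are assumptions, since the patent leaves the hyperparameter values open.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(seg, gt, lambda_bce=1.0, lambda_iou=1.0, eps=1e-7):
    """seg: predicted probabilities in (0, 1); gt: binary ground truth as a
    float tensor. Both have shape (N, 1, D, H, W)."""
    # Binary cross-entropy term I_bce (negative log-likelihood, averaged).
    bce = F.binary_cross_entropy(seg, gt)
    # Soft IOU term I_iou = 1 - intersection / union, averaged over the batch.
    inter = (seg * gt).sum(dim=(1, 2, 3, 4))
    union = (seg + gt - seg * gt).sum(dim=(1, 2, 3, 4))
    iou = 1.0 - (inter / (union + eps)).mean()
    return lambda_bce * bce + lambda_iou * iou
```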
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (4)

1. An MRI medical image segmentation method based on SA-Net, characterized by comprising the following steps:
S1, BraTS 2020 data acquisition: acquiring the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020;
S2, data annotation: manually annotating the dataset according to a common annotation protocol;
S3, data preprocessing: preprocessing the MRI images;
S4, segmentation model training: segmenting the MRI medical images with a variant of the U-Net model;
S5, adjusting the model parameters through a loss function to obtain the optimal model, completing construction of the segmentation model;
S6, saving the model once its loss function no longer decreases.
2. The SA-Net based MRI medical image segmentation method according to claim 1, characterized in that the data preprocessing in step S3 comprises the following steps:
S3.1, normalizing each modality individually by subtracting the mean and dividing by the standard deviation of the entire image:

$$x' = \frac{x - \mu}{\sigma}$$

where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flipping the input along the left/right, up/down and front/back axes with probability 0.5 for data augmentation, or randomly selecting a factor to adjust the contrast of each input channel, obtaining MRI medical images with different contrasts;
S3.3, resizing the input images to a size suitable for the model before training on the dataset;
S3.4, evaluating model performance on the training dataset with 5-fold cross-validation, while adjusting the model parameters during training to find the parameter values that make the model optimal.
3. The SA-Net based MRI medical image segmentation method according to claim 1, characterized in that the segmentation model training in S4 proceeds as follows: SA-Net merges the outputs of the encoding blocks at different scales into a scale attention block, which learns to select features carrying full-scale information; the scale attention block is built on ResNet modules, each consisting of two convolution layers and a ReLU activation layer, with skip connections that increase the depth and width of the model so that more complex feature information is extracted:

$$F(x) = H(x) - x$$

where H(x) is the output of the residual network and F(x) is the output of the convolution operations; a compression module is added to each residual block to form a ResSE block, through which the feature map dimension is halved stage by stage while the feature width is doubled, the compression (squeeze) module being

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j)$$

where H and W represent the height and width of the input, respectively, and $u_c$ denotes the c-th convolution kernel; the extraction (excitation) module is

$$s = F_{ex}(z, W) = g\left(W_2\, \sigma(W_1 z)\right)$$

where σ denotes the ReLU activation function, g denotes the sigmoid activation function, and $W_1$ and $W_2$ are the weight matrices of the two fully connected layers; after the gating vector s is obtained, the final output is

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $\tilde{x}_c$ is the feature map of the c-th feature channel and $s_c$ is a scalar value in the gating vector s; deep supervision is introduced at each intermediate scale layer of the decoding path, a 1×1 convolution reduces the feature width of each deeply supervised subnet, a trilinear upsampling layer then gives it the same spatial dimension as the output subnet, and finally a sigmoid function is applied to obtain dense predictions, the sigmoid function being

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
4. The SA-Net based MRI medical image segmentation method according to claim 1, characterized in that the model parameters are adjusted through the loss function in S5 as follows: the network employs a hybrid loss function to reduce the gap between the segmented image and the annotated image:

$$L_{seg} = \lambda_1 I_{bce} + \lambda_2 I_{iou}$$

where $I_{bce}$ and $I_{iou}$ denote the binary cross-entropy (BCE) loss and the intersection-over-union (IOU) loss, respectively, and $\lambda_1$, $\lambda_2$ are the hyperparameters weighting each loss function; the BCE loss is

$$I_{bce} = -\sum_{(a,b)} \left[ GT(a,b) \log SEG(a,b) + \big(1 - GT(a,b)\big) \log \big(1 - SEG(a,b)\big) \right]$$

where GT(a, b) is the expert annotation of pixel (a, b) and SEG(a, b) is the predicted probability that the pixel belongs to the lesion region; the IOU loss is

$$I_{iou} = 1 - \frac{\sum_{a=1}^{H} \sum_{b=1}^{W} SEG(a,b)\, GT(a,b)}{\sum_{a=1}^{H} \sum_{b=1}^{W} \left[ SEG(a,b) + GT(a,b) - SEG(a,b)\, GT(a,b) \right]}$$

where H and W represent the height and width of the input image, respectively.
CN202011624371.9A 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method Active CN112950639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011624371.9A CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011624371.9A CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Publications (2)

Publication Number Publication Date
CN112950639A true CN112950639A (en) 2021-06-11
CN112950639B CN112950639B (en) 2024-05-10

Family

ID=76235037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011624371.9A Active CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Country Status (1)

Country Link
CN (1) CN112950639B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190333222A1 (en) * 2018-04-26 2019-10-31 NeuralSeg Ltd. Systems and methods for segmenting an image
CN109086807A (en) * 2018-07-16 2018-12-25 哈尔滨工程大学 A kind of semi-supervised light stream learning method stacking network based on empty convolution
CN109919948A (en) * 2019-02-26 2019-06-21 华南理工大学 Nasopharyngeal Carcinoma Lesions parted pattern training method and dividing method based on deep learning
WO2020190821A1 (en) * 2019-03-15 2020-09-24 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography
WO2020192471A1 (en) * 2019-03-26 2020-10-01 腾讯科技(深圳)有限公司 Image classification model training method, and image processing method and device
CN110930397A (en) * 2019-12-06 2020-03-27 陕西师范大学 Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN111612754A (en) * 2020-05-15 2020-09-01 复旦大学附属华山医院 MRI tumor optimization segmentation method and system based on multi-modal image fusion
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111915592A (en) * 2020-08-04 2020-11-10 西安电子科技大学 Remote sensing image cloud detection method based on deep learning

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022199143A1 (en) * 2021-03-26 2022-09-29 南京邮电大学 Medical image segmentation method based on u-shaped network
US11580646B2 (en) 2021-03-26 2023-02-14 Nanjing University Of Posts And Telecommunications Medical image segmentation method based on U-Net
CN113344914A (en) * 2021-07-09 2021-09-03 重庆医科大学附属第一医院 Method and device for intelligently analyzing PPD skin test result based on image recognition
CN113344914B (en) * 2021-07-09 2023-04-07 重庆医科大学附属第一医院 Method and device for intelligently analyzing PPD skin test result based on image recognition
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers
CN114882047B (en) * 2022-04-19 2024-07-12 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers

Also Published As

Publication number Publication date
CN112950639B (en) 2024-05-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant