CN112950639B - SA-Net-based MRI medical image segmentation method - Google Patents

SA-Net-based MRI medical image segmentation method

Info

Publication number
CN112950639B
CN112950639B
Authority
CN
China
Prior art keywords
model
loss function
image
net
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011624371.9A
Other languages
Chinese (zh)
Other versions
CN112950639A (en)
Inventor
潘晓光
张海轩
刘剑超
宋晓晨
王小华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Sanyouhe Smart Information Technology Co Ltd filed Critical Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202011624371.9A priority Critical patent/CN112950639B/en
Publication of CN112950639A publication Critical patent/CN112950639A/en
Application granted granted Critical
Publication of CN112950639B publication Critical patent/CN112950639B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention belongs to the field of MRI medical image processing, and particularly relates to an SA-Net-based MRI medical image segmentation method, which comprises the following steps: BraTS 2020 data acquisition: acquiring the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020; data marking: manually annotating the dataset according to the same annotation protocol; data preprocessing: preprocessing the MRI images; training a segmentation model: segmenting the MRI medical image using a variant of the U-Net model; adjusting the model parameters through a loss function to obtain an optimal model and complete the construction of the segmentation model; and saving the model once its loss function no longer decreases. The invention evaluates model performance through 5-fold cross-validation and can make full use of full-scale information to segment MRI medical images. The invention is used for the segmentation of MRI medical images.

Description

SA-Net-based MRI medical image segmentation method
Technical Field
The invention belongs to the field of MRI medical image processing, and particularly relates to an SA-Net-based MRI medical image segmentation method.
Background
Automatic segmentation of medical images is currently used for accurate lesion detection by extracting quantitative imaging biomarkers; it is a key step in diagnosis, prognosis, treatment planning and assessment, and also the most challenging one. Multiparametric magnetic resonance imaging (mpMRI), as a primary imaging modality in cancer care, can characterize a variety of different tissue properties. However, interpreting mpMRI images correctly is a challenging task, not only because mpMRI sequences produce large amounts of three- or four-dimensional image data, but also because of the inherent heterogeneity of MRI medical images. Computerized analysis is therefore increasingly required to help clinicians better interpret the lesion sites in mpMRI images. In particular, in quantitative mpMRI image analysis, automatic segmentation of the lesion and its sub-regions is an indispensable step.
Lesion sites in MRI medical images are currently segmented mostly with U-Net and its variants. However, although multiple scales of feature maps exist along the encoding path, existing U-Net architectures restrict feature fusion to feature maps of the same scale. Research shows that, across the scales of a medical image, low-scale feature maps carry spatial detail while high-scale feature maps carry semantic information such as target location; the scale-restricted feature fusion of the current U-Net architecture therefore cannot make full use of full-scale information.
Disclosure of Invention
Aiming at the technical problem that scale-based feature fusion cannot make full use of full-scale information, the invention provides an SA-Net-based MRI medical image segmentation method that exploits information fully and offers high efficiency and high reliability.
In order to solve the technical problems, the invention adopts the following technical scheme:
an MRI medical image segmentation method based on SA-Net comprises the following steps:
S1, BraTS 2020 data acquisition: acquiring the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020;
S2, data marking: manually annotating the dataset according to the same annotation protocol;
S3, data preprocessing: preprocessing the MRI image;
S4, training a segmentation model: segmenting the MRI medical image using a variant of the U-Net model;
S5, adjusting the model parameters through a loss function to obtain an optimal model and completing the construction of the segmentation model;
S6, after the loss function of the model no longer decreases, saving the model.
The data preprocessing method in S3 comprises the following steps:
S3.1, normalizing each modality independently by subtracting the mean and dividing by the standard deviation of the whole image: x′ = (x − μ)/σ, where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flipping the input along the left/right, up/down and front/back directions with probability 0.5 for data enhancement, or randomly selecting a factor to adjust the contrast of each image input channel, thereby obtaining MRI medical images with different contrasts;
S3.3, resizing the input image to a size suitable for the model before training on the dataset;
S3.4, evaluating the performance of the model on the training dataset with 5-fold cross-validation, and meanwhile adjusting the model parameters during training to find the parameter values that make the model optimal.
The method for training the segmentation model in S4 comprises the following steps: SA-Net combines the encoder block outputs of different scales into a scale attention block and learns to select features carrying full-scale information. The scale attention block is built on ResNet modules, each composed of two convolution layers and a ReLU activation layer; the skip connections increase the depth and width of the model so that more complex feature information can be extracted: F(x) = H(x) − x, where H(x) is the output of the residual network and F(x) is the output of the convolution operations. A squeeze-and-excitation module is added to each residual block to form a ResSE block, and through these modules the feature-map dimensions are progressively halved while the feature width is doubled. The squeeze module is z_c = F_sq(u_c) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} u_c(i, j), where H and W are the height and width of the input image, respectively, and u_c is the output of the c-th convolution kernel; the excitation module is s = F_ex(z, W) = g(W_2·σ(W_1·z)), where σ is the ReLU activation function, g is the sigmoid activation function, and W_1 and W_2 are the weight matrices of the two fully connected layers. After the gating unit s is obtained, the final output is x̃_c = F_scale(u_c, s_c) = s_c·u_c, where x̃_c is the feature map of the c-th feature channel and s_c is a scalar value in the gating unit s. Deep supervision is introduced at each intermediate scale layer of the decoding path: each deep-supervision subnet reduces the feature width with a 1×1 convolution, then uses a trilinear upsampling layer to give it the same spatial dimensions as the output, and finally applies a Sigmoid function to obtain the dense prediction, where the Sigmoid function is S(x) = 1/(1 + e^(−x)). A minimal sketch of the ResSE block is given below.
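For illustration, the following is a minimal PyTorch sketch of one ResSE building block as just described: two convolution layers with ReLU, the squeeze-and-excitation gate, and the skip connection. The use of 3D convolutions, the channel count, and the reduction ratio are assumptions for a volumetric MRI input; the patent does not fix these values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResSE(nn.Module):
    """Residual block with a squeeze-and-excitation gate (ResSE block).

    Residual branch: two conv layers + ReLU, so the block output is F(x) + x.
    Squeeze:  z_c = F_sq(u_c), a global average pool over the spatial dims.
    Excite:   s = g(W2 * sigma(W1 * z)), with sigma = ReLU and g = sigmoid.
    Scale:    x~_c = s_c * u_c, then the skip connection is added.
    """

    def __init__(self, channels: int, reduction: int = 4):  # reduction ratio assumed
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(channels, channels // reduction)  # W1
        self.fc2 = nn.Linear(channels // reduction, channels)  # W2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.conv2(F.relu(self.conv1(x)))             # residual branch
        z = u.mean(dim=(2, 3, 4))                         # squeeze -> (N, C)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(z))))  # excitation gate s
        u = u * s.view(s.size(0), s.size(1), 1, 1, 1)     # scale channel c by s_c
        return u + x                                      # skip connection

# Usage on a 32-channel feature volume:
# y = ResSE(channels=32)(torch.randn(1, 32, 64, 64, 64))
```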
The method for adjusting the model parameters through the loss function in S5 is as follows: the network employs a hybrid loss function to reduce the gap between the segmented image and the annotated image: L = α·l_bce + β·l_iou, where l_bce and l_iou denote the binary cross-entropy (BCE) loss function and the intersection-over-union (IOU) loss function, respectively, and α and β are the hyper-parameters weighting each loss. The BCE loss function is l_bce = −Σ_{(a,b)} [GT(a,b)·log(SEG(a,b)) + (1 − GT(a,b))·log(1 − SEG(a,b))], where GT(a,b) is the expert annotation of pixel (a,b) and SEG(a,b) is the predicted probability that the pixel belongs to the segmented lesion region; the IOU loss function is l_iou = 1 − [Σ_{a=1..H} Σ_{b=1..W} GT(a,b)·SEG(a,b)] / [Σ_{a=1..H} Σ_{b=1..W} (GT(a,b) + SEG(a,b) − GT(a,b)·SEG(a,b))], where H and W are the height and width of the input image, respectively. A minimal sketch of this hybrid loss is given below.
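A minimal sketch of the hybrid loss, assuming SEG holds per-pixel probabilities in (0, 1) and GT the binary expert annotation; the weights alpha = beta = 0.5 and the eps guard are illustrative choices, not values given by the patent:

```python
import torch

def hybrid_loss(seg: torch.Tensor, gt: torch.Tensor,
                alpha: float = 0.5, beta: float = 0.5,
                eps: float = 1e-7) -> torch.Tensor:
    """L = alpha * l_bce + beta * l_iou between prediction SEG and annotation GT."""
    seg = seg.clamp(eps, 1.0 - eps)  # numerical guard for the logarithms
    # Binary cross-entropy averaged over all pixels (a, b).
    l_bce = -(gt * torch.log(seg) + (1 - gt) * torch.log(1 - seg)).mean()
    # Soft IOU loss: 1 - intersection / union.
    inter = (gt * seg).sum()
    union = (gt + seg - gt * seg).sum()
    l_iou = 1 - inter / (union + eps)
    return alpha * l_bce + beta * l_iou
```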
Compared with the prior art, the invention has the following beneficial effects:
The invention inputs the preprocessed dataset into the constructed U-Net variant network for training, saves the model once the model loss has stabilized, evaluates model performance through 5-fold cross-validation, and can make full use of full-scale information to segment MRI medical images.
Drawings
FIG. 1 is a flow chart of the main steps of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An MRI medical image segmentation method based on SA-Net, as shown in figure 1, comprises the following steps:
S1, BraTS 2020 data acquisition: the dataset provided by BraTS 2020 collects MRI scans from 19 institutions, acquired with different protocols, magnetic field strengths and scanner manufacturers. For each patient, native T1-weighted imaging, contrast-enhanced T1-weighted imaging, T2-weighted imaging and fluid-attenuated inversion recovery (FLAIR) imaging were acquired. These images were rigidly registered, skull-stripped and resampled to an isotropic resolution of 1×1×1 mm, with an image size of 240×240×155. Three tumor sub-regions are annotated: the enhancing tumor, the peritumoral edema, and the necrotic and non-enhancing tumor core.
S2, data marking: manual annotation was performed by 1-4 raters according to the same annotation protocol and eventually received approval from an experienced neurologist.
S3, data preprocessing: preprocessing the MRI image;
S4, training a segmentation model: segmenting the MRI medical image using a variant of the U-Net model;
S5, adjusting the model parameters through a loss function to obtain an optimal model and completing the construction of the segmentation model;
S6, after the loss function of the model no longer decreases, saving the model.
The data preprocessing method in S3 comprises the following steps:
S3.1, normalizing each modality independently by subtracting the mean and dividing by the standard deviation of the whole image: x′ = (x − μ)/σ, where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flipping the input along the left/right, up/down and front/back directions with probability 0.5 for data enhancement, or randomly selecting a factor to adjust the contrast of each image input channel, thereby obtaining MRI medical images with different contrasts;
S3.3, since the acquired image size is 240×240×155, the input image needs to be resized to a size suitable for the model before training on the dataset to obtain the best training result. Too large an input size easily causes model overfitting and gradient explosion, while too small an input size makes it difficult for the model to extract feature information, leading to low segmentation accuracy;
S3.4, evaluating the performance of the model on the training dataset with 5-fold cross-validation, and meanwhile adjusting the model parameters during training to find the parameter values that make the model optimal. In 5-fold cross-validation the data are divided into 5 parts; in each round one part is held out for testing and the remaining parts are used for training, for 5 rounds in total, so that the performance of the segmentation model is evaluated as fully as possible. A minimal sketch of these preprocessing and validation steps follows.
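As referenced above, a minimal sketch of steps S3.1, S3.2 and S3.4 in NumPy and scikit-learn, assuming each case is a multi-channel volume of shape (C, D, H, W); the contrast-factor range, the eps guard and the case count are illustrative assumptions, and S3.3 (resizing to the model input size) is omitted:

```python
import numpy as np
from sklearn.model_selection import KFold

def normalize(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """S3.1: z-score normalization per modality, x' = (x - mu) / sigma."""
    return (volume - volume.mean()) / (volume.std() + eps)

def augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """S3.2: random flips along the three spatial axes (p = 0.5 each) and a
    random per-channel contrast factor. volume has shape (C, D, H, W)."""
    for axis in (1, 2, 3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    factors = rng.uniform(0.9, 1.1, size=(volume.shape[0], 1, 1, 1))  # range assumed
    return volume * factors

# S3.4: 5-fold cross-validation -- split into 5 parts, test on one part,
# train on the rest, for 5 rounds in total.
rng = np.random.default_rng(0)
case_ids = np.arange(100)  # placeholder number of training cases
for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(case_ids)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```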
Further, the method for training the segmentation model in S4 comprises the following steps: SA-Net combines the encoder block outputs of different scales into a scale attention block and learns to select features carrying full-scale information. The scale attention block is built on ResNet modules, each composed of two convolution layers and a ReLU activation layer; the skip connections increase the depth and width of the model so that more complex feature information can be extracted: F(x) = H(x) − x, where H(x) is the output of the residual network and F(x) is the output of the convolution operations. A squeeze-and-excitation module is added to each residual block to form a ResSE block, and through these modules the feature-map dimensions are progressively halved while the feature width is doubled. The squeeze module is z_c = F_sq(u_c) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} u_c(i, j), where H and W are the height and width of the input image, respectively, and u_c is the output of the c-th convolution kernel; the excitation module is s = F_ex(z, W) = g(W_2·σ(W_1·z)), where σ is the ReLU activation function, g is the sigmoid activation function, and W_1 and W_2 are the weight matrices of the two fully connected layers. After the gating unit s is obtained, the final output is x̃_c = F_scale(u_c, s_c) = s_c·u_c, where x̃_c is the feature map of the c-th feature channel and s_c is a scalar value in the gating unit s. Deep supervision is introduced at each intermediate scale layer of the decoding path: each deep-supervision subnet reduces the feature width with a 1×1 convolution, then uses a trilinear upsampling layer to give it the same spatial dimensions as the output, and finally applies a Sigmoid function to obtain the dense prediction, where the Sigmoid function is S(x) = 1/(1 + e^(−x)). The scale attention block proposed for the decoding stage consists of full-scale skip connections from the encoding path to the decoding path: each decoding layer receives the output feature maps of all encoding layers, so it captures fine-grained detail and coarse-grained semantic information at full scale simultaneously. The inputs from the different scales of the encoding path are mapped and converted into feature maps of the same dimension and fused at full scale with the upsampled feature maps of the decoding stage, yielding feature maps with richer semantic information. Each feature map in the decoding stage is therefore obtained by fusing the feature outputs of all encoder layers with the upsampled feature output of the next layer, combining low-level detail information and high-level semantic information in a unified framework and solving the problem that scale-based feature fusion cannot make full use of full-scale information. A sketch of the full-scale fusion step and the deep-supervision head follows.
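The following sketch illustrates the decoder-side ideas just described: a full-scale fusion step that resamples every encoder output to a common scale and dimension before fusing it with the upsampled decoder features, and a deep-supervision head (1×1×1 convolution, trilinear upsampling, sigmoid). The channel counts and the concatenate-then-convolve fusion are assumptions; the patent only specifies that all encoder scales contribute to each decoder layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullScaleFusion(nn.Module):
    """Fuse the outputs of all encoder layers with the upsampled decoder
    features at one target scale (a full-scale skip connection)."""

    def __init__(self, encoder_channels: list[int], decoder_channels: int,
                 out_channels: int):
        super().__init__()
        # Project every encoder output to a common feature dimension.
        self.projs = nn.ModuleList(
            nn.Conv3d(c, out_channels, kernel_size=1) for c in encoder_channels)
        self.fuse = nn.Conv3d(
            out_channels * len(encoder_channels) + decoder_channels,
            out_channels, kernel_size=3, padding=1)

    def forward(self, encoder_feats: list[torch.Tensor],
                decoder_feat: torch.Tensor) -> torch.Tensor:
        size = decoder_feat.shape[2:]  # target spatial size of this decoder layer
        resampled = [F.interpolate(p(f), size=size, mode="trilinear",
                                   align_corners=False)
                     for p, f in zip(self.projs, encoder_feats)]
        return F.relu(self.fuse(torch.cat(resampled + [decoder_feat], dim=1)))

class DeepSupervisionHead(nn.Module):
    """1x1x1 conv to reduce the feature width, trilinear upsampling to the
    output resolution, then a sigmoid for the dense prediction."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor, out_size) -> torch.Tensor:
        x = F.interpolate(self.proj(x), size=out_size, mode="trilinear",
                          align_corners=False)
        return torch.sigmoid(x)  # S(x) = 1 / (1 + e^-x)
```

Concatenation followed by a 3×3×3 convolution is one common fusion choice; an attention-weighted sum over scales would be equally consistent with the description.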
Further, the method for adjusting the model parameters through the loss function in S5 is as follows: the network employs a hybrid loss function to reduce the gap between the segmented image and the annotated image: L = α·l_bce + β·l_iou, where l_bce and l_iou denote the binary cross-entropy (BCE) loss function and the intersection-over-union (IOU) loss function, respectively, and α and β are the hyper-parameters weighting each loss. The BCE loss function is l_bce = −Σ_{(a,b)} [GT(a,b)·log(SEG(a,b)) + (1 − GT(a,b))·log(1 − SEG(a,b))], where GT(a,b) is the expert annotation of pixel (a,b) and SEG(a,b) is the predicted probability that the pixel belongs to the segmented lesion region; the IOU loss function is l_iou = 1 − [Σ_{a=1..H} Σ_{b=1..W} GT(a,b)·SEG(a,b)] / [Σ_{a=1..H} Σ_{b=1..W} (GT(a,b) + SEG(a,b) − GT(a,b)·SEG(a,b))], where H and W are the height and width of the input image, respectively. A sketch of the corresponding training loop, covering S5 and S6, is given below.
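Finally, a sketch of steps S5 and S6: adjust the model parameters by minimizing the hybrid loss, and save the model once the loss no longer decreases. The optimizer choice, learning rate and patience are assumptions; `model`, the data loaders, and the `hybrid_loss` function from the earlier sketch stand in for the components defined above.

```python
import torch

def train(model, train_loader, val_loader, epochs=100, lr=1e-4, patience=5):
    """S5/S6: tune parameters via the hybrid loss; stop and keep the saved
    model once the validation loss has not decreased for `patience` epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer choice assumed
    best, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, gt in train_loader:
            opt.zero_grad()
            loss = hybrid_loss(model(x), gt)  # hybrid BCE + IOU loss from above
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(hybrid_loss(model(x), gt).item()
                      for x, gt in val_loader) / len(val_loader)
        if val < best:                 # loss still decreasing: keep training
            best, stale = val, 0
            torch.save(model.state_dict(), "sa_net_best.pt")
        else:                          # loss no longer decreasing
            stale += 1
            if stale >= patience:
                break
```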
The preferred embodiments of the present invention have been described in detail, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention, and the various changes are included in the scope of the present invention.

Claims (3)

1. An MRI medical image segmentation method based on SA-Net, characterized in that it comprises the following steps:
S1, BraTS 2020 data acquisition: acquiring the native T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) datasets provided by BraTS 2020;
S2, data marking: manually annotating the dataset according to the same annotation protocol;
S3, data preprocessing: preprocessing the MRI image;
S4, training a segmentation model: segmenting the MRI medical image using a variant of the U-Net model;
the method for training the segmentation model in S4 comprises the following steps: SA-Net combines the encoder block outputs of different scales into a scale attention block and learns to select features carrying full-scale information. The scale attention block is built on ResNet modules, each composed of two convolution layers and a ReLU activation layer; the skip connections increase the depth and width of the model so that more complex feature information can be extracted: F(x) = H(x) − x, where H(x) is the output of the residual network and F(x) is the output of the convolution operations. A squeeze-and-excitation module is added to each residual block to form a ResSE block, and through these modules the feature-map dimensions are progressively halved while the feature width is doubled. The squeeze module is z_c = F_sq(u_c) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} u_c(i, j), where H and W are the height and width of the input image, respectively, and u_c is the output of the c-th convolution kernel; the excitation module is s = F_ex(z, W) = g(W_2·σ(W_1·z)), where σ is the ReLU activation function, g is the sigmoid activation function, and W_1 and W_2 are the weight matrices of the two fully connected layers. After the gating unit s is obtained, the final output is x̃_c = F_scale(u_c, s_c) = s_c·u_c, where x̃_c is the feature map of the c-th feature channel and s_c is a scalar value in the gating unit s. Deep supervision is introduced at each intermediate scale layer of the decoding path: each deep-supervision subnet reduces the feature width with a 1×1 convolution, then uses a trilinear upsampling layer to give it the same spatial dimensions as the output, and finally applies a Sigmoid function to obtain the dense prediction, where the Sigmoid function is S(x) = 1/(1 + e^(−x));
S5, adjusting the model parameters through a loss function to obtain an optimal model and completing the construction of the segmentation model;
S6, after the loss function of the model no longer decreases, saving the model.
2. The SA-Net-based MRI medical image segmentation method according to claim 1, characterized in that the data preprocessing method in S3 comprises the following steps:
S3.1, normalizing each modality independently by subtracting the mean and dividing by the standard deviation of the whole image: x′ = (x − μ)/σ, where μ is the mean of all sample data and σ is the standard deviation of all sample data;
S3.2, randomly flipping the input along the left/right, up/down and front/back directions with probability 0.5 for data enhancement, or randomly selecting a factor to adjust the contrast of each image input channel, thereby obtaining MRI medical images with different contrasts;
S3.3, resizing the input image to a size suitable for the model before training on the dataset;
S3.4, evaluating the performance of the model on the training dataset with 5-fold cross-validation, and meanwhile adjusting the model parameters during training to find the parameter values that make the model optimal.
3. The SA-Net-based MRI medical image segmentation method according to claim 1, characterized in that the method for adjusting the model parameters through the loss function in S5 is as follows: the network employs a hybrid loss function to reduce the gap between the segmented image and the annotated image: L = α·l_bce + β·l_iou, where l_bce and l_iou denote the binary cross-entropy (BCE) loss function and the intersection-over-union (IOU) loss function, respectively, and α and β are the hyper-parameters weighting each loss. The BCE loss function is l_bce = −Σ_{(a,b)} [GT(a,b)·log(SEG(a,b)) + (1 − GT(a,b))·log(1 − SEG(a,b))], where GT(a,b) is the expert annotation of pixel (a,b) and SEG(a,b) is the predicted probability that the pixel belongs to the segmented lesion region; the IOU loss function is l_iou = 1 − [Σ_{a=1..H} Σ_{b=1..W} GT(a,b)·SEG(a,b)] / [Σ_{a=1..H} Σ_{b=1..W} (GT(a,b) + SEG(a,b) − GT(a,b)·SEG(a,b))], where H and W are the height and width of the input image, respectively.
CN202011624371.9A 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method Active CN112950639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011624371.9A CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011624371.9A CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Publications (2)

Publication Number Publication Date
CN112950639A CN112950639A (en) 2021-06-11
CN112950639B (en) 2024-05-10

Family

ID=76235037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011624371.9A Active CN112950639B (en) 2020-12-31 2020-12-31 SA-Net-based MRI medical image segmentation method

Country Status (1)

Country Link
CN (1) CN112950639B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077471B (en) * 2021-03-26 2022-10-14 南京邮电大学 Medical image segmentation method based on U-shaped network
US11580646B2 (en) 2021-03-26 2023-02-14 Nanjing University Of Posts And Telecommunications Medical image segmentation method based on U-Net
CN113344914B (en) * 2021-07-09 2023-04-07 重庆医科大学附属第一医院 Method and device for intelligently analyzing PPD skin test result based on image recognition
CN114882047A (en) * 2022-04-19 2022-08-09 厦门大学 Medical image segmentation method and system based on semi-supervision and Transformers

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086807A (en) * 2018-07-16 2018-12-25 哈尔滨工程大学 Semi-supervised optical-flow learning method based on a stacked dilated-convolution network
CN109919948A (en) * 2019-02-26 2019-06-21 华南理工大学 Deep-learning-based nasopharyngeal carcinoma lesion segmentation model training method and segmentation method
CN110930397A (en) * 2019-12-06 2020-03-27 陕西师范大学 Magnetic resonance image segmentation method and device, terminal equipment and storage medium
CN111612754A (en) * 2020-05-15 2020-09-01 复旦大学附属华山医院 MRI tumor optimization segmentation method and system based on multi-modal image fusion
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
WO2020190821A1 (en) * 2019-03-15 2020-09-24 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography
WO2020192471A1 (en) * 2019-03-26 2020-10-01 腾讯科技(深圳)有限公司 Image classification model training method, and image processing method and device
CN111915592A (en) * 2020-08-04 2020-11-10 西安电子科技大学 Remote sensing image cloud detection method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA3041140C (en) * 2018-04-26 2021-12-14 NeuralSeg Ltd. Systems and methods for segmenting an image

Also Published As

Publication number Publication date
CN112950639A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN112950639B (en) SA-Net-based MRI medical image segmentation method
CN111784671B (en) Pathological image focus region detection method based on multi-scale deep learning
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN112017198B (en) Right ventricle segmentation method and device based on self-attention mechanism multi-scale features
CN110717907A (en) Intelligent hand tumor detection method based on deep learning
CN109754007A (en) Intelligent capsule detection and early-warning method and system for prostate surgery
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112862805B (en) Automatic auditory neuroma image segmentation method and system
Wu et al. BA‐GCA net: boundary‐aware grid contextual attention net in osteosarcoma MRI image segmentation
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment
Molahasani Majdabadi et al. Capsule GAN for prostate MRI super-resolution
CN115471470A (en) Esophageal cancer CT image segmentation method
CN117392389A (en) MT-SASS network-based kidney cancer MRI image segmentation classification method
Yin et al. Super resolution reconstruction of CT images based on multi-scale attention mechanism
CN114119558B (en) Method for automatically generating nasopharyngeal carcinoma image diagnosis structured report
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
CN115294023A (en) Liver tumor automatic segmentation method and device
Cifci et al. Deep learning algorithms for diagnosis of breast cancer with maximum likelihood estimation
CN114463320A (en) Magnetic resonance imaging brain glioma IDH gene prediction method and system
CN116597041B (en) Nuclear magnetic image definition optimization method and system for cerebrovascular diseases and electronic equipment
Shi et al. Diagnostic Segmentation Based on Kidney Medical Image
Godla et al. An Ensemble Learning Approach for Multi-Modal Medical Image Fusion using Deep Convolutional Neural Networks
CN117830327A (en) Viral pneumonia focus segmentation method based on field self-adaption and multi-scale feature fusion
CN114974558A (en) Hepatocellular carcinoma auxiliary screening method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant