CN114155195B - Brain tumor segmentation quality evaluation method, device and medium based on deep learning - Google Patents


Info

Publication number: CN114155195B
Authority: CN (China)
Prior art keywords: segmentation, model, MRI image, network, brain
Legal status: Active
Application number: CN202111280955.3A
Other languages: Chinese (zh)
Other versions: CN114155195A
Inventors: 廖伟华 (Liao Weihua), 胡蓉 (Hu Rong), 杨利 (Yang Li), 吴静 (Wu Jing), 彭健 (Peng Jian), 孟舒娟 (Meng Shujuan)
Current Assignee: Xiangya Hospital of Central South University
Original Assignee: Xiangya Hospital of Central South University
Priority/filing date: 2021-11-01
Application filed by: Xiangya Hospital of Central South University
Publication of CN114155195A: 2022-03-08
Application granted; publication of CN114155195B: 2023-04-07

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/11 Region-based segmentation
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30016 Brain
    • G06T2207/30096 Tumor; Lesion
    • G06T2207/30168 Image quality inspection
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses a deep learning-based brain tumor segmentation quality evaluation method, device and medium. The method comprises the following steps: collecting and preprocessing MRI images, and manually segmenting the lesions to obtain a segmentation gold standard for each MRI image; training a segmentation model with the MRI images and their corresponding segmentation gold standards; training a segmentation error prediction model that takes the probability output of the segmentation model on an MRI image as input and the difference between the binarized segmentation result and the segmentation gold standard as output; training a segmentation quality evaluation network that takes as input the channel-wise concatenation of the MRI image, the binarized segmentation result, and the uncertainty prediction map produced by the segmentation error prediction model, and as output the Dice similarity coefficient; and finally, using the trained models to obtain a brain tumor segmentation result, a pixel-level segmentation error prediction map, and an image-level quality prediction, providing a reference for segmentation quality evaluation.

Description

Brain tumor segmentation quality evaluation method, device and medium based on deep learning
Technical Field
The invention belongs to the technical field of medical information, and particularly relates to a brain tumor segmentation quality assessment method, device and medium based on deep learning.
Background
In recent years, deep learning has made great progress in medical image analysis tasks such as disease diagnosis and lesion segmentation. However, owing to factors such as data noise, sensor noise and suboptimal hyperparameter settings, the predictions of a deep learning model are not always reliable. Lesion segmentation is usually an intermediate step of the overall analysis pipeline, so inaccurate segmentation results bias all subsequent analysis and therefore the final result; this matters particularly for brain tumor segmentation, where accurate results are essential for the diagnosis, monitoring and treatment of disease. At present, the quality of brain tumor segmentation results produced by deep learning cannot be quantitatively evaluated, and this uncertainty makes it difficult to translate deep learning segmentation into clinical practice. It is therefore necessary to evaluate the quality of brain tumor segmentation results and, based on that evaluation, decide whether manual intervention is needed, so as to achieve fully automated computer-aided diagnosis.
Disclosure of Invention
The invention provides a brain tumor segmentation quality evaluation method, device and medium based on deep learning, which can perform region segmentation on a brain MRI image, quantify the uncertainty of the segmentation, avoid errors caused by inaccurate model segmentation, and provide quality assurance for the subsequent analysis process.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
a brain tumor segmentation quality assessment method based on deep learning comprises the following steps:
step 1, collecting brain MRI images of a brain tumor patient, and preprocessing the brain MRI images;
step 2, segmenting all focus in the brain MRI image, including three areas of tumor, peripheral edema and necrosis, to obtain an artificial segmentation gold standard;
step 3, taking the preprocessed brain MRI image as input and the corresponding artificial segmentation gold standard as output, and training a deep learning model to obtain a focus segmentation model;
step 4, inputting the brain MRI image after pretreatment into a lesion segmentation model, and outputting to obtain a lesion segmentation probability map of the brain MRI image;
performing binarization processing on the focus segmentation probability map to obtain a binarization result map of focus segmentation;
step 5, taking a focus segmentation probability map of the brain MRI image as input, taking a difference value between a corresponding artificial segmentation gold standard and a binarization result map as output, and training a deep learning network to obtain a segmentation error prediction model;
step 6, inputting a focus segmentation probability map of the brain MRI image into a segmentation error prediction model, and outputting to obtain a segmentation uncertainty prediction map;
splicing the preprocessed brain MRI image, the focus segmentation binarization result graph obtained in the step 4 and the segmentation uncertainty prediction graph obtained in the step 6 in the channel direction to obtain a spliced graph;
calculating a dice similarity coefficient between the manual segmentation gold standard and the binarization result graph;
step 7, taking the brain MRI image splicing image obtained correspondingly in the step 6 as input, taking the dice similarity coefficient as output, and training a deep learning model to obtain a segmentation quality evaluation network;
and step 8, preprocessing the newly obtained brain tumor MRI image according to the step 1, obtaining a focus segmentation probability map and a focus segmentation binary result map according to the step 4 by using a focus segmentation model, obtaining a segmentation uncertainty prediction map according to the step 6 by using a segmentation error prediction model, splicing the newly obtained brain tumor MRI image, the focus segmentation binary result map and the segmentation uncertainty prediction map in the channel direction, and finally evaluating the spliced map by using a segmentation quality evaluation network to obtain a dice correlation coefficient between the brain tumor MRI image and an artificial gold standard, namely the segmentation quality score of the focus segmentation model on the brain tumor MRI image.
In a preferred embodiment, the preprocessing in step 1 includes image enhancement and data augmentation.
In a preferred embodiment, the lesion segmentation model uses a U-net network, and the loss function used in step 3 to train the lesion segmentation model is the categorical cross entropy loss.
In a preferred embodiment, the segmentation error prediction model uses a U-net network, and the loss function used in step 5 to train the segmentation error prediction model is the categorical cross entropy loss.
In a preferred embodiment, the segmentation quality evaluation network uses a 3D VGG network, and the loss function used in step 7 to train the segmentation quality evaluation network is the mean square error loss.
In a preferred embodiment, in the process of training the segmentation model and the segmentation error prediction model in steps 3 to 5, the segmentation model serves as a generator network and the segmentation error prediction model as a discriminator network, and the two are trained simultaneously in an adversarial (generator-discriminator) manner.
In a preferred embodiment, the brain MRI images comprise the four modalities T1, T2, T1CE and Flair; the segmentation network takes a single slice with the T1, T2, T1CE and Flair data stacked as 4 channels, exploiting the complementary information of the multi-modal image data to improve segmentation accuracy.
An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement any of the above deep learning-based brain tumor segmentation quality assessment methods.
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing any of the above deep learning-based brain tumor segmentation quality assessment methods.
Advantageous effects
The invention uses artificial intelligence methods to extract deep convolutional features from brain MRI images, realizing automatic segmentation of brain tumors and intelligent evaluation of segmentation quality. The model provides high-quality segmentation results for subsequent automated auxiliary diagnosis, together with a pixel-level segmentation uncertainty map and a per-image predicted Dice coefficient. In large-scale brain tumor segmentation tasks, cases with inaccurate automatic segmentation can be identified quickly by thresholding the model-predicted Dice coefficient, uncertain regions can be located through each image's uncertainty map, and it can then be decided whether the segmentation result needs manual correction, ensuring that the subsequent analysis proceeds with high quality.
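The threshold-based screening described above can be sketched in a few lines. The case names, scores, and the 0.85 cutoff below are all made-up illustrative values, not figures from the patent:

```python
# Flag cases whose predicted Dice falls below a chosen threshold so they can
# be routed to manual review (all names and numbers here are hypothetical).
predicted_dice = {"case_001": 0.93, "case_002": 0.61, "case_003": 0.88}
THRESHOLD = 0.85

# Cases below the threshold are candidates for manual correction
needs_review = sorted(k for k, v in predicted_dice.items() if v < THRESHOLD)
```

In practice the threshold would be tuned on a validation set against the cost of manual review.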
Drawings
Fig. 1 is a technical roadmap of a method described in an embodiment of the present application.
Detailed Description
The following describes embodiments of the present invention in detail. They are developed on the basis of the technical solutions of the invention, and give detailed implementations and specific operating procedures to further explain those solutions.
Example 1
The embodiment provides a brain tumor segmentation quality assessment method based on deep learning, as shown in fig. 1, including the following steps:
step 1, collecting brain MRI images of brain tumor patients and preprocessing the brain MRI images.
The brain MRI images collected in this example include data from the four modalities T1CE, T1, T2 and Flair. Data were collected from Xiangya Hospital of Central South University, Hunan Children's Hospital, the Children's Hospital of Philadelphia, Rhode Island Hospital (affiliated with Brown University), and the Hospital of the University of Pennsylvania. The tumor categories include glioma, meningioma, pituitary tumor, brain metastasis, lymphoma, craniopharyngioma, ependymoma, medulloblastoma, oligodendroglioma, astrocytoma, schwannoma, atypical teratoid/rhabdoid tumor, primitive neuroectodermal tumor, dysembryoplastic neuroepithelial tumor, and other brain tumors, 5543 cases in total.
Because the acquisition environment and scanning equipment cannot be guaranteed to be consistent, differences exist among the collected MRI images, so the images need to be preprocessed, including image enhancement and data augmentation, to reduce the influence of these differences on subsequent model training. Since labeling medical image data, especially three-dimensional image data, consumes a large amount of expert effort, appropriate data augmentation increases data diversity and improves model robustness.
In MR scanning, the bias field causes inhomogeneity of the magnetic field strength, so MR intensity values vary within images obtained from the same scanner, the same patient, or even the same tissue. This embodiment uses the N4BiasFieldCorrection function of the Python package SimpleITK to address this problem.
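The actual correction is done with SimpleITK's N4 implementation; as a rough illustration of the underlying idea only (dividing out a smooth multiplicative field estimated in the log domain; this simplified polynomial fit is NOT the N4 algorithm, and the synthetic image and its parameters are invented for the demo):

```python
import numpy as np

# Synthetic "MR slice": uniform tissue times a smooth multiplicative bias field
true_tissue = np.full((64, 64), 200.0)
yy, xx = np.mgrid[0:64, 0:64] / 63.0
bias = np.exp(0.4 * xx + 0.2 * yy)        # slow intensity drift across the image
observed = true_tissue * bias

# Fit the log-intensities with a plane a + b*x + c*y, then divide the
# (mean-centered) estimated bias out of the observed image
log_img = np.log(observed)
A = np.stack([np.ones(xx.size), xx.ravel(), yy.ravel()], axis=1)
coef, *_ = np.linalg.lstsq(A, log_img.ravel(), rcond=None)
est_log_bias = (A @ coef).reshape(64, 64)
corrected = observed / np.exp(est_log_bias - est_log_bias.mean())
# After correction, intensity variation across the uniform tissue collapses
```

With SimpleITK, the equivalent step would call `N4BiasFieldCorrectionImageFilter` on the loaded image and a brain mask.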
Since an MR scan takes a long time, different slices in a sequence are likely to be offset from each other. To let the model use information across dimensions more effectively, each sequence is registered to its first slice using SimpleITK, and image-level registration is also performed across all modality data.
Image intensities are normalized with the Nyul algorithm so that, within the same body region, a given intensity value corresponds to the same tissue across different slices and patients. Each image is then standardized with the z-score method, so the data follow a standard normal distribution with mean 0 and standard deviation 1.
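The z-score step is straightforward; a minimal sketch (the synthetic slice and its statistics are illustrative, not real data):

```python
import numpy as np

def zscore_normalize(image: np.ndarray) -> np.ndarray:
    """Standardize an image so its intensities have mean 0 and std 1."""
    return (image - image.mean()) / image.std()

# Hypothetical slice with arbitrary intensity statistics
rng = np.random.default_rng(0)
slice_ = rng.normal(loc=300.0, scale=40.0, size=(128, 128))
normalized = zscore_normalize(slice_)
```

In a full pipeline this would run after Nyul histogram matching, per image (or per brain mask), not over the whole dataset at once.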
Step 2: all lesions in the brain MRI T1 images, covering the three regions of tumor, peritumoral edema and necrosis, are segmented to obtain the manual segmentation gold standard. The segmentation procedure is as follows: import the collected image data into 3D Slicer, delineate the brain tumor boundaries, and finally export both the original image data and the segmented images in the ".nii" format.
And 3, taking the preprocessed brain MRI image as input and the corresponding artificial segmentation gold standard as output, and training a deep learning model to obtain a focus segmentation model.
In this embodiment, the four modalities T1CE, T1, T2 and Flair of each brain MRI image are input into the deep learning model for lesion segmentation by stacking the single-slice multi-modal images along the channel dimension, using the complementary information of the multi-modal image data to improve segmentation accuracy.
Step 4: the preprocessed brain MRI image is input into the lesion segmentation model, which outputs a lesion segmentation probability map of the image; binarizing this probability map yields the binarized lesion segmentation result map.
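The binarization step amounts to taking the most probable class per pixel and one-hot encoding it. A small sketch with an invented softmax output (shapes and class count chosen to match the three lesion regions, but the values are random):

```python
import numpy as np

# Hypothetical per-class probability map for one slice: shape (H, W, C),
# C = 3 lesion classes, probabilities summing to 1 per pixel (softmax output)
rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 4, 3))
prob = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Binarization: assign each pixel its most probable class, then one-hot it
labels = prob.argmax(axis=-1)                 # (H, W) class indices
binary = np.eye(3, dtype=np.uint8)[labels]    # (H, W, C) one-hot masks
```

The one-hot form is what gets compared against the gold standard and concatenated in later steps.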
Step 5: a deep learning network is trained with the lesion segmentation probability map of the brain MRI image as input and the difference between the corresponding manual segmentation gold standard and the binarized result map as output, yielding the segmentation error prediction model.
Steps 3-5 proceed as follows: after the lesion segmentation model of step 3 has been trained on all training data, the trained model is used to produce the lesion segmentation probability maps and corresponding binarized result maps of step 4, and the training data are then used to train the segmentation error prediction model of step 5.
In the process of training the segmentation model and the segmentation error prediction model in steps 3-5, the segmentation model can serve as the generator network and the segmentation error prediction model as the discriminator network, with the two trained simultaneously in an adversarial manner. The segmentation network is then supervised during training both by the standard cross entropy loss against the manual segmentation gold standard and by the adversarial loss from the discriminator. This embodiment adopts this generator-discriminator training scheme. The programming language used for model training is Python and the framework is Keras; training ran on a Quadro GV100 GPU with 32 GB of memory. During training, images are augmented with random axis flipping, translation, random cropping, and the like.
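The augmentation operations just listed (random axis flip, translation, random crop) can be sketched as below. Shift ranges and the 32×32 crop size are invented for the demo; in training, the identical transform must also be applied to the corresponding label mask:

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(image: np.ndarray) -> np.ndarray:
    """Random axis flip, integer translation, and random crop (illustrative
    parameters; a real pipeline applies the same transform to the mask)."""
    out = image
    if rng.random() < 0.5:
        out = np.flip(out, axis=int(rng.integers(out.ndim)))   # random axis flip
    shift = rng.integers(-4, 5, size=out.ndim)
    out = np.roll(out, shift, axis=tuple(range(out.ndim)))     # translation
    h0 = int(rng.integers(0, out.shape[0] - 31))               # random 32x32 crop
    w0 = int(rng.integers(0, out.shape[1] - 31))
    return out[h0:h0 + 32, w0:w0 + 32]

patch = augment(np.arange(64 * 64, dtype=np.float32).reshape(64, 64))
```

Frameworks such as Keras provide equivalent built-in augmentation layers, but the logic is the same.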
The segmentation model (i.e. the generator network, or segmentor) adopts the U-net structure. Its input is a preprocessed brain MRI slice (T1 together with the corresponding T2, T1CE and Flair), and its training target is the binary segmentation mask, i.e. the manual segmentation gold standard obtained in step 2. The network consists of an encoder and a decoder. In the encoder, each level contains two 3×3 convolutions followed by a 2×2 max pooling layer with stride 2 for downsampling; Batch Normalization is used to help the network converge, and the convolution activation function is ReLU. In the decoder, each level upsamples with a 2×2 transposed convolution with stride 2, concatenates the result with the encoder output of the corresponding level, and then applies two ordinary 3×3 convolutions. Finally, a 1×1 convolution maps the output to one channel per class. The network's loss function is the categorical cross entropy loss plus the discriminator loss.
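Since the embodiment names Keras, a one-level toy version of the encoder/decoder pattern just described can be sketched as follows. Filter counts, the 64×64 input size, and the single down/up level are illustrative simplifications, not the patent's actual architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

def tiny_unet(num_classes: int = 3, channels: int = 4) -> keras.Model:
    """One-level U-Net sketch: two 3x3 convs + BN + ReLU per level, 2x2 max
    pool (stride 2), 2x2 transposed conv (stride 2), skip connection, 1x1
    output conv. Filter counts are illustrative."""
    inp = keras.Input(shape=(64, 64, channels))       # 4 stacked modalities

    def conv_block(x, filters):
        for _ in range(2):                            # two 3x3 convs per level
            x = layers.Conv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.Activation("relu")(x)
        return x

    e1 = conv_block(inp, 16)
    p1 = layers.MaxPooling2D(2, strides=2)(e1)        # downsample
    b = conv_block(p1, 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2)(b)  # upsample
    d1 = conv_block(layers.Concatenate()([u1, e1]), 16)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(d1)
    return keras.Model(inp, out)

model = tiny_unet()
```

The real model would stack several such levels and be trained jointly with the discriminator loss described next.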
Defining the segmentation network as S, a preprocessed input T1-modality slice as X, the manual segmentation gold standard as Y, and the segmentation result as S(X), the loss function of the network can be defined as:

$$\mathcal{L}_S = -\frac{1}{N}\sum_{n=1}^{N}\sum_{P}\sum_{c=1}^{C} Y_P^{c}\,\log S(X)_P^{c} \;+\; \lambda\,\mathcal{L}_D$$

where λ = 0.01, P ranges over the pixels of the input MRI slice X, S(X)_P and Y_P are the generator output and the gold standard at position P, L_D is the discriminator loss function, N is the number of slices, and C = 3 is the number of segmentation classes.
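The combination of a categorical cross entropy term with a λ-weighted adversarial term (λ = 0.01 per the text) can be illustrated numerically. The adversarial term is replaced by a stand-in constant here, since computing it requires the discriminator:

```python
import numpy as np

def categorical_ce(y_true: np.ndarray, y_prob: np.ndarray) -> float:
    """Mean per-pixel categorical cross entropy over a batch of slices."""
    eps = 1e-7
    return float(-(y_true * np.log(y_prob + eps)).sum(axis=-1).mean())

rng = np.random.default_rng(3)
C = 3                                         # number of segmentation classes
y_true = np.eye(C)[rng.integers(0, C, size=(2, 8, 8))]   # one-hot gold standard
logits = rng.normal(size=(2, 8, 8, C))
y_prob = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax

lam = 0.01
loss_d = 0.5                                  # stand-in for the adversarial term
loss_s = categorical_ce(y_true, y_prob) + lam * loss_d
```

A perfect prediction drives the cross entropy term to (near) zero, leaving only the λ-weighted adversarial contribution.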
The segmentation error prediction network (i.e. the discriminator, or Error Map Predictor) also adopts the U-net structure. In the training stage, the output of the segmentation network serves as its input, and the difference between the one-hot encoding of the gold standard and the binarized segmentation result serves as its target output (the segmentation uncertainty map). The loss function used in training is the categorical cross entropy loss:

$$\mathcal{L}_D = -\frac{1}{N}\sum_{n=1}^{N}\sum_{P}\sum_{c=1}^{C} E_P^{c}\,\log D(S(X))_P^{c}$$

where the target error map is

$$E_P = \bigl|\,Y_P - \operatorname{bin}(S(X))_P\,\bigr|,$$

P ranges over the pixels of the input slice X, D(·)_P and S(·)_P are the outputs of the discriminator and the generator at position P, bin(·) denotes binarization, and N is the number of slices.
Step 6, obtaining the training data of the segmentation quality evaluation model: input the preprocessed brain MRI image into the lesion segmentation model to obtain its lesion segmentation probability map; input that probability map into the segmentation error prediction model to obtain the segmentation uncertainty prediction map; concatenate the preprocessed brain MRI image, the binarized lesion segmentation result map, and the segmentation uncertainty prediction map in the channel direction to obtain the concatenated image; and calculate the Dice similarity coefficient between the manual segmentation gold standard and the binarized result map.
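Both operations in this step are simple to state concretely. The two 8×8 masks below are invented so the Dice value can be checked by hand, and the "uncertainty" channel is a toy stand-in for the error predictor's output:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gold = np.zeros((8, 8), dtype=np.uint8); gold[2:6, 2:6] = 1   # 16 pixels
pred = np.zeros((8, 8), dtype=np.uint8); pred[3:7, 2:6] = 1   # 16 pixels, shifted
score = dice(gold, pred)   # overlap = 12 pixels -> 2*12 / (16+16) = 0.75

# Channel-wise concatenation of image-like, mask and uncertainty channels
uncertainty = np.abs(gold.astype(float) - pred.astype(float))  # toy error map
stacked = np.stack([gold.astype(float), pred.astype(float), uncertainty], axis=-1)
```

In the actual pipeline, the first channels would be the preprocessed MRI modalities rather than the gold mask.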
Step 7: the concatenated brain MRI image obtained in step 6 is used as input and the Dice similarity coefficient as output to train a deep learning model, yielding the segmentation quality evaluation network.
The network structure adopted by the segmentation quality evaluation network is 3D VGG. The network contains 5 convolutional blocks with 3×3×3 kernels and two convolution operations per block; each block is followed by a max pooling layer with a 2×2×2 kernel and stride 2. A 1×1×1 convolution then produces the input of the final 3 fully connected layers; the last layer outputs a single neuron with a sigmoid activation. The loss function used by the network is the mean square error:

$$\mathcal{L}_Q = \frac{1}{N}\sum_{i=1}^{N}\bigl(Q(X_i) - d(S(X_i), Y_i)\bigr)^2$$

where Q is the segmentation quality evaluation network, X_i is the i-th input (concatenated) image, Y_i is the segmentation gold standard for the i-th image, d(·,·) is the Dice similarity coefficient, and N is the number of training images.
Step 8: a newly acquired brain tumor MRI image is preprocessed according to step 1; its lesion segmentation probability map and binarized result map are obtained with the lesion segmentation model according to step 4; its segmentation uncertainty prediction map is obtained with the segmentation error prediction model according to step 6; the newly acquired brain tumor MRI image, the binarized lesion segmentation result map, and the segmentation uncertainty prediction map are concatenated in the channel direction; and finally the concatenated image is evaluated with the segmentation quality evaluation network to obtain the predicted Dice similarity coefficient between the segmentation and the manual gold standard, i.e., the segmentation quality score of the lesion segmentation model on that brain tumor MRI image.
With the deep learning-based brain tumor segmentation quality evaluation method above, the following can be obtained from brain MRI images: (1) the brain tumor segmentation result, which automates lesion segmentation and reduces both the time cost of manual segmentation and its subjective variability; (2) the pixel-level segmentation uncertainty map, which highlights regions of high uncertainty in the segmentation result and provides a reference for correcting it through manual intervention; (3) the image-level segmentation quality evaluation result, which offers a quick and simple quality evaluation basis for large-scale segmentation tasks.
Example 2
The present embodiment provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to implement the method of embodiment 1.
Example 3
The present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of embodiment 1.
The above embodiments are preferred embodiments of the present application, and those skilled in the art can make various changes or modifications without departing from the general concept of the present application, and such changes or modifications should fall within the scope of the claims of the present application.

Claims (9)

1. A brain tumor segmentation quality assessment method based on deep learning, characterized by comprising the following steps:
step 1, collecting brain MRI images of brain tumor patients and preprocessing them;
step 2, segmenting all lesions in the brain MRI image, covering the three regions of tumor, peritumoral edema and necrosis, to obtain a manual segmentation gold standard;
step 3, training a deep learning model with the preprocessed brain MRI image as input and the corresponding manual segmentation gold standard as output, obtaining a lesion segmentation model;
step 4, inputting the preprocessed brain MRI image into the lesion segmentation model and outputting a lesion segmentation probability map of the brain MRI image;
binarizing the lesion segmentation probability map to obtain a binarized lesion segmentation result map;
step 5, training a deep learning network with the lesion segmentation probability map of the brain MRI image as input and the difference between the corresponding manual segmentation gold standard and the binarized result map as output, obtaining a segmentation error prediction model;
step 6, inputting the lesion segmentation probability map of the brain MRI image into the segmentation error prediction model and outputting a segmentation uncertainty prediction map;
concatenating the preprocessed brain MRI image, the binarized lesion segmentation result map obtained in step 4, and the segmentation uncertainty prediction map obtained in step 6 in the channel direction to obtain a concatenated image;
calculating the Dice similarity coefficient between the manual segmentation gold standard and the binarized result map;
step 7, training a deep learning model with the concatenated brain MRI image obtained in step 6 as input and the Dice similarity coefficient as output, obtaining a segmentation quality evaluation network;
and step 8, preprocessing a newly acquired brain tumor MRI image according to step 1, obtaining its lesion segmentation probability map and binarized result map with the lesion segmentation model according to step 4, obtaining its segmentation uncertainty prediction map with the segmentation error prediction model according to step 6, concatenating the newly acquired brain tumor MRI image, the binarized lesion segmentation result map, and the segmentation uncertainty prediction map in the channel direction, and finally evaluating the concatenated image with the segmentation quality evaluation network to obtain the predicted Dice similarity coefficient between the segmentation and the manual gold standard, i.e., the segmentation quality score of the lesion segmentation model on the brain tumor MRI image.
2. The method of claim 1, wherein the preprocessing of step 1 comprises image enhancement and data augmentation.
3. The method of claim 1, wherein the lesion segmentation model uses a U-net network, and the loss function used in the step 3 of training the lesion segmentation model is a class cross entropy loss function.
4. The method of claim 1, wherein the segmentation error prediction model uses a U-net network, and the loss function used in the step 5 for training the segmentation error prediction model is a class cross entropy loss function.
5. The method of claim 1, wherein the segmentation quality evaluation network is a 3D VGG network, and the loss function used in the step 7 of training the segmentation quality evaluation network is a mean square error loss function.
6. The method of claim 1, wherein the step 3 to the step 5 are a process of training the segmentation model and the segmentation error prediction model, wherein the segmentation model is used as a generation network, the segmentation error prediction model is used as a discrimination network, and the segmentation model and the segmentation error prediction model are trained simultaneously by adopting a manner of generating the discrimination network.
7. The method according to claim 1, wherein the brain MRI image comprises data of four modalities, T1, T2, T1CE and Flair; the input to the segmentation network is formed by feeding the four modality data corresponding to a single slice image as 4 channels; and in step 2, lesion segmentation is performed on the T1 data of the brain MRI image to obtain the artificial segmentation gold standard.
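The 4-channel input of claim 7 can be formed by stacking the four modality slices along a new channel axis. A minimal sketch, assuming a 240×240 slice size (BraTS-style; the patent does not state the slice dimensions):

```python
import numpy as np

def make_4channel_input(t1, t2, t1ce, flair):
    """Stack the four MRI modalities of one slice into a 4-channel input."""
    # each modality slice: (H, W); output: (4, H, W)
    return np.stack([t1, t2, t1ce, flair], axis=0)

h, w = 240, 240  # illustrative slice size only
t1, t2, t1ce, flair = (np.zeros((h, w), dtype=np.float32) for _ in range(4))
x = make_4channel_input(t1, t2, t1ce, flair)
print(x.shape)  # (4, 240, 240)
```

The resulting `(4, H, W)` array matches the "4-channel manner" described in the claim, with one channel per modality.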
8. An electronic device comprising a memory and a processor, the memory having stored therein a computer program, wherein the computer program, when executed by the processor, causes the processor to carry out the method according to any one of claims 1 to 7.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202111280955.3A 2021-11-01 2021-11-01 Brain tumor segmentation quality evaluation method, device and medium based on deep learning Active CN114155195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111280955.3A CN114155195B (en) 2021-11-01 2021-11-01 Brain tumor segmentation quality evaluation method, device and medium based on deep learning


Publications (2)

Publication Number Publication Date
CN114155195A CN114155195A (en) 2022-03-08
CN114155195B true CN114155195B (en) 2023-04-07

Family

ID=80459016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111280955.3A Active CN114155195B (en) 2021-11-01 2021-11-01 Brain tumor segmentation quality evaluation method, device and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN114155195B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110047068A (en) * 2019-04-19 2019-07-23 山东大学 MRI brain tumor dividing method and system based on pyramid scene analysis network
CN110197493A (en) * 2019-05-24 2019-09-03 清华大学深圳研究生院 Eye fundus image blood vessel segmentation method
CN110675419A (en) * 2019-10-11 2020-01-10 上海海事大学 Multi-modal brain glioma image segmentation method for self-adaptive attention gate
WO2021184817A1 (en) * 2020-03-16 2021-09-23 苏州科技大学 Method for segmenting liver and focus thereof in medical image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018125580A1 (en) * 2016-12-30 2018-07-05 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Reza Karimzadeh et al. Prediction Error Propagation: A Novel Strategy to Enhance Performance of Deep Learning Models in Seminal Segmentation. 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2021, pp. 1-3. *
Rongzhao Zhang et al. A Fine-Grain Error Map Prediction and Segmentation Quality Assessment Framework for Whole-Heart Segmentation. arXiv:1907.12244v1, 2019, pp. 1-9. *
Wan Linxiang. Research on CT Image Segmentation of Bone Tissue Based on Graph-Cut Algorithms. China Master's Theses Full-text Database, Medicine and Health Sciences, 2019, p. E076-7. *
Lyu Nianzu. Research on Deep-Learning-Based Medical Image Segmentation Algorithms. China Master's Theses Full-text Database, Medicine and Health Sciences, p. E080-37. *
Pan Peike et al. Fully Automatic Segmentation of Nasopharyngeal Tumors in MR Images Based on the U-net Model. Journal of Computer Applications, 2019, 39(4): 1183-1188. *
Wang Ping et al. Ischemic Stroke Lesion Segmentation Algorithm Based on 3D Deep Residual Network and Cascaded U-Net. Journal of Computer Applications, 2019, 39(11): 3274-3279. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant