CN111862066A - Brain tumor image segmentation method, device, equipment and medium based on deep learning
- Publication number: CN111862066A
- Application number: CN202010737226.5A
- Authority: CN (China)
- Prior art keywords: segmentation, image, brain tumor, model, brain
- Legal status: Granted
Classifications
- G06T 7/0012: Biomedical image inspection
- G06T 7/11: Region-based segmentation
- G06T 7/136: Segmentation; edge detection involving thresholding
- G06T 7/33: Image registration using feature-based methods
- G06V 10/751: Pattern recognition; template matching (comparing pixel values or logical combinations thereof)
- G06T 2207/10088: Magnetic resonance imaging [MRI]
- G06T 2207/20081: Training; learning
- G06T 2207/20084: Artificial neural networks [ANN]
- G06T 2207/30008: Bone
- G06T 2207/30016: Brain
- G06T 2207/30096: Tumor; lesion
Abstract
The invention, which falls within the technical field of artificial intelligence and relates to blockchain technology, discloses a brain tumor image segmentation method, device, equipment and medium based on deep learning. The method comprises the following steps: acquiring a multi-modal brain nuclear magnetic resonance image; preprocessing the brain nuclear magnetic resonance image to obtain a target image with the skull portion removed; and inputting the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result. The preset brain tumor segmentation model is a deep learning model obtained by cross validation training of a self-adaptive segmentation framework on brain nuclear magnetic resonance images with the skull portion removed, where the framework comprises a plurality of different types of U-Net models and U-Net integrated models. Because the framework automatically selects, according to the cross validation results, the optimal network structure among the candidate models for prediction, the segmentation performance of the preset brain tumor segmentation model is improved, and the accuracy of brain tumor image segmentation is improved accordingly.
Description
Technical Field
The invention relates to the technical field of brain glioma image segmentation, in particular to a brain glioma image segmentation method, device, equipment and medium based on deep learning.
Background
Brain glioma is the most common primary intracranial malignant tumor. According to the degree of malignancy, gliomas are classified into WHO grades I-IV, with malignancy increasing with grade, and gliomas of different grades and with different gene mutations differ in treatment and prognosis. Therefore, accurately segmenting the tumor region and judging the tumor grade before surgical treatment helps guide the choice of treatment scheme and of the surgical resection region, and is of great value for improving the treatment effect and the patient's prognosis.
In the traditional brain glioma segmentation process, a clinician must segment the patient's glioma region manually or semi-automatically with image processing software. The workload is large and time-consuming, and because manual segmentation depends on the individual performing it, inter-rater errors easily occur, so the accuracy of brain glioma segmentation is not high.
With the development of technology in recent years, magnetic resonance imaging (MRI) has become the main imaging examination for various intracranial diseases; MRI images reveal lesions more sensitively and display their characteristics, which benefits disease detection and accurate diagnosis. However, some existing automatic brain tumor image segmentation methods yield unsatisfactory segmentation results on MRI data. For example, segmentation methods based on conventional convolutional neural network models often need different network structures and hyper-parameter tuning strategies to reach the optimal segmentation effect, but the optimal network structure and hyper-parameters are difficult to find during segmentation-model training, so the segmentation model performs poorly, the segmentation results are inaccurate, and subsequent judgment is affected.
Disclosure of Invention
The invention provides a brain tumor image segmentation method, device, equipment and medium based on deep learning, aiming to solve the problem that, in existing brain tumor image segmentation methods, the optimal network structure is difficult to find during segmentation-model training, so the model accuracy is low and brain tumor images are segmented inaccurately.
A brain tumor image segmentation method based on deep learning is characterized by comprising the following steps:
acquiring a multi-modal brain nuclear magnetic resonance image;
preprocessing the brain nuclear magnetic resonance image to obtain a target image with a skull part removed;
and inputting the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, wherein the preset brain glioma segmentation model is a deep learning model obtained by performing cross validation training according to a self-adaptive segmentation frame and a brain nuclear magnetic resonance image with a skull part removed, and the self-adaptive segmentation frame comprises a plurality of different types of U-Net models and U-Net integrated models.
A brain tumor image segmentation apparatus based on deep learning, comprising:
the acquisition module is used for acquiring a multi-modal brain nuclear magnetic resonance image;
the preprocessing module is used for preprocessing the brain nuclear magnetic resonance image to obtain a target image with a skull part removed;
the input module is used for inputting the target image into a preset brain tumor segmentation model so as to obtain a brain tumor image segmentation result, the preset brain glioma segmentation model is a deep learning model obtained by performing cross validation training according to a self-adaptive segmentation framework and a brain nuclear magnetic resonance image with a skull part removed, and the self-adaptive segmentation framework comprises a plurality of different types of U-Net models and U-Net integrated models.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above-mentioned deep-learning brain tumor image segmentation method when executing the computer program.
A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned deep-learning brain tumor image segmentation method.
In the scheme provided by the brain tumor image segmentation method, device, equipment and medium based on deep learning, a multi-modal brain nuclear magnetic resonance image is acquired, the image is preprocessed to obtain a target image with the skull portion removed, and the target image is input into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result. The preset brain glioma segmentation model is a deep learning model obtained by cross validation training of a self-adaptive segmentation framework on brain nuclear magnetic resonance images with the skull portion removed, and the framework comprises a plurality of different types of U-Net models and U-Net integrated models. Because the preset brain tumor segmentation model is obtained by cross validation training over several U-Net network structures, the optimal network structure for prediction can be selected automatically from the candidate models according to the cross validation results. This determines the optimal brain tumor segmentation model, overcomes the difficulty of finding the optimal network structure during segmentation-model training and the resulting low model accuracy, and improves the segmentation performance of the preset model, so brain tumor regions in brain nuclear magnetic resonance images can be segmented automatically, quickly and accurately.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic diagram of an application environment of a brain tumor image segmentation method based on deep learning according to an embodiment of the present invention;
fig. 2 is a flow chart illustrating a brain tumor image segmentation method based on deep learning according to an embodiment of the present invention;
fig. 3 is a schematic diagram illustrating an obtaining process of a predetermined brain tumor segmentation model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a process of determining hyper-parameters during a training process of a predetermined brain tumor segmentation model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an implementation of step S10 in FIG. 3;
FIG. 6 is a flowchart illustrating an implementation of step S30 in FIG. 3;
FIG. 7 is a flowchart illustrating an implementation of step S32 in FIG. 6;
FIG. 8 is a flowchart illustrating an implementation of step S35 in FIG. 6;
fig. 9 is a schematic structural diagram of a brain tumor image segmentation apparatus based on deep learning according to an embodiment of the present invention;
FIG. 10 is a block diagram of a computer device in accordance with an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The brain tumor image segmentation method based on deep learning provided by the embodiment of the invention can be applied to the application environment shown in fig. 1, in which a terminal device communicates with a server through a network. The server obtains a multi-modal brain nuclear magnetic resonance image transmitted by the terminal device, preprocesses it to obtain a target image without the skull portion, inputs the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, and outputs the result to the terminal device for display. The preset brain glioma segmentation model is a deep learning model obtained by cross validation training of a self-adaptive segmentation framework on brain nuclear magnetic resonance images without the skull portion, and the framework comprises a plurality of different types of U-Net models and U-Net integrated models. The terminal device may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer or portable wearable device. The server may be implemented as a stand-alone server or as a server cluster consisting of a plurality of servers.
In an embodiment, as shown in fig. 2, a method for segmenting a brain tumor image based on deep learning is provided, which is described by taking the server in fig. 1 as an example, and includes the following steps:
s1: and acquiring a multi-modal brain nuclear magnetic resonance image.
A multi-modal brain nuclear magnetic resonance (MRI) image transmitted by a client through a terminal device is acquired. The modalities of the MRI image include T1, T2, T1c and Flair, so the multi-modal brain MRI image comprises a T1 brain MRI image, a T2 brain MRI image, a T1c brain MRI image and a Flair brain MRI image.
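For illustration, the sketch below (an assumption about file layout and tooling: one NIfTI file per modality, loaded with nibabel, neither of which the patent specifies) stacks the four modalities into one channel-first array:

```python
import numpy as np
import nibabel as nib

def load_multimodal_mri(case_dir: str) -> np.ndarray:
    """Stack T1, T2, T1c and Flair volumes into a (4, D, H, W) array."""
    modalities = ["t1", "t2", "t1c", "flair"]  # hypothetical file naming
    volumes = [
        nib.load(f"{case_dir}/{m}.nii.gz").get_fdata(dtype=np.float32)
        for m in modalities
    ]
    return np.stack(volumes, axis=0)  # channel-first: one channel per modality
```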
S2: the brain nuclear magnetic resonance image is preprocessed to obtain a target image from which the skull portion is removed.
After obtaining the multi-modal brain nuclear magnetic resonance image, the brain nuclear magnetic resonance image is preprocessed to obtain a target image with the skull part removed.
For example, after obtaining the multi-modal brain nuclear magnetic resonance image, the images are first rigidly registered to the T1c nuclear magnetic resonance image, a template matching algorithm is then used to find the brain region in the T1c image, and finally an image threshold algorithm removes the skull portion outside the brain region. This yields the target image with the skull portion removed, reduces the influence of the skull on the MRI image, and improves the accuracy of the subsequent image segmentation by the preset brain tumor segmentation model.
In this embodiment, rigid registration to the T1c image combined with a template matching algorithm and an image threshold algorithm to obtain the skull-stripped target image is only an exemplary illustration.
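A minimal preprocessing sketch with SimpleITK is given below; it is an assumed implementation, not the patent's. The mutual-information metric and optimizer settings are illustrative choices, and a plain intensity threshold stands in for the template-matching step.

```python
import SimpleITK as sitk

def rigid_register_to_t1c(moving: sitk.Image, t1c: sitk.Image) -> sitk.Image:
    """Rigidly align one modality to the T1c reference volume."""
    fixed = sitk.Cast(t1c, sitk.sitkFloat32)
    moving_f = sitk.Cast(moving, sitk.sitkFloat32)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving_f, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving_f)
    # Resample the moving image onto the T1c grid
    return sitk.Resample(moving, t1c, transform, sitk.sitkLinear, 0.0)

def strip_skull(image: sitk.Image, threshold: float) -> sitk.Image:
    """Keep voxels above an intensity threshold (stand-in for skull removal)."""
    mask = sitk.Cast(image > threshold, sitk.sitkUInt8)
    return sitk.Mask(image, mask)
```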
S3: inputting the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, wherein the preset brain glioma segmentation model is a deep learning model obtained by performing cross validation training according to a self-adaptive segmentation frame and a brain nuclear magnetic resonance image with a skull part removed, and the self-adaptive segmentation frame comprises a plurality of different U-Net models and U-Net integrated models.
After the target image with the skull part removed is obtained, the target image obtained through preprocessing is input into a preset brain tumor segmentation model, the output result of the preset brain tumor segmentation model is obtained, and the output result of the preset brain tumor segmentation model is used as the brain tumor image segmentation result, so that a user can conveniently analyze the brain tumor image segmentation result.
The preset brain tumor segmentation model is a deep learning model obtained by performing cross validation training according to a self-adaptive segmentation frame and a brain nuclear magnetic resonance image with a skull part removed, and the self-adaptive segmentation frame comprises a plurality of different U-Net models and U-Net integrated models. During cross validation training according to the self-adaptive segmentation frame and the brain nuclear magnetic resonance image without the skull part, the acquired related data information and the generated preset brain tumor segmentation model are both stored in the block chain network.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer. In this embodiment, the preset brain tumor segmentation model and the related data are stored in the block chain network, which is convenient for fast query of the target model and the data and improves the processing speed.
The preset brain tumor segmentation model, obtained by cross validation training over several U-Net network structures, can automatically select from the self-adaptive segmentation framework the optimal network structure for prediction according to the cross validation results. This determines the optimal brain tumor segmentation model, improves the generalization ability and segmentation performance of the preset model, and thus improves the accuracy of brain tumor segmentation on the target image.
The preset brain tumor segmentation model in this embodiment is used to segment brain glioma nuclear magnetic resonance images so as to determine glioma regions of different degrees in the image. Because the multi-modal brain nuclear magnetic resonance image is segmented by the preset model, artificial intelligence automates the segmentation process: accurate segmentation results are obtained quickly without manual participation, which improves segmentation efficiency.
In this embodiment, a multi-modal brain nuclear magnetic resonance image is acquired, the image is preprocessed to obtain a target image with the skull portion removed, and the target image is input into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result. The preset brain tumor segmentation model is a deep learning model obtained by cross validation training of a self-adaptive segmentation framework on brain nuclear magnetic resonance images with the skull portion removed, and the framework comprises a plurality of different types of U-Net models and U-Net integrated models. Because the model is trained with cross validation over several U-Net network structures, the optimal network structure for prediction is selected automatically according to the cross validation results; this determines the optimal brain tumor segmentation model, overcomes the difficulty of finding the optimal network structure during training, improves the segmentation performance of the preset model, and allows brain tumor regions in brain nuclear magnetic resonance images to be segmented automatically, quickly and accurately.
In one embodiment, before inputting the target image into the preset brain tumor segmentation model, model training is further required to obtain the preset brain tumor segmentation model. As shown in fig. 3, the preset brain tumor segmentation model is obtained as follows:
s10: and acquiring a multi-modal nuclear magnetic resonance brain tumor image, and processing the multi-modal nuclear magnetic resonance brain tumor image to obtain brain tumor image sample data with the skull part removed.
A large number of prior multi-modal nuclear magnetic resonance brain tumor images are acquired and processed to obtain brain tumor image sample data with the skull portion removed. Reducing the influence of the skull on the images improves the precision of the sample data and, in turn, the accuracy of the preset brain tumor model subsequently trained on it.
The nuclear magnetic resonance brain tumor images cover the four modalities T1, T2, T1c and Flair.
S20: and establishing an adaptive segmentation framework comprising a plurality of U-Net structures, wherein the adaptive segmentation framework comprises a 2D U-Net model, a 3D U-Net model and two 3D U-Net integrated models.
The method comprises the steps of establishing a self-adaptive segmentation framework comprising a plurality of U-Net structures, wherein the self-adaptive segmentation framework comprises a 2D U-Net model, a 3D U-Net model and two 3D U-Net integrated models, and combining different U-Net models into the self-adaptive segmentation framework to improve the self-adaptability and the accuracy of a preset brain tumor model trained subsequently.
Among them, the 2D U-Net model, the 3D U-Net model and the two 3D U-Net integrated models are configured, designed and trained independently of one another. The two 3D U-Net integrated models form a cascade of two 3D U-Net models: the first 3D U-Net produces a low-resolution image segmentation result, which the second 3D U-Net then refines further. During training of the two 3D U-Net integrated models, padded convolutions are used so that the output shape matches the input shape, and Leaky ReLU is used as the activation function.
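The sketch below illustrates these ingredients in PyTorch: padded 3x3x3 convolutions keep the output shape equal to the input shape, Leaky ReLU is the activation, and the ensemble cascades a coarse and a fine 3D network. The instance normalization layers and the concatenation of the upsampled coarse prediction are illustrative assumptions, not details fixed by the text.

```python
import torch
import torch.nn as nn

class ConvBlock3d(nn.Module):
    """Two padded 3x3x3 convolutions (output shape == input shape) with Leaky ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),  # padded convolution
            nn.InstanceNorm3d(out_ch),   # normalization layer is an assumption
            nn.LeakyReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class CascadedUNet3d(nn.Module):
    """First network segments a downsampled volume; the second refines it at full
    resolution, taking the upsampled coarse prediction as extra input channels."""
    def __init__(self, coarse: nn.Module, fine: nn.Module):
        super().__init__()
        self.coarse, self.fine = coarse, fine

    def forward(self, x_low, x_full):
        coarse_seg = self.coarse(x_low)
        coarse_up = nn.functional.interpolate(
            coarse_seg, size=x_full.shape[2:], mode="trilinear", align_corners=False)
        return self.fine(torch.cat([x_full, coarse_up], dim=1))

if __name__ == "__main__":
    coarse = nn.Sequential(ConvBlock3d(4, 8), nn.Conv3d(8, 5, kernel_size=1))
    fine = nn.Sequential(ConvBlock3d(4 + 5, 8), nn.Conv3d(8, 5, kernel_size=1))
    net = CascadedUNet3d(coarse, fine)
    out = net(torch.randn(1, 4, 16, 16, 16), torch.randn(1, 4, 32, 32, 32))
    print(out.shape)  # torch.Size([1, 5, 32, 32, 32])
```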
S30: and performing five-fold cross validation training on all U-Net structures in the self-adaptive segmentation framework according to the brain tumor image sample data to obtain a preset brain tumor segmentation model.
After the brain tumor image sample data are acquired and the self-adaptive segmentation framework is established, five-fold cross validation training is performed on all U-Net structures in the framework according to the sample data to obtain the preset brain tumor segmentation model. Performing five-fold cross validation training on the different types of U-Net models allows the optimal network structure for prediction to be selected automatically from the candidate U-Net models according to the cross validation results; this determines the optimal brain tumor segmentation model, overcomes the difficulty of finding the optimal network structure during segmentation-model training, and improves the segmentation performance of the preset model.
In this embodiment, multi-modal nuclear magnetic resonance brain images are acquired and processed to obtain brain image sample data with the skull portion removed; a self-adaptive segmentation framework comprising a plurality of U-Net structures (a 2D U-Net model, a 3D U-Net model and two 3D U-Net integrated models) is established; and five-fold cross validation training is performed on all U-Net structures in the framework according to the sample data to obtain the preset brain tumor segmentation model. Reducing the influence of the skull on the nuclear magnetic resonance brain images improves the precision of the sample data and thus the segmentation performance of the model, and the comparatively accurate five-fold cross validation further ensures the segmentation performance of the preset brain tumor segmentation model and the accuracy of subsequent image segmentation with it.
In one embodiment, during the five-fold cross-validation training, the hyper-parameters of the model training are automatically adjusted according to the shape of the pre-processed training data, and specifically, as shown in fig. 4, the hyper-parameters of the model training are determined as follows:
s301: and determining the video memory consumption range of a central processing unit of the model training equipment.
In the process of performing five-fold cross validation training on all U-Net structures in the self-adaptive segmentation frame according to brain tumor image sample data, determining a video memory consumption range of a Central Processing Unit (CPU) of the model training device, and automatically adjusting hyper-parameters according to the video memory consumption range.
S302: and in the video memory consumption range, adjusting the hyper-parameters of model training according to the shape of the training image, wherein the hyper-parameters comprise the step size, the image block size and the pooling times of each axis.
After the video memory consumption range is determined, the hyper-parameters of the model training are adjusted according to the shape of the training image in the brain tumor image sample data in the video memory consumption range, and the optimal hyper-parameters are determined according to the training results of different hyper-parameters. Wherein the hyper-parameters comprise step size, image block size and pooling times per axis. Where the larger image block size takes precedence over the step size, and the minimum step size is 2.
For example, the video memory consumption range is 12GB TitanXp GPU, and the step size (batch), the image block size, and the pooling (firing) number of each axis are automatically set according to the shape of the training image in the brain tumor image sample data, so that the video memory consumption remains within the specific range of 12GB TitanXp GPU.
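One way such a rule could be realized is sketched below; the memory model and every constant in it are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def estimate_mem_gb(patch, batch):
    # Crude stand-in for a memory model: activation memory scales with voxels
    # per batch; 4 bytes/voxel times an assumed 40x feature blow-up.
    return float(np.prod(patch)) * batch * 4 * 40 / 1024**3

def plan_hyperparameters(median_shape, budget_gb=12.0, min_batch=2):
    patch = np.minimum(np.asarray(median_shape), 128)
    # Pool each axis until its extent would drop to ~8 voxels, so longer axes
    # receive more pooling operations.
    pools = [int(np.log2(max(int(s), 8) / 8)) for s in patch]
    batch = 16
    # A large patch takes precedence over a large batch: shrink the batch
    # first, down to the minimum of 2, and only then shrink the patch.
    while estimate_mem_gb(patch, batch) > budget_gb:
        if batch > min_batch:
            batch //= 2
        elif (patch > 32).any():
            patch = np.maximum(patch - 16, 32)
        else:
            break
    return {"patch_size": tuple(int(p) for p in patch),
            "batch_size": batch,
            "pools_per_axis": pools}

print(plan_hyperparameters((138, 169, 138)))
```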
In this embodiment, during the five-fold cross validation training the video memory budget of the GPU of the model training device is determined, and the hyper-parameters of model training (batch size, image block size and number of pooling operations per axis) are adjusted within that budget according to the shape of the training images. The hyper-parameters are therefore set automatically from the shape of the preprocessed training data without manual tuning, which reduces manual participation, increases the automation of model training, solves the problem that the optimal hyper-parameters are hard to find during training, and improves the adaptivity and hence the precision of the model.
In an embodiment, as shown in fig. 5, in step S10, the processing the multi-modal magnetic resonance brain tumor image to obtain brain tumor image sample data with the skull portion removed includes the following steps:
s11: and registering the multi-modal nuclear magnetic resonance brain tumor image as a T1c nuclear magnetic resonance image by using a rigid body.
After obtaining the multi-modal nuclear magnetic resonance brain tumor image, registering the multi-modal nuclear magnetic resonance brain tumor image as a T1c nuclear magnetic resonance image by adopting a rigid body, so as to convert the nuclear magnetic resonance brain tumor images of other modalities into the nuclear magnetic resonance brain tumor image of the T1c modality, and facilitating the subsequent training of the nuclear magnetic resonance brain tumor image to obtain the preset brain tumor segmentation model.
S12: and removing the skull part in the T1c nuclear magnetic resonance image to obtain a preprocessed nuclear magnetic resonance image.
After the multi-modal nuclear magnetic resonance brain tumor images are rigidly registered to the T1c nuclear magnetic resonance image, the skull portion in the T1c image is removed to obtain the preprocessed nuclear magnetic resonance image, which reduces the interference of the skull with model training and improves the segmentation performance of the model.
S13: receiving marking instructions of different pixel regions in the preprocessed nuclear magnetic resonance image, and carrying out pixel marking on the preprocessed nuclear magnetic resonance image according to the marking instructions, wherein the pixel marking comprises a non-tumor region, necrosis, edema, a non-enhanced tumor and an enhanced tumor.
After the preprocessed nuclear magnetic resonance image is obtained, each pixel region in it needs to be labeled manually so that the regions can be distinguished; that is, labeling instructions for the different pixel regions are received, and pixel labeling is performed on the preprocessed image according to those instructions. The pixel labels in each preprocessed image comprise one or more of: non-tumor region, necrosis, edema, non-enhancing tumor and enhancing tumor. This improves the diversity and accuracy of the brain tumor image sample data.
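For reference, such pixel labels are commonly stored as small integers; the numbering below follows the BraTS convention as an assumption, since the patent does not fix the codes:

```python
# Assumed integer encoding (BraTS-style) of the five pixel labels above.
PIXEL_LABELS = {
    "non-tumor region": 0,
    "necrosis": 1,
    "edema": 2,
    "non-enhancing tumor": 3,
    "enhancing tumor": 4,
}
```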
S14: and taking the preprocessed nuclear magnetic resonance image after the pixel labeling as brain tumor image sample data.
And after the preprocessed nuclear magnetic resonance images are labeled, taking all the preprocessed nuclear magnetic resonance images after the pixel labeling is finished as the brain tumor image sample data.
In this embodiment, the multi-modal nuclear magnetic resonance brain image is rigidly registered to the T1c image, the skull portion of the T1c image is removed to obtain the preprocessed nuclear magnetic resonance image, and the pixel-labeled preprocessed images are finally taken as the brain tumor image sample data. This refines the process of obtaining skull-stripped brain tumor image sample data from the multi-modal images, improves the accuracy and diversity of the sample data, and thus improves the accuracy of the preset brain tumor segmentation model.
In an embodiment, as shown in fig. 6, in step S30, performing five-fold cross validation training on all U-Net structures in the adaptive segmentation framework according to the brain tumor image sample data to obtain the preset brain tumor segmentation model, specifically includes the following steps:
s31: and determining an initial training set and an initial testing set in the brain tumor image sample data according to a five-fold cross-validation method.
After the brain tumor image sample data are obtained, an initial training set and an initial test set are determined in the sample data according to the five-fold cross validation method.
Specifically, the brain tumor image sample data are divided into five disjoint subsets; one subset is selected as the initial test set and the remaining four serve as the training set, and this is repeated until each of the five subsets has served as the initial test set.
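A minimal sketch of this split, using scikit-learn's KFold as an assumed tool and a hypothetical case count:

```python
import numpy as np
from sklearn.model_selection import KFold

case_ids = np.arange(100)  # hypothetical number of labeled cases
for fold, (train_idx, test_idx) in enumerate(
        KFold(n_splits=5, shuffle=True, random_state=0).split(case_ids)):
    # Four subsets form the training set; the held-out subset is the
    # initial test set. Over the five folds every subset is held out once.
    train_cases, test_cases = case_ids[train_idx], case_ids[test_idx]
```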
S32: and carrying out normalization processing on the initial training set to obtain a target training set.
After the initial training set is determined, the initial training set is subjected to normalization processing to obtain a target training set, so that various types of data of images in the target training set are unified, model training is performed according to the target training set subsequently, and the influence of image inconsistency on a model training process is reduced.
S33: and performing five-fold cross validation training on all U-Net structures in the self-adaptive segmentation frame according to the target training set to obtain a cross validation result.
After the target training set is obtained, performing five-fold cross validation training on all U-Net structures in the self-adaptive segmentation frame according to the target training set to obtain a cross validation result of model training, namely obtaining a plurality of trained models.
In one embodiment, during model training each epoch comprises 250 batches; the sum of the cross-entropy loss and the Dice loss is used as the loss function; Adam is used as the optimizer with an initial learning rate of 3×10⁻⁴ and an L2 weight decay of 3×10⁻⁵. When the exponential moving average of the training loss has not improved over the last 30 epochs, the learning rate is multiplied by 0.2; training stops, yielding a trained model, when the learning rate falls below a preset value (10⁻⁶) or 1000 training epochs are exceeded.
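This schedule maps onto standard PyTorch components; the sketch below, with a stand-in model and dummy batches, shows one possible wiring. ReduceLROnPlateau with factor 0.2 and patience 30 plays the role of the learning-rate rule, and the EMA coefficient of 0.9 is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_loss(logits, target_onehot, eps=1e-5):
    """Soft Dice loss averaged over classes."""
    probs = torch.softmax(logits, dim=1)
    inter = (probs * target_onehot).sum(dim=(2, 3, 4))
    denom = probs.sum(dim=(2, 3, 4)) + target_onehot.sum(dim=(2, 3, 4))
    return 1.0 - (2.0 * inter / (denom + eps)).mean()

model = nn.Conv3d(4, 5, kernel_size=3, padding=1)  # stand-in for the selected U-Net
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=3e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.2, patience=30)

ema_loss = None
for epoch in range(1000):                       # hard cap of 1000 epochs
    for _ in range(250):                        # 250 batches per epoch
        x = torch.randn(2, 4, 32, 32, 32)       # dummy batch for illustration
        y = torch.randint(0, 5, (2, 32, 32, 32))
        logits = model(x)
        y_onehot = F.one_hot(y, 5).permute(0, 4, 1, 2, 3).float()
        loss = F.cross_entropy(logits, y) + dice_loss(logits, y_onehot)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema_loss = loss.item() if ema_loss is None \
            else 0.9 * ema_loss + 0.1 * loss.item()
    scheduler.step(ema_loss)                    # x0.2 after 30 stagnant epochs
    if optimizer.param_groups[0]["lr"] < 1e-6:  # stop once the rate decays away
        break
```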
S34: and determining the model meeting the preset conditions according to the cross validation result.
After five-fold cross validation training is carried out on all U-Net structures in the self-adaptive segmentation frame according to a target training set, a model meeting a preset condition is automatically selected according to a cross validation result.
For example, after five-fold cross validation training is performed on all U-Net structures in the adaptive segmentation framework according to the target training set, five models are generated, and if the segmentation performance of the third model in the five models is the best, the third model is taken as a model meeting preset conditions, and then the model is tested.
S35: and performing model prediction on the model meeting the preset conditions according to the initial test set to determine the model with the model prediction result meeting the preset result as a preset glioma segmentation model.
After the model meeting the preset condition is determined, model prediction is carried out on the model meeting the preset condition according to the initial test set, and the model prediction result is used as the performance index of the model to reduce the model error. And determining whether the model prediction result meets a preset result, if so, indicating that the performance of the model meeting the preset condition meets the segmentation requirement, and taking the model meeting the preset condition as a preset glioma segmentation model.
In this embodiment, an initial training set and an initial test set are determined in the brain tumor image sample data according to the five-fold cross validation method; the initial training set is normalized to obtain a target training set; five-fold cross validation training is performed on all U-Net structures in the self-adaptive segmentation framework according to the target training set to obtain a cross validation result; a model meeting the preset condition is determined from that result; and model prediction is performed on this model according to the initial test set, the model whose prediction result meets the preset result being taken as the preset brain glioma segmentation model. This refines the five-fold cross validation training of all U-Net structures in the framework on the brain tumor image sample data, provides a basis for obtaining the preset brain tumor segmentation model, and ensures its segmentation performance.
In an embodiment, as shown in fig. 7, in step S32, performing normalization processing on the initial training set to obtain a target training set, specifically includes the following steps:
s321: and carrying out gray value normalization processing on all images in the initial training set to obtain a gray value normalization training set.
For example, after the initial training set is determined, the mean and standard deviation of the gray values of all images in the set are computed, and the gray values of every image are normalized by subtracting the mean and dividing by the standard deviation. The resulting gray value normalization training set keeps the gray values of the training images consistent and reduces the influence of gray value inconsistency on the model training process.
In this embodiment, the normalization of the gray values of all the images by subtracting the average value and dividing by the standard deviation is only an exemplary illustration, and in other embodiments, the gray value normalization processing of all the images may also be performed in other manners, which is not described herein again.
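A minimal sketch of the z-score normalization just described:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Subtract the mean gray value and divide by the standard deviation."""
    volume = volume.astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)  # eps guards std == 0
```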
S322: and carrying out image space normalization processing on all images in the gray scale normalization training set to obtain a target training set.
For example, after the gray value normalization training set is obtained, the voxel spacings of all training images in the set are collected and the median spacing of each axis is computed; the per-axis median spacing is taken as the target spacing, and the training images are resampled to it by third-order spline interpolation to obtain the target training set. This keeps the voxel spacings of the training images consistent and reduces the influence of inconsistent spacings on the model training process.
In this embodiment, taking the per-axis median spacing as the target spacing and resampling by third-order spline interpolation for the spacing normalization is only an exemplary illustration.
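A sketch of the spacing normalization, assuming scipy's cubic-spline zoom for the third-order resampling:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_target_spacing(volume, spacing, target_spacing):
    """Third-order spline resampling of one volume to the target spacing."""
    factors = np.asarray(spacing, dtype=float) / np.asarray(target_spacing, dtype=float)
    return zoom(volume, zoom=factors, order=3)  # order=3: cubic spline

# The target spacing is the per-axis median over the training set, e.g.:
# target_spacing = np.median(np.stack(all_spacings), axis=0)
```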
In this embodiment, gray value normalization is performed on all images in the initial training set to obtain the gray value normalization training set, and spacing normalization is then performed on all images in that set to obtain the target training set. This refines the step of normalizing the initial training set into the target training set, keeps the gray values and voxel spacings of the training images consistent, reduces the influence of their inconsistency on model training, and improves the stability and accuracy of the model.
In an embodiment, as shown in fig. 8, in step S35, performing model prediction on a model meeting a preset condition according to an initial test set, so as to determine a preset brain tumor segmentation model according to a result of the model prediction, specifically, the method includes the following steps:
s351: and performing data enhancement on the test images in the initial test set to obtain a target test set.
After the initial test set is determined, data enhancement is carried out on the test images in the initial test set to obtain a target test set, and the data diversity of the target test set is improved to ensure the accuracy of subsequent model prediction according to the target test set.
In other embodiments, the target test set may also be obtained in other data enhancement modes, which is not described herein again.
S352: and in the model meeting the preset condition, performing model prediction on the test image in the target test set by adopting a sliding window method so as to determine the segmentation performance of the model meeting the preset condition.
After the target test set and the model meeting the preset condition are obtained, each test image in the target test set is divided into several sub-images according to the image block size in the automatically set hyper-parameters, and the model predicts each sub-image with a sliding window so as to determine the segmentation performance of the model. The sliding window makes reasonable use of the test images in the target test set, effectively reduces the number of forward passes, lowers the prediction complexity, and speeds up the prediction process.
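A hedged sketch of the sliding-window prediction (the averaging of overlapping windows and the uniform step are assumptions; the text only states that patches are predicted window by window):

```python
import numpy as np

NUM_CLASSES = 5  # non-tumor, necrosis, edema, non-enhancing, enhancing (assumed)

def sliding_window_predict(predict_patch, volume, patch_size, step):
    """Average overlapping patch predictions over a (C, D, H, W) volume.
    predict_patch is assumed to map a patch to (NUM_CLASSES, pd, ph, pw)
    class probabilities."""
    _, D, H, W = volume.shape
    pd, ph, pw = patch_size
    probs = np.zeros((NUM_CLASSES, D, H, W), dtype=np.float32)
    count = np.zeros((1, D, H, W), dtype=np.float32)
    for z in range(0, max(D - pd, 0) + 1, step):
        for y in range(0, max(H - ph, 0) + 1, step):
            for x in range(0, max(W - pw, 0) + 1, step):
                patch = volume[:, z:z + pd, y:y + ph, x:x + pw]
                probs[:, z:z + pd, y:y + ph, x:x + pw] += predict_patch(patch)
                count[:, z:z + pd, y:y + ph, x:x + pw] += 1.0
    return probs / np.maximum(count, 1.0)  # average where windows overlap
```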
S353: and taking the model with the segmentation performance meeting the requirements as a preset brain tumor segmentation model.
After the segmentation performance of the model meeting the preset conditions is determined, the model with the segmentation performance meeting the requirements is used as a preset brain tumor segmentation model, so that the preset brain tumor segmentation model meets the requirements, and the segmentation performance of the preset brain tumor segmentation model is ensured.
In this embodiment, data enhancement is performed on the test images in the initial test set to obtain the target test set, and the sliding window method is used, within the model meeting the preset condition, to run model prediction on the test images of the target test set and determine that model's segmentation performance. This refines the step of predicting with the model on the initial test set in order to determine the preset brain tumor segmentation model from the prediction results; it improves the data diversity of the target test set, ensures the accuracy of the subsequent model prediction, and thereby ensures the segmentation performance of the preset brain tumor segmentation model.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In an embodiment, a brain tumor image segmentation device based on deep learning is provided, and the brain tumor image segmentation device based on deep learning corresponds to the brain tumor image segmentation method based on deep learning in the embodiment one to one. As shown in fig. 9, the brain tumor image segmentation apparatus based on deep learning includes an acquisition module 901, a preprocessing module 902, and an input module 903. The functional modules are explained in detail as follows:
an obtaining module 901, configured to obtain a multi-modal brain nuclear magnetic resonance image;
a preprocessing module 902, configured to preprocess the brain nuclear magnetic resonance image to obtain a target image with a skull portion removed;
an input module 903, configured to input the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, where the preset brain glioma segmentation model is a deep learning model obtained by performing cross validation training according to a self-adaptive segmentation framework and a brain nuclear magnetic resonance image with a skull portion removed, and the self-adaptive segmentation framework includes multiple different types of U-Net models and U-Net integrated models.
Further, the brain tumor image segmentation device based on deep learning further includes: a training module 904 for training the adaptive segmentation framework to obtain a preset brain tumor segmentation model, the training module 904 being specifically configured to:
acquiring a multi-modal nuclear magnetic resonance brain tumor image, and processing the multi-modal nuclear magnetic resonance brain tumor image to obtain brain tumor image sample data with a skull part removed;
establishing an adaptive segmentation framework comprising a plurality of U-Net structures, wherein the adaptive segmentation framework comprises a 2D U-Net model, a 3D U-Net model and two 3D U-Net integrated models;
and performing five-fold cross validation training on all U-Net structures in the self-adaptive segmentation framework according to the brain tumor image sample data to obtain the preset brain tumor segmentation model.
Further, the training module 904 is specifically further configured to:
determining the video memory consumption budget of the graphics processing unit of the model training device;
and, within that budget, adjusting the hyper-parameters of model training according to the shape of the training images, where the hyper-parameters comprise the batch size, the image block size and the number of pooling operations per axis.
Further, the training module 904 is specifically further configured to:
rigidly registering the multi-modal nuclear magnetic resonance brain tumor image to a T1c nuclear magnetic resonance image;
removing the skull part in the T1c nuclear magnetic resonance image to obtain a preprocessed nuclear magnetic resonance image;
receiving labeling instructions of different pixel regions in the preprocessed nuclear magnetic resonance image, and performing pixel labeling on the preprocessed nuclear magnetic resonance image according to the labeling instructions, wherein the pixel labeling comprises a non-tumor region, necrosis, edema, a non-enhanced tumor and an enhanced tumor;
and taking all the preprocessed nuclear magnetic resonance images after the pixel labeling as the brain tumor image sample data.
Further, the training module 904 is specifically further configured to:
determining an initial training set and an initial testing set in the brain tumor image sample data according to a five-fold cross-validation method;
carrying out normalization processing on the initial training set to obtain a target training set;
performing the five-fold cross validation training on all U-Net structures in the self-adaptive segmentation frame according to the target training set to obtain a cross validation result;
determining a model meeting preset conditions according to the cross validation result;
and performing model prediction on the model meeting the preset conditions according to the initial test set, so that the model whose prediction result meets the preset result is determined as the preset brain glioma segmentation model.
Further, the training module 904 is specifically further configured to:
performing gray value normalization processing on all images in the initial training set to obtain a gray value normalization training set;
and performing image spacing normalization on all images in the gray value normalization training set to obtain the target training set.
Further, the training module 904 is specifically further configured to:
performing data enhancement on the test images in the initial test set to obtain a target test set;
in the model meeting the preset condition, performing model prediction on the test image in the target test set by adopting a sliding window method so as to determine the segmentation performance of the model meeting the preset condition;
and taking the model with the segmentation performance meeting the requirements as the preset brain tumor segmentation model.
For specific limitations of the deep-learning brain tumor image segmentation apparatus, reference may be made to the above limitations of the deep-learning brain tumor image segmentation method, which will not be described herein again. The modules in the above-mentioned deep-learning brain tumor image segmentation apparatus may be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in fig. 10. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data used or generated during model training. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the above deep learning-based brain tumor image segmentation method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the steps of the above deep learning-based brain tumor image segmentation method are implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the above deep learning-based brain tumor image segmentation method are implemented.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the functional units and modules described above is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.
Claims (10)
1. A brain tumor image segmentation method based on deep learning is characterized by comprising the following steps:
acquiring a multi-modal brain nuclear magnetic resonance image;
preprocessing the brain nuclear magnetic resonance image to obtain a target image with the skull portion removed;
and inputting the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, wherein the preset brain tumor segmentation model is a deep learning model obtained by performing cross-validation training according to an adaptive segmentation framework and brain nuclear magnetic resonance images with the skull portion removed, and the adaptive segmentation framework comprises a plurality of different types of U-Net models and U-Net ensemble models.
2. The deep learning-based brain tumor image segmentation method according to claim 1, wherein the preset brain tumor segmentation model is obtained by:
acquiring multi-modal nuclear magnetic resonance brain tumor images, and processing the multi-modal nuclear magnetic resonance brain tumor images to obtain brain tumor image sample data with the skull portion removed;
establishing an adaptive segmentation framework comprising a plurality of U-Net structures, wherein the adaptive segmentation framework comprises a 2D U-Net model, a 3D U-Net model, and two 3D U-Net ensemble models;
and performing five-fold cross-validation training on all U-Net structures in the adaptive segmentation framework according to the brain tumor image sample data to obtain the preset brain tumor segmentation model.
3. The deep learning-based brain tumor image segmentation method according to claim 2, wherein during the five-fold cross-validation training, the hyper-parameters of model training are determined by:
determining the video memory consumption range of the graphics processing unit of the model training device;
and within the video memory consumption range, adjusting the hyper-parameters of model training according to the shape of the training images, wherein the hyper-parameters include the step size, the image block size, and the number of pooling operations along each axis.
4. The deep learning-based brain tumor image segmentation method according to claim 2, wherein the processing the multi-modal nuclear magnetic resonance brain tumor images to obtain brain tumor image sample data with the skull portion removed comprises:
rigidly registering the multi-modal nuclear magnetic resonance brain tumor images to the T1c nuclear magnetic resonance image;
removing the skull portion from the T1c nuclear magnetic resonance image to obtain preprocessed nuclear magnetic resonance images;
receiving labeling instructions for different pixel regions in the preprocessed nuclear magnetic resonance images, and performing pixel labeling on the preprocessed nuclear magnetic resonance images according to the labeling instructions, wherein the pixel labels comprise non-tumor region, necrosis, edema, non-enhancing tumor, and enhancing tumor;
and taking all the pixel-labeled preprocessed nuclear magnetic resonance images as the brain tumor image sample data.
5. The deep learning-based brain tumor image segmentation method according to claim 2, wherein the performing five-fold cross-validation training on all U-Net structures in the adaptive segmentation framework according to the brain tumor image sample data to obtain the preset brain tumor segmentation model comprises:
determining an initial training set and an initial test set in the brain tumor image sample data according to a five-fold cross-validation method;
performing normalization processing on the initial training set to obtain a target training set;
performing the five-fold cross-validation training on all U-Net structures in the adaptive segmentation framework according to the target training set to obtain a cross-validation result;
determining the models meeting a preset condition according to the cross-validation result;
and performing model prediction with the models meeting the preset condition on the initial test set, and taking the model whose prediction result meets the preset result as the preset brain tumor segmentation model.
6. The deep learning-based brain tumor image segmentation method according to claim 5, wherein the performing normalization processing on the initial training set to obtain a target training set comprises:
performing gray-value normalization processing on all images in the initial training set to obtain a gray-value-normalized training set;
and performing image-space normalization processing on all images in the gray-value-normalized training set to obtain the target training set.
7. The deep learning-based brain tumor image segmentation method according to claim 5, wherein the performing model prediction with the models meeting the preset condition on the initial test set to determine the preset brain tumor segmentation model according to the prediction results comprises:
performing data enhancement on the test images in the initial test set to obtain a target test set;
performing model prediction on the test images in the target test set with the models meeting the preset condition, using a sliding-window method, so as to determine the segmentation performance of each model meeting the preset condition;
and taking the model whose segmentation performance meets the requirement as the preset brain tumor segmentation model.
8. A brain tumor image segmentation device based on deep learning, comprising:
the acquisition module is used for acquiring a multi-modal brain nuclear magnetic resonance image;
the preprocessing module is used for preprocessing the brain nuclear magnetic resonance image to obtain a target image with the skull portion removed;
and the input module is used for inputting the target image into a preset brain tumor segmentation model to obtain a brain tumor image segmentation result, wherein the preset brain tumor segmentation model is a deep learning model obtained by performing cross-validation training according to an adaptive segmentation framework and brain nuclear magnetic resonance images with the skull portion removed, and the adaptive segmentation framework comprises a plurality of different types of U-Net models and U-Net ensemble models.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the deep learning-based brain tumor image segmentation method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the deep learning-based brain tumor image segmentation method according to any one of claims 1 to 7.