WO2022127500A1 - Multiple neural networks-based MRI image segmentation method and apparatus, and device - Google Patents

Multiple neural networks-based MRI image segmentation method and apparatus, and device

Info

Publication number
WO2022127500A1
Authority
WO
WIPO (PCT)
Prior art keywords
segmentation
mri image
image block
model
coronal
Prior art date
Application number
PCT/CN2021/131340
Other languages
French (fr)
Chinese (zh)
Inventor
黄钢
聂生东
张小兵
Original Assignee
上海健康医学院
上海理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海健康医学院, 上海理工大学 filed Critical 上海健康医学院
Publication of WO2022127500A1 publication Critical patent/WO2022127500A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0012 — Biomedical image inspection (under G06T 7/00 Image analysis; G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06N 3/045 — Combinations of networks (under G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks; G06N 3/04 Architecture, e.g. interconnection topology)
    • G06N 3/084 — Backpropagation, e.g. using gradient descent (under G06N 3/08 Learning methods)
    • G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/11 — Region-based segmentation (under G06T 7/10 Segmentation; Edge detection)
    • G06T 2207/10088 — Magnetic resonance imaging [MRI] (under G06T 2207/10 Image acquisition modality; G06T 2207/10072 Tomographic images)
    • G06T 2207/20081 — Training; Learning (under G06T 2207/20 Special algorithmic details)
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20221 — Image fusion; Image merging (under G06T 2207/20212 Image combination)
    • G06T 2207/30016 — Brain (under G06T 2207/30 Subject of image; G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30096 — Tumor; Lesion

Definitions

  • The invention belongs to the technical field of medical image processing, and in particular relates to an MRI image segmentation method, apparatus and device based on multiple neural networks.
  • Glioma is one of the most common and most aggressive types of primary brain tumor, with a very short life expectancy and an extremely low survival rate.
  • Gliomas can appear anywhere in the brain and vary in size and shape.
  • Manual segmentation of gliomas is time-consuming, labor-intensive and subject to subjective interference; fully automated, reliable glioma segmentation technology is therefore an important research direction.
  • Magnetic Resonance Imaging (MRI) is a common non-invasive imaging method that provides high tissue contrast and is especially suitable for imaging tumor-like structures in the human brain. Multiple sequences can be measured by MRI.
  • These sequences include T1-weighted images (T1), contrast-enhanced T1-weighted images (T1C), T2-weighted images (T2) and fluid-attenuated inversion recovery images (Flair).
  • CNN: Convolutional Neural Network.
  • CNN-based segmentation methods automatically learn a set of increasingly complex feature representations directly from the data, without relying on manually extracted features.
  • CNN-based glioma image segmentation has the following problems: (1) when 2D images are used as input, the spatial neighborhood relationships of the pixels are not considered; (2) when 3D image blocks are used as input, the computational load of the neural network increases, greatly increasing the running time and the required storage space; (3) the features of each CNN layer are used only once, so important information is easily missed.
  • The purpose of the present invention is to overcome the above-mentioned defects in the prior art by providing an MRI image segmentation method, apparatus and device based on multiple neural networks with reduced running time and high precision.
  • An MRI image segmentation method based on multiple neural networks comprising the following steps:
  • The axial plane image block, coronal plane image block and sagittal plane image block are fed respectively to the trained axial segmentation model, coronal segmentation model and sagittal segmentation model, and the segmentation result of each model is obtained.
  • The axial segmentation model, coronal segmentation model and sagittal segmentation model are densely connected 2D-CNN neural network segmentation models with the same structure.
  • The original MRI image is a multimodal MRI image.
  • The preprocessing includes superposition fusion processing and normalization processing of image data of different modalities.
  • The modalities include Flair, T1, T1c and T2.
  • The original MRI image is an original glioma MRI image.
  • The superposition fusion processing algebraically superimposes and fuses the images of the Flair and T1 modalities.
  • The densely connected 2D-CNN neural network segmentation model includes a first convolutional layer, a first dense connection block for extracting low-level feature data, a second dense connection block for extracting high-level feature data, a second convolutional layer, a fully connected layer and a classification layer; the low-level feature data and the high-level feature data jointly serve as the input of the second convolutional layer.
  • The first dense connection block and the second dense connection block each contain several convolutional layers, and the outputs of all previous layers serve as the input of the next convolutional layer.
  • The axial segmentation model, coronal segmentation model and sagittal segmentation model are trained separately on the axial, coronal and sagittal plane training image blocks of the same training data set.
  • Each convolutional layer is followed by a batch normalization layer, and Dropout is applied to the fully connected layer.
  • Denoting the three segmentation results as r_a, r_c and r_s and the fusion result as r: if all three are equal, r takes that value; if exactly two are equal, r is equal to the shared value; if all three differ, r = 2 when two of them are greater than 1, and r = 0 otherwise.
  • the method further includes: obtaining a final segmentation result based on the fusion result and the set volume constraint;
  • The volume constraint is: remove regions with a volume smaller than a set number of pixels and fill them with zeros.
  • the present invention also provides an MRI image segmentation device based on multiple neural networks, including:
  • a receiving module, for acquiring an original MRI image, preprocessing the original MRI image, and generating a processed MRI image;
  • a segmentation module, for feeding the axial plane image block, coronal plane image block and sagittal plane image block respectively to the trained axial segmentation model, coronal segmentation model and sagittal segmentation model, and obtaining the segmentation result of each model;
  • a fusion module, for fusing the segmentation results of the models and obtaining the final segmentation result based on the fusion result and the set volume constraint;
  • the axial segmentation model, the coronal segmentation model and the sagittal segmentation model are densely connected 2D-CNN neural network segmentation models with the same structure.
  • the present invention also provides an electronic device, comprising:
  • one or more processors;
  • a memory; and
  • one or more programs stored in the memory, the one or more programs including instructions for performing the MRI image segmentation method based on multiple neural networks as described above.
  • the present invention has the following beneficial effects:
  • The present invention extracts image blocks from three views (the axial, coronal and sagittal planes) of the original MRI image, obtains the segmentation result of each view, and then fuses the segmentation results to obtain the final segmentation result.
  • Segmenting the image with multiple 2D-CNN models exploits both the planar neighborhood relationship of each pixel and the spatial characteristics of each pixel, ensuring high segmentation accuracy and precision while greatly reducing running time and storage space.
  • The multimodal MRI images of the present invention undergo preprocessing such as superposition fusion and normalization, which strengthens image features and improves image standardization, thereby improving segmentation accuracy.
  • For the glioma MRI images of the present invention, fusing the T1 and Flair modality data effectively enhances the edema region of the glioma, and the subsequent normalization operation reduces interference from uneven intensity in the MRI images.
  • The present invention adds dense connection blocks to the network structure and extracts different features in stages; the features are reused without missing important information, effectively improving segmentation accuracy.
  • The present invention also imposes a volume constraint on the fusion result, removing regions with a volume smaller than a set number of pixels and filling them with zeros, to improve segmentation accuracy.
  • The present invention can be applied to glioma segmentation of multimodal MRI images and ensures high segmentation accuracy and precision, so that different sub-regions of the glioma can be segmented accurately and finely, providing an important reference for doctors' follow-up work.
  • Fig. 1 is a flow chart of the three-dimensional segmentation method of the present invention.
  • Fig. 2 shows dense connection block 1 constructed in the present invention.
  • Fig. 3 is a structure diagram of the densely connected 2D-CNN constructed in the present invention.
  • Fig. 4 is a flow chart of the three-dimensional segmentation of glioma MRI images in the embodiment.
  • Fig. 5 compares three-dimensional segmentation results of brain glioma MRI images in the embodiment: (a) original image, (b) gold standard, (c) axial segmentation result, (d) coronal segmentation result, (e) sagittal segmentation result, (f) segmentation result after fusion processing, (g) final segmentation result after adding the volume constraint.
  • this embodiment provides an MRI image segmentation method based on multiple neural networks, including:
  • Step S1: acquire an original MRI image, where the original MRI image is a multimodal MRI image.
  • In this embodiment, the original MRI image is a multimodal glioma MRI image.
  • Step S2: preprocess the original MRI image to generate a processed MRI image sequence.
  • Preprocessing includes superposition fusion processing and normalization processing of image data of different modalities.
  • The three-dimensional glioma MRI image data to be segmented includes four modalities: Flair, T1, T1c and T2. The boundary of the edema inside the glioma is ambiguous, but the edema produces a high-brightness signal in the Flair modality, and the T1 modality displays the internal structural information of the glioma well. Therefore, to display the complete edema region of the glioma, the images of the Flair and T1 modalities are algebraically superimposed and fused to obtain I_enhance, and the four volumes Flair, I_enhance, T1c and T2 are then normalized.
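The exact fusion and normalization formulas are not spelled out on this page, so the following sketch assumes voxel-wise algebraic addition for the Flair + T1 fusion and z-score normalization over the nonzero (brain) voxels; the function name and these details are illustrative assumptions only.

```python
import numpy as np

def preprocess(flair, t1, t1c, t2):
    """Sketch of the patent's preprocessing: Flair + T1 superposition
    fusion into I_enhance, then normalization of the four volumes.
    Assumptions: fusion = voxel-wise addition, normalization = z-score
    over nonzero voxels (background stays zero)."""
    i_enhance = flair + t1  # algebraic superposition of Flair and T1

    def normalize(vol):
        mask = vol != 0
        mu, sigma = vol[mask].mean(), vol[mask].std()
        out = np.zeros_like(vol, dtype=np.float64)
        out[mask] = (vol[mask] - mu) / (sigma + 1e-8)
        return out

    # the four volumes used downstream: Flair, I_enhance, T1c, T2
    return [normalize(v) for v in (flair, i_enhance, t1c, t2)]
```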
  • Step S3: extract an axial plane image block, a coronal plane image block and a sagittal plane image block from the processed MRI image.
  • The size of each image block is 33×33.
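As an illustration of this step, the sketch below cuts the three orthogonal 33×33 patches centered on a given voxel from a 4-channel volume. The axis-to-view mapping and the zero-padding at the volume borders are assumptions of this sketch; the patent text does not specify them here.

```python
import numpy as np

def extract_view_patches(volume, center, size=33):
    """Extract axial, coronal and sagittal 2D patches around one voxel.

    `volume` has shape (C, X, Y, Z). Assumed convention: axial = fixed Z,
    coronal = fixed Y, sagittal = fixed X. Returns three (size, size, C)
    patches, matching the 33x33x4 network input."""
    x, y, z = center
    h = size // 2
    v = np.pad(volume, ((0, 0), (h, h), (h, h), (h, h)))  # zero-pad borders
    x, y, z = x + h, y + h, z + h                         # shift into padded frame
    axial    = v[:, x - h:x + h + 1, y - h:y + h + 1, z]
    coronal  = v[:, x - h:x + h + 1, y, z - h:z + h + 1]
    sagittal = v[:, x, y - h:y + h + 1, z - h:z + h + 1]
    # channels last: (33, 33, C)
    return [np.transpose(p, (1, 2, 0)) for p in (axial, coronal, sagittal)]
```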
  • Step S4: transform the glioma segmentation task into a multi-class classification problem based on the multimodal MRI images.
  • The axial plane, coronal plane and sagittal plane image blocks are fed respectively to the trained axial, coronal and sagittal segmentation models; from the node values of the classification layer, the segmentation result maps of the MRI image in the axial, coronal and sagittal planes are obtained, and the segmentation results r_a, r_c and r_s of each model are recorded.
  • the axial segmentation model, coronal segmentation model and sagittal segmentation model are densely connected 2D-CNN neural network segmentation models with the same structure.
  • The densely connected 2D-CNN neural network segmentation model includes a first convolutional layer, a first dense connection block for extracting low-level feature data, a second dense connection block for extracting high-level feature data, a second convolutional layer, a fully connected layer and a classification layer.
  • The input data first passes through the first convolutional layer for a single unsupervised feature extraction and is then sent to the first dense connection block. The output of the first dense connection block is sent to two channels simultaneously: channel 1 contains a pooling layer and yields the relatively low-level (shallow) feature data of the image, while channel 2 contains the second dense connection block and a pooling layer and yields the high-level feature data. The low-level and high-level feature data are then concatenated and sent to the second convolutional layer, so that no important information is missed; finally the data is sent to the fully connected layer, and the softmax function of the classification layer outputs the probability that the center point of the image block belongs to each class.
  • The model also adds a Batch Normalization (BN) layer after each convolutional layer and applies Dropout to the fully connected layer.
  • The first densely connected block and the second densely connected block each contain several convolutional layers.
  • The first dense connection block contains 8 convolutional layers.
  • Within a block, the outputs of all previous layers are concatenated as the input of the next convolutional layer.
  • Each layer of the first block has 24 convolution kernels of size 3×3.
  • The second dense connection block may contain 6 convolutional layers, each with 12 convolution kernels of size 3×3.
  • The input image size is 33×33×4, and the output size is 1×1×4.
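The text above fixes the layer counts and kernel numbers but not how the concatenations evolve. A minimal channel-bookkeeping sketch, assuming the DenseNet-style convention that each layer receives the concatenation of the block input and all previous layer outputs (and that the second block is fed by the first block's concatenated output), looks like:

```python
def dense_block_channels(c_in, n_layers, growth):
    """Channel bookkeeping for a dense block: every layer sees the
    concatenation of the block input and all previous layer outputs,
    and itself emits `growth` channels (the kernels per layer)."""
    inputs = []
    c = c_in
    for _ in range(n_layers):
        inputs.append(c)   # channels entering this conv layer
        c += growth        # its outputs join the running concatenation
    return inputs, c       # per-layer input widths, final concatenated width

# First block: 8 layers x 24 kernels on the 4-channel 33x33 input.
ins1, out1 = dense_block_channels(4, 8, 24)
# Second block: 6 layers x 12 kernels, fed (under this sketch's
# assumption) by the first block's concatenated output.
ins2, out2 = dense_block_channels(out1, 6, 12)
```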
  • the training process of the densely connected 2D-CNN neural network segmentation model is as follows:
  • The axial, coronal and sagittal slice images of the 3D MRI data in the test set are fed respectively to the three trained neural networks; each view yields a 3D MRI segmentation result, so each voxel obtains three segmentation results r_a, r_c and r_s, which are used to verify the effect of the model.
  • the loss function used in the training of the densely connected 2D-CNN neural network segmentation model is:
  • Step S6: fuse the segmentation results of the models.
  • The MRI image sequences are fused according to a majority voting strategy.
  • The segmentation results r_a, r_c and r_s obtained under the three views are fused according to the fusion strategy.
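The voxel-wise fusion rule (stated fully in the description: all three results equal → that label; exactly two equal → the shared label; all different → 2 if two results exceed 1, else 0) can be sketched over the BraTS-style labels {0, 1, 2, 4} as:

```python
import numpy as np

def fuse_votes(ra, rc, rs):
    """Voxel-wise majority-voting fusion of the three view results,
    following the rule stated in the patent description."""
    ra, rc, rs = (np.asarray(a) for a in (ra, rc, rs))
    r = np.zeros_like(ra)
    agree_a = (ra == rc) | (ra == rs)   # ra matches at least one other
    r[agree_a] = ra[agree_a]
    agree_cs = (rc == rs) & (ra != rc)  # only rc and rs agree
    r[agree_cs] = rc[agree_cs]
    # all three differ: r = 2 if two results are greater than 1, else 0
    all_diff = (ra != rc) & (ra != rs) & (rc != rs)
    two_big = ((ra > 1).astype(int) + (rc > 1).astype(int)
               + (rs > 1).astype(int)) == 2
    r[all_diff] = np.where(two_big[all_diff], 2, 0)
    return r
```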
  • Step S5: post-processing is performed.
  • A volume constraint is added: regions with a volume of fewer than 200 pixels are removed and filled with zeros to improve segmentation accuracy.
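A minimal sketch of this volume constraint, assuming 6-connectivity in 3D over the binary foreground (the connectivity, and whether components are taken per label or over all tumor labels together, are not stated here); plain BFS is used to avoid extra dependencies:

```python
import numpy as np
from collections import deque

def apply_volume_constraint(seg, min_voxels=200):
    """Zero out connected foreground regions smaller than `min_voxels`.
    Assumes 6-connected components over all nonzero labels jointly."""
    seg = np.asarray(seg).copy()
    visited = np.zeros(seg.shape, dtype=bool)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(seg > 0)):
        if visited[start]:
            continue
        queue, component = deque([start]), [start]
        visited[start] = True
        while queue:                      # BFS over one connected component
            cx, cy, cz = queue.popleft()
            for dx, dy, dz in offsets:
                n = (cx + dx, cy + dy, cz + dz)
                if all(0 <= n[i] < seg.shape[i] for i in range(3)) \
                        and seg[n] > 0 and not visited[n]:
                    visited[n] = True
                    queue.append(n)
                    component.append(n)
        if len(component) < min_voxels:   # remove small region, zero-fill
            for voxel in component:
                seg[voxel] = 0
    return seg
```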
  • The data used in the experiment is the training data set of the Brats2018 challenge database, which contains 210 groups of high-grade glioma (HGG) data and 75 groups of low-grade glioma (LGG) data. Each group includes MRI data of the four modalities Flair, T1, T1C and T2 as well as manually segmented gold-standard data, where label 4 represents the enhancing tumor, label 2 the edema, label 1 necrosis and non-enhancing tumor, and label 0 other tissue.
  • This experiment selected 80% of the patient data as the training set and 20% of the data as the test set.
  • The evaluation indicators for the test results include three items: the Dice coefficient (Dice Score), the positive predictive value (PPV) and the sensitivity, defined in terms of true positives (TP), false positives (FP), false negatives (FN) and true negatives (TN).
  • There are 57 groups (42 HGG, 15 LGG) in the test set split from the Brats18 challenge data.
  • The average segmentation evaluation indices are shown in Table 1, where Comp denotes the complete tumor region (labels 1+2+4), Core the tumor core region (labels 1+4), and Enh the enhancing tumor region (label 4).
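The Dice, PPV and sensitivity formulas are not reproduced on this page; the sketch below uses the standard confusion-matrix definitions (Dice = 2TP/(2TP+FP+FN), PPV = TP/(TP+FP), Sensitivity = TP/(TP+FN)) evaluated per region as defined above:

```python
import numpy as np

def region_metrics(pred, truth, labels):
    """Dice, PPV and sensitivity for one tumor region, taken as the union
    of the given labels (Comp = [1, 2, 4], Core = [1, 4], Enh = [4])."""
    p = np.isin(pred, labels)
    t = np.isin(truth, labels)
    tp = np.sum(p & t)
    fp = np.sum(p & ~t)
    fn = np.sum(~p & t)
    dice = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dice, ppv, sensitivity
```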
  • Table 1 shows that the present invention performs well on the validation set, and the steps of fusion processing and post-processing can effectively improve the segmentation accuracy of gliomas.
  • Figure 5 shows a set of data randomly tested during the experiment.
  • Table 1: Average segmentation evaluation indices of the present invention on the test set.
  • This embodiment provides an MRI image segmentation device based on multiple neural networks, including a receiving module, an image block providing module, a segmentation module and a fusion module. The receiving module acquires an original MRI image and preprocesses it to generate a processed MRI image; the image block providing module extracts the axial plane, coronal plane and sagittal plane image blocks from the processed MRI image; the segmentation module feeds the axial, coronal and sagittal plane image blocks respectively to the trained axial, coronal and sagittal segmentation models and obtains the segmentation result of each model; the fusion module fuses the segmentation results of the models and obtains the final segmentation result based on the fusion result and the set volume constraint. In the segmentation module, the axial, coronal and sagittal segmentation models are densely connected 2D-CNN neural network segmentation models with the same structure. The rest is the same as in Embodiment 1.
  • This embodiment provides an electronic device including one or more processors, a memory, and one or more programs stored in the memory, wherein the one or more programs include instructions for performing the MRI image segmentation method based on multiple neural networks described in Embodiment 1.


Abstract

A multiple neural networks-based MRI image segmentation method and apparatus, and a device. The method comprises the following steps: acquiring an original MRI image, preprocessing the original MRI image, and generating a processed MRI image; extracting an axial plane image block, a coronal plane image block and a sagittal plane image block from the processed MRI image; feeding the axial plane image block, the coronal plane image block and the sagittal plane image block respectively to a trained axial segmentation model, coronal segmentation model and sagittal segmentation model, and obtaining the segmentation results of the models; and fusing the segmentation results of the models, and obtaining a final segmentation result on the basis of the fusion result and a set volume constraint, wherein the axial segmentation model, the coronal segmentation model and the sagittal segmentation model are densely connected 2D-CNN neural network segmentation models with the same structure. Compared with the prior art, the method has the advantages of short running time and high accuracy.

Description

MRI image segmentation method, apparatus and device based on multiple neural networks
Technical Field
The invention belongs to the technical field of medical image processing, and in particular relates to an MRI image segmentation method, apparatus and device based on multiple neural networks.
Background Art
Glioma is one of the most common and most aggressive types of primary brain tumor, with a very short life expectancy and an extremely low survival rate. Accurate and reliable glioma segmentation is an important prerequisite for glioma diagnosis, treatment planning and evaluation of treatment effect, but gliomas can appear anywhere in the brain and vary in size and shape. In addition, manual segmentation of gliomas is time-consuming, labor-intensive and subject to subjective interference; fully automated, reliable glioma segmentation technology is therefore an important research direction. Magnetic Resonance Imaging (MRI) is a common non-invasive imaging method that provides high tissue contrast and is especially suitable for imaging tumor-like structures in the human brain. Multiple sequences can be measured by MRI, such as T1-weighted images (T1), contrast-enhanced T1-weighted images (T1C), T2-weighted images (T2) and fluid-attenuated inversion recovery images (Flair); the four image sequences are used in combination to jointly diagnose and segment gliomas.
Research on glioma segmentation in magnetic resonance images mainly analyzes the diseased tissue from the perspectives of image processing, pattern recognition and artificial intelligence. Current glioma segmentation methods fall mainly into generative-model-based methods and discriminative-model-based methods. In recent years, as deep learning has achieved great success on general images, it has also been widely studied for glioma segmentation; in particular, the deep-learning-based Convolutional Neural Network (CNN) has achieved some success in glioma segmentation. A CNN is assembled from an input layer, convolutional layers, nonlinear layers, pooling layers and fully connected layers; a supervised learning algorithm based on training samples and labels learns a classification model that performs the glioma image segmentation task. CNN-based segmentation methods automatically learn a set of increasingly complex feature representations directly from the data, without relying on manually extracted features. Analysis of existing CNN-based segmentation methods reveals the following problems in CNN-based glioma image segmentation: (1) when 2D images are used as input, the spatial neighborhood relationships of the pixels are not considered; (2) when 3D image blocks are used as input, the computational load of the neural network increases, greatly increasing the running time and required storage space; (3) the features of each CNN layer are used only once, so important information is easily missed.
Summary of the Invention
The purpose of the present invention is to overcome the above-mentioned defects in the prior art by providing an MRI image segmentation method, apparatus and device based on multiple neural networks with reduced running time and high precision.
The object of the present invention can be achieved through the following technical solutions:
An MRI image segmentation method based on multiple neural networks, comprising the following steps:
acquiring an original MRI image, preprocessing the original MRI image, and generating a processed MRI image;
extracting an axial plane image block, a coronal plane image block and a sagittal plane image block from the processed MRI image;
feeding the axial plane image block, the coronal plane image block and the sagittal plane image block respectively to the trained axial segmentation model, coronal segmentation model and sagittal segmentation model, and obtaining the segmentation result of each model;
fusing the segmentation results of the models, and obtaining the final segmentation result based on the fusion result;
wherein the axial segmentation model, coronal segmentation model and sagittal segmentation model are densely connected 2D-CNN neural network segmentation models with the same structure.
Further, the original MRI image is a multimodal MRI image, and the preprocessing includes superposition fusion processing and normalization processing of image data of different modalities.
Further, the modalities include Flair, T1, T1c and T2.
Further, the original MRI image is an original glioma MRI image, and the superposition fusion processing algebraically superimposes and fuses the images of the Flair and T1 modalities.
Further, the densely connected 2D-CNN neural network segmentation model includes a first convolutional layer, a first dense connection block for extracting low-level feature data, a second dense connection block for extracting high-level feature data, a second convolutional layer, a fully connected layer and a classification layer; the low-level feature data and the high-level feature data jointly serve as the input of the second convolutional layer.
Further, the first dense connection block and the second dense connection block each contain several convolutional layers, and the outputs of all previous layers serve as the input of the next convolutional layer.
Further, the axial segmentation model, coronal segmentation model and sagittal segmentation model are trained separately on the axial, coronal and sagittal plane training image blocks of the same training data set.
Further, in the densely connected 2D-CNN neural network segmentation model, each convolutional layer is followed by a batch normalization layer, and Dropout is applied to the fully connected layer.
Further, fusing the segmentation results of the models is specifically as follows:
denote the segmentation results of the models as r_a, r_c and r_s, and the fusion result as r;
if r_a = r_c = r_s, then r = r_a;
if any two of r_a, r_c and r_s are equal, then r is equal to that shared value;
if r_a, r_c and r_s are all different, then r = 2 if two of them are greater than 1, and r = 0 otherwise.
Further, the method also includes: obtaining the final segmentation result based on the fusion result and a set volume constraint;
the volume constraint is: remove regions with a volume smaller than a set number of pixels and fill them with zeros.
The present invention also provides an MRI image segmentation apparatus based on multiple neural networks, comprising:

a receiving module, configured to acquire an original MRI image, preprocess the original MRI image, and generate a processed MRI image;

an image block providing module, configured to extract an axial-plane image block, a coronal-plane image block and a sagittal-plane image block from the processed MRI image;

a segmentation module, configured to feed the axial-plane image block, the coronal-plane image block and the sagittal-plane image block respectively into the trained axial segmentation model, coronal segmentation model and sagittal segmentation model to obtain the segmentation result of each model;

a fusion module, configured to fuse the segmentation results of the respective models and obtain a final segmentation result based on the fusion result and a set volume constraint;

wherein, in the segmentation module, the axial segmentation model, the coronal segmentation model and the sagittal segmentation model are densely connected 2D-CNN neural network segmentation models of identical structure.

The present invention also provides an electronic device, comprising:

one or more processors;

a memory; and

one or more programs stored in the memory, the one or more programs comprising instructions for performing the MRI image segmentation method based on multiple neural networks described above.
Compared with the prior art, the present invention has the following beneficial effects:

1. The present invention extracts image blocks from the axial, coronal and sagittal views of the original MRI image, obtains the segmentation result of each view, and fuses these results into the final segmentation result. Segmenting the image with multiple 2D-CNN models exploits both the in-plane neighborhood relationships of each pixel and its spatial characteristics across views, ensuring high segmentation accuracy and precision while greatly reducing running time and storage space.

2. The present invention applies preprocessing such as superposition fusion and normalization to multimodal MRI images, strengthening image features and improving image uniformity, thereby improving segmentation precision.

3. For glioma MRI images, the present invention fuses glioma data of the T1 and Flair modalities, effectively enhancing the edema part of the glioma, and then reduces the interference caused by uneven illumination in MRI images through a normalization operation.

4. The present invention adds dense connection blocks to the network structure and extracts different features in stages; features are reused so that no important information is lost, effectively improving segmentation precision.

5. After fusing the multi-view segmentation results, the present invention further applies a volume constraint to the fusion result, removing regions whose volume is smaller than a set number of pixels and filling them with zeros, improving segmentation precision.

6. The present invention can be applied to glioma segmentation in multimodal MRI images, ensuring high segmentation accuracy and precision so that the different glioma sub-regions are finely and accurately segmented, providing an important reference for physicians' follow-up work.
Description of the drawings

Fig. 1 is a flow chart of the three-dimensional segmentation method of the present invention;

Fig. 2 shows dense connection block 1 constructed in the present invention;

Fig. 3 is a structural diagram of the densely connected 2D-CNN constructed in the present invention;

Fig. 4 is a flow chart of the three-dimensional segmentation of glioma MRI images in the embodiment;

Fig. 5 compares three-dimensional segmentation results of a glioma MRI image in the embodiment: (a) original image, (b) gold standard, (c) axial segmentation result, (d) coronal segmentation result, (e) sagittal segmentation result, (f) segmentation result after fusion processing, (g) final segmentation result after adding the volume constraint.
Detailed description of the embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. The embodiments are implemented on the basis of the technical solution of the present invention and give a detailed implementation and a specific operating process, but the protection scope of the present invention is not limited to the following embodiments.
Embodiment 1

As shown in Fig. 1, this embodiment provides an MRI image segmentation method based on multiple neural networks, comprising:

Step S1: acquire an original MRI image, which is a multimodal MRI image. In this embodiment, the original MRI image is a multimodal glioma MRI image.

Step S2: preprocess the original MRI image to generate a processed MRI image sequence.

The preprocessing includes superposition fusion and normalization of image data of different modalities. This embodiment segments three-dimensional glioma MRI image data comprising four modalities: Flair, T1, T1c and T2. The internal edema boundary of a glioma is blurred; the edema appears as a high-intensity signal in the Flair modality, while the T1 modality shows the internal structure of the glioma well. Therefore, to display the complete edema part of the glioma, the Flair and T1 images are algebraically superimposed and fused to obtain I_enhance, and the four data sets Flair, I_enhance, T1c and T2 are then normalized.
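A minimal sketch of this preprocessing step with NumPy. The patent does not fix the exact algebraic form of the superposition or of the normalization, so a plain voxel-wise sum and per-modality z-scoring are assumed here; the function and argument names are illustrative only.

```python
import numpy as np

def preprocess(flair, t1, t1c, t2):
    """Fuse Flair and T1 by algebraic superposition, then normalize
    each modality channel. Assumptions: superposition = voxel-wise
    sum, normalization = z-scoring (neither is fixed by the source)."""
    i_enhance = flair + t1  # algebraic superposition of Flair and T1
    channels = []
    for vol in (flair, i_enhance, t1c, t2):
        v = vol.astype(np.float64)
        std = v.std()
        channels.append((v - v.mean()) / std if std > 0 else v - v.mean())
    # stack the four normalized modalities into one multi-channel volume
    return np.stack(channels, axis=-1)  # shape (D, H, W, 4)
```

The stacked four-channel volume then serves as the source from which the 33*33*4 image blocks of step S3 are cut.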
Step S3: extract axial-plane, coronal-plane and sagittal-plane image blocks from the processed MRI image. In this embodiment, each image block is 33*33.
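The three orthogonal patches around a voxel can be cut as follows. This sketch assumes the volume is stored as (depth, height, width, channels) and that border voxels are handled by zero padding; the source does not state the border handling, so that choice is an assumption.

```python
import numpy as np

def extract_patches(volume, point, size=33):
    """Extract the axial, coronal and sagittal patches centred on one
    voxel of a (D, H, W, C) multimodal volume, zero-padding borders."""
    half = size // 2
    # pad the three spatial axes only, leaving the channel axis intact
    padded = np.pad(volume, ((half, half),) * 3 + ((0, 0),))
    z, y, x = (p + half for p in point)  # shift indices into padded space
    axial    = padded[z, y - half:y + half + 1, x - half:x + half + 1, :]
    coronal  = padded[z - half:z + half + 1, y, x - half:x + half + 1, :]
    sagittal = padded[z - half:z + half + 1, y - half:y + half + 1, x, :]
    return axial, coronal, sagittal
```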
Step S4: transform the glioma segmentation task into a multi-classification problem based on multimodal MRI images. The axial-plane, coronal-plane and sagittal-plane image blocks are fed respectively into the trained axial, coronal and sagittal segmentation models to obtain the node values of the classification layer; from these node values, the segmentation result maps of the axial, coronal and sagittal planes of the MRI image are obtained, recorded as the segmentation results r_a, r_c and r_s of the respective models.

The axial, coronal and sagittal segmentation models are densely connected 2D-CNN neural network segmentation models of identical structure. The model comprises a first convolutional layer, a first dense connection block for extracting low-level feature data, a second dense connection block for extracting high-level feature data, a second convolutional layer, a fully connected layer and a classification layer. The input data first passes through the first convolutional layer, which extracts features once in an unsupervised manner, and is then sent to the first dense connection block. The output of the first dense connection block is sent simultaneously into two channels: channel one, a pooling layer, yields relatively shallow feature data of the image; channel two, comprising the second dense connection block and a pooling layer, yields high-level feature data. The low-level and high-level feature data are then concatenated and sent to the second convolutional layer, so that no important information is lost. Finally the data is sent to the fully connected layer, and the softmax function of the classification layer outputs the probability that the center point of the image block belongs to each class. In addition, to speed up network convergence and avoid overfitting, a batch normalization (BN) layer is added after each convolutional layer, and dropout is applied in the fully connected layer.

The first and second dense connection blocks each comprise several convolutional layers. As shown in Fig. 2, the first dense connection block contains 8 convolutional layers; the outputs of all preceding layers serve as the input of the next convolutional layer, and each layer has 24 convolution kernels of size 3*3. The second dense connection block can be designed in the same way and may contain 6 convolutional layers, each with 12 convolution kernels of size 3*3.
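The dense connectivity pattern, each layer receiving the concatenation of the block input and all earlier layer outputs, can be sketched numerically. This is a toy model: a random 1x1 projection with ReLU stands in for each 3*3 convolution with batch normalization, so only the wiring and the channel bookkeeping are shown, not a real trained layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, num_layers, growth):
    """Toy dense connection block: every layer sees the concatenation
    of the block input and all earlier layer outputs. A 1x1 random
    projection + ReLU stands in for the real 3*3 conv + BN layer."""
    features = [x]  # x has shape (H, W, C)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)          # dense reuse
        w = rng.standard_normal((inp.shape[-1], growth))  # 1x1 "conv"
        features.append(np.maximum(inp @ w, 0.0))         # ReLU
    return np.concatenate(features, axis=-1)
```

With the first block's configuration (8 layers, 24 kernels each), a 4-channel input grows to 4 + 8*24 = 196 output channels, which is the channel count the following pooling layers and the second block then consume.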
As shown in Fig. 3, in the densely connected 2D-CNN neural network segmentation model constructed in this embodiment, the input image size is 33*33*4 and the output size is 1*1*4.

The densely connected 2D-CNN neural network segmentation model is trained as follows:

1.1. Acquire three-dimensional glioma MRI image data of the four modalities Flair, T1, T1c and T2 together with the corresponding manually segmented label data, where label 4 denotes the enhancing tumor part, label 2 the edema part, label 1 the necrotic and non-enhancing part, and label 0 other tissue. Fuse the T1 and Flair glioma data to obtain I_enhance, then divide the four data sets Flair, I_enhance, T1c and T2 into a training set and a test set by the hold-out method;

1.2. Slice the three-dimensional MRI image data of the training set along the axial, coronal and sagittal planes to obtain the corresponding slice images and their label images, then apply grayscale normalization to the slice images;

1.3. From the normalized axial, coronal and sagittal image slices, extract 33*33*4 image blocks as the training image blocks of the axial, coronal and sagittal segmentation models, respectively;

1.4. Extract glioma features by an unsupervised step-by-step, layer-by-layer training method, and minimize the loss function in a supervised manner using the backpropagation algorithm and stochastic gradient descent so as to optimize the network parameters, finally obtaining three optimized densely connected 2D-CNN neural network segmentation models;

1.5. Feed the axial, coronal and sagittal slice images of the three-dimensional MRI image data of the test set into the three trained neural networks; each view yields a three-dimensional MRI image segmentation result, so that each voxel obtains three segmentation results r_a, r_c and r_s, which are used to verify the model.
The loss function used in training the densely connected 2D-CNN neural network segmentation model combines a cross-entropy loss term with a newly added loss term to which a control factor ε is applied.
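A sketch of this composite loss. The exact form of the added term is not reproduced here, so it is represented by a generic precomputed scalar `aux_term`; `total_loss` is an illustrative name and ε = 0.1 is an arbitrary illustrative default, not a value from the patent.

```python
import numpy as np

def total_loss(probs, onehot, aux_term, eps=0.1):
    """Cross-entropy over the four classes plus an added loss term
    scaled by a control factor eps. `aux_term` stands in for the
    patent's unspecified extra term; eps is illustrative."""
    ce = -np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=-1))
    return ce + eps * aux_term
```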
Step S5: fuse the segmentation results of the respective models. In this embodiment, the MRI image sequences are fused according to a majority voting strategy.

In this embodiment, the segmentation results r_a, r_c and r_s obtained under the three views are fused according to the following rules: (1) if r_a = r_c = r_s, then r = r_a, where r is the fusion result; (2) if any two of r_a, r_c and r_s are equal, then r equals that shared value; (3) if r_a, r_c and r_s are all different, then r = 2 if two of them are greater than 1, and r = 0 otherwise.
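The three fusion rules can be sketched voxel-wise with NumPy as follows; `fuse` is an illustrative name, and the label values 0, 1, 2 and 4 follow the embodiment.

```python
import numpy as np

def fuse(ra, rc, rs):
    """Voxel-wise fusion of the axial, coronal and sagittal label maps
    following the three majority-vote rules of the embodiment."""
    ra, rc, rs = (np.asarray(v) for v in (ra, rc, rs))
    # rules (1) and (2): take any value shared by at least two views
    r = np.where(ra == rc, ra,
        np.where(ra == rs, ra,
        np.where(rc == rs, rc, -1)))       # -1 marks total disagreement
    # rule (3): all three disagree -> 2 if two results exceed 1, else 0
    disagree = r == -1
    two_large = ((ra > 1).astype(int) + (rc > 1).astype(int)
                 + (rs > 1).astype(int)) >= 2
    r[disagree] = np.where(two_large[disagree], 2, 0)
    return r
```

For example, a voxel labeled 4, 2 and 1 by the three views disagrees everywhere, but two labels exceed 1, so it is fused to the edema label 2.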
Step S6: perform post-processing.

To reduce mis-segmentation and over-segmentation, a volume constraint is added after the fusion processing: regions whose volume is smaller than 200 pixels are removed and filled with zeros, improving segmentation precision.
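This volume constraint can be sketched as a connected-component filter. The patent does not specify the connectivity, so 6-connectivity on the non-zero mask is assumed here; a breadth-first search in plain NumPy/stdlib stands in for a library labeling routine.

```python
import numpy as np
from collections import deque

def apply_volume_constraint(seg, min_voxels=200):
    """Remove connected non-zero regions smaller than min_voxels and
    fill them with zeros (6-connectivity is an assumption)."""
    seg = seg.copy()
    mask = seg > 0
    visited = np.zeros_like(mask)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if visited[start]:
            continue
        queue, comp = deque([start]), [start]   # BFS over one component
        visited[start] = True
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < seg.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
                    comp.append(n)
        if len(comp) < min_voxels:              # too small: zero it out
            for v in comp:
                seg[v] = 0
    return seg
```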
Experiment:

To illustrate the effectiveness and adaptability of the above method, the experiment uses the training data set of the BraTS 2018 challenge database, which contains 210 groups of high-grade glioma (HGG) data and 75 groups of low-grade glioma (LGG) data. Each group contains MRI data of the four modalities Flair, T1, T1c and T2 together with manually segmented gold-standard data, where label 4 denotes the enhancing tumor part, label 2 the edema part, label 1 the necrotic and non-enhancing part, and label 0 other tissue. In this experiment, 80% of the patient data was selected as the training set and 20% as the test set. The evaluation metrics of the test results comprise three items: the Dice score, the positive predictive value (PPV) and the sensitivity, defined as follows:
Dice = 2TP / (2TP + FP + FN)

PPV = TP / (TP + FP)

Sensitivity = TP / (TP + FN)
where TP denotes true positives, FP false positives, FN false negatives and TN true negatives. Table 1 shows the average segmentation evaluation metrics over the 57 groups (42 HGG, 15 LGG) of the test set partitioned from the BraTS 2018 challenge data, where Comp denotes the complete tumor region (labels 1+2+4), Core the tumor core region (labels 1+4) and Enh the enhancing tumor region (label 4). Table 1 shows that the present invention performs well on the validation set and that the fusion processing and post-processing steps effectively improve the segmentation precision of gliomas. Fig. 5 shows a group of data randomly tested during the experiment.
Table 1. Average segmentation evaluation metrics of the present invention on the test set (Dice score, PPV and sensitivity for the Comp, Core and Enh regions).
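The three evaluation metrics can be computed from binary region masks built as label unions, e.g. (1, 2, 4) for the complete tumor region; `region_metrics` is an illustrative name.

```python
import numpy as np

def region_metrics(pred, gold, labels):
    """Dice, PPV and sensitivity for a tumor region defined as the
    union of the given labels over prediction and gold standard."""
    p = np.isin(pred, labels)
    g = np.isin(gold, labels)
    tp = np.sum(p & g)        # true positives
    fp = np.sum(p & ~g)       # false positives
    fn = np.sum(~p & g)       # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dice, ppv, sensitivity
```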
Embodiment 2

This embodiment provides an MRI image segmentation apparatus based on multiple neural networks, comprising a receiving module, an image block providing module, a segmentation module and a fusion module. The receiving module acquires an original MRI image, preprocesses it and generates a processed MRI image; the image block providing module extracts axial-plane, coronal-plane and sagittal-plane image blocks from the processed MRI image; the segmentation module feeds the axial-plane, coronal-plane and sagittal-plane image blocks respectively into the trained axial, coronal and sagittal segmentation models to obtain the segmentation result of each model; the fusion module fuses the segmentation results of the respective models and obtains a final segmentation result based on the fusion result and a set volume constraint. In the segmentation module, the axial, coronal and sagittal segmentation models are densely connected 2D-CNN neural network segmentation models of identical structure. The rest is the same as Embodiment 1.
Embodiment 3

This embodiment provides an electronic device comprising one or more processors, a memory, and one or more programs stored in the memory, the one or more programs comprising instructions for performing the MRI image segmentation method based on multiple neural networks described in Embodiment 1.
The preferred embodiments of the present invention are described in detail above. It should be understood that those skilled in the art can make many modifications and changes according to the concept of the present invention without creative effort. Therefore, any technical solution that can be obtained by those skilled in the art through logical analysis, reasoning or limited experiments on the basis of the prior art according to the concept of the present invention shall fall within the protection scope determined by the claims.

Claims (10)

  1. An MRI image segmentation method based on multiple neural networks, characterized by comprising the following steps:

    acquiring an original MRI image, preprocessing the original MRI image, and generating a processed MRI image;

    extracting an axial-plane image block, a coronal-plane image block and a sagittal-plane image block from the processed MRI image;

    feeding the axial-plane image block, the coronal-plane image block and the sagittal-plane image block respectively into a trained axial segmentation model, coronal segmentation model and sagittal segmentation model to obtain the segmentation result of each model;

    fusing the segmentation results of the respective models and obtaining a final segmentation result based on the fusion result;

    wherein the axial segmentation model, the coronal segmentation model and the sagittal segmentation model are densely connected 2D-CNN neural network segmentation models of identical structure.
  2. The MRI image segmentation method based on multiple neural networks according to claim 1, characterized in that the original MRI image is a multimodal MRI image, and the preprocessing comprises superposition fusion and normalization of image data of different modalities.

  3. The MRI image segmentation method based on multiple neural networks according to claim 2, characterized in that the modalities comprise Flair, T1, T1c and T2.

  4. The MRI image segmentation method based on multiple neural networks according to claim 3, characterized in that the original MRI image is an original glioma MRI image, and the superposition fusion is an algebraic superposition fusion of the images of the Flair and T1 modalities.

  5. The MRI image segmentation method based on multiple neural networks according to claim 1, characterized in that the densely connected 2D-CNN neural network segmentation model comprises a first convolutional layer, a first dense connection block for extracting low-level feature data, a second dense connection block for extracting high-level feature data, a second convolutional layer, a fully connected layer and a classification layer, the low-level feature data and the high-level feature data jointly serving as the input of the second convolutional layer.

  6. The MRI image segmentation method based on multiple neural networks according to claim 5, characterized in that the first dense connection block and the second dense connection block each comprise several convolutional layers, the outputs of all preceding layers serving as the input of the next convolutional layer.
  7. The MRI image segmentation method based on multiple neural networks according to claim 1, characterized in that fusing the segmentation results of the respective models is specifically:

    denoting the segmentation results of the respective models as r_a, r_c and r_s, and the fusion result as r;

    if r_a = r_c = r_s, then r = r_a;

    if any two of r_a, r_c and r_s are equal, then r equals that shared value;

    if r_a, r_c and r_s are all different, then r = 2 if two of them are greater than 1, and r = 0 otherwise.

  8. The MRI image segmentation method based on multiple neural networks according to claim 1, characterized in that the method further comprises: obtaining a final segmentation result based on the fusion result and a set volume constraint;

    the volume constraint being: regions whose volume is smaller than a set number of pixels are removed and filled with zeros.
  9. An MRI image segmentation apparatus based on multiple neural networks, characterized by comprising:

    a receiving module, configured to acquire an original MRI image, preprocess the original MRI image, and generate a processed MRI image;

    an image block providing module, configured to extract an axial-plane image block, a coronal-plane image block and a sagittal-plane image block from the processed MRI image;

    a segmentation module, configured to feed the axial-plane image block, the coronal-plane image block and the sagittal-plane image block respectively into a trained axial segmentation model, coronal segmentation model and sagittal segmentation model to obtain the segmentation result of each model;

    a fusion module, configured to fuse the segmentation results of the respective models and obtain a final segmentation result based on the fusion result and a set volume constraint;

    wherein, in the segmentation module, the axial segmentation model, the coronal segmentation model and the sagittal segmentation model are densely connected 2D-CNN neural network segmentation models of identical structure.
  10. An electronic device, characterized by comprising:

    one or more processors;

    a memory; and

    one or more programs stored in the memory, the one or more programs comprising instructions for performing the MRI image segmentation method based on multiple neural networks according to any one of claims 1 to 8.
PCT/CN2021/131340 2020-12-14 2021-11-18 Multiple neural networks-based mri image segmentation method and apparatus, and device WO2022127500A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011465416.2 2020-12-14
CN202011465416.2A CN112634211A (en) 2020-12-14 2020-12-14 MRI (magnetic resonance imaging) image segmentation method, device and equipment based on multiple neural networks

Publications (1)

Publication Number Publication Date
WO2022127500A1 true WO2022127500A1 (en) 2022-06-23

Family

ID=75312512

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131340 WO2022127500A1 (en) 2020-12-14 2021-11-18 Multiple neural networks-based mri image segmentation method and apparatus, and device

Country Status (2)

Country Link
CN (1) CN112634211A (en)
WO (1) WO2022127500A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168276A (en) * 2023-02-27 2023-05-26 脉得智能科技(无锡)有限公司 Multi-modal feature fusion-based breast nodule classification method, device and storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634211A (en) * 2020-12-14 2021-04-09 上海健康医学院 MRI (magnetic resonance imaging) image segmentation method, device and equipment based on multiple neural networks
CN113192031B (en) * 2021-04-29 2023-05-30 上海联影医疗科技股份有限公司 Vascular analysis method, vascular analysis device, vascular analysis computer device, and vascular analysis storage medium
CN113487591A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 CT-based whole spine segmentation method and system
CN114419067A (en) * 2022-01-19 2022-04-29 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
WO2017210690A1 (en) * 2016-06-03 2017-12-07 Lu Le Spatial aggregation of holistically-nested convolutional neural networks for automated organ localization and segmentation in 3d medical scans
CN109308728A (en) * 2018-10-25 2019-02-05 上海联影医疗科技有限公司 PET-Positron emission computed tomography scan image processing method and processing device
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN110706214A (en) * 2019-09-23 2020-01-17 东南大学 Three-dimensional U-Net brain tumor segmentation method fusing condition randomness and residual error
CN111210444A (en) * 2020-01-03 2020-05-29 中国科学技术大学 Method, apparatus and medium for segmenting multi-modal magnetic resonance image
CN112634211A (en) * 2020-12-14 2021-04-09 上海健康医学院 MRI (magnetic resonance imaging) image segmentation method, device and equipment based on multiple neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767378B (en) * 2017-11-13 2020-08-04 浙江中医药大学 GBM multi-mode magnetic resonance image segmentation method based on deep neural network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017210690A1 (en) * 2016-06-03 2017-12-07 Lu Le Spatial aggregation of holistically-nested convolutional neural networks for automated organ localization and segmentation in 3d medical scans
CN106296699A (en) * 2016-08-16 2017-01-04 电子科技大学 Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
CN109308728A (en) * 2018-10-25 2019-02-05 上海联影医疗科技有限公司 PET-Positron emission computed tomography scan image processing method and processing device
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN110706214A (en) * 2019-09-23 2020-01-17 东南大学 Three-dimensional U-Net brain tumor segmentation method fusing condition randomness and residual error
CN111210444A (en) * 2020-01-03 2020-05-29 中国科学技术大学 Method, apparatus and medium for segmenting multi-modal magnetic resonance image
CN112634211A (en) * 2020-12-14 2021-04-09 上海健康医学院 MRI (magnetic resonance imaging) image segmentation method, device and equipment based on multiple neural networks

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116168276A (en) * 2023-02-27 2023-05-26 脉得智能科技(无锡)有限公司 Multi-modal feature fusion-based breast nodule classification method, device and storage medium
CN116168276B (en) * 2023-02-27 2023-10-31 脉得智能科技(无锡)有限公司 Multi-modal feature fusion-based breast nodule classification method, device and storage medium

Also Published As

Publication number Publication date
CN112634211A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2022127500A1 (en) Multiple neural networks-based mri image segmentation method and apparatus, and device
Gour et al. Residual learning based CNN for breast cancer histopathological image classification
Hosny et al. Skin cancer classification using deep learning and transfer learning
Xie et al. Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans
Zhou et al. Lung cancer cell identification based on artificial neural network ensembles
CN106408001B (en) Area-of-interest rapid detection method based on depth core Hash
CN112446891B (en) Medical image segmentation method based on U-Net network brain glioma
El-Shafai et al. Efficient Deep-Learning-Based Autoencoder Denoising Approach for Medical Image Diagnosis.
CN107480702B (en) Feature selection and feature fusion method for HCC pathological image recognition
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
An et al. Medical image segmentation algorithm based on multilayer boundary perception-self attention deep learning model
Niyaz et al. Advances in deep learning techniques for medical image analysis
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
Sangeetha et al. Diagnosis of Pneumonia using Image Recognition Techniques
CN111210398A (en) White blood cell recognition system based on multi-scale pooling
CN117218453A (en) Incomplete multi-mode medical image learning method
CN116912253A (en) Lung cancer pathological image classification method based on multi-scale mixed neural network
CN116797817A (en) Autism disease prediction technology based on self-supervision graph convolution model
Nawaz et al. MSeg-Net: a melanoma mole segmentation network using CornerNet and fuzzy K-means clustering
Lin et al. Hybrid CNN-SVM for Alzheimer's disease classification from structural MRI and the Alzheimer's Disease Neuroimaging Initiative (ADNI)
Khan et al. AttResDU-Net: Medical image segmentation using attention-based residual double U-Net
Vinta et al. Segmentation and Classification of Interstitial Lung Diseases Based on Hybrid Deep Learning Network Model
Mesbahi et al. Automatic segmentation of medical images using convolutional neural networks
Tan et al. SwinUNeLCsT: Global–local spatial representation learning with hybrid CNN–transformer for efficient tuberculosis lung cavity weakly supervised semantic segmentation
Indraswari et al. Brain tumor detection on magnetic resonance imaging (MRI) images using convolutional neural network (CNN)

Legal Events

Code | Title | Description
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21905429; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: PCT application non-entry in European phase | Ref document number: 21905429; Country of ref document: EP; Kind code of ref document: A1