CN110782427A - Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution - Google Patents
- Publication number
- CN110782427A CN110782427A CN201910761883.0A CN201910761883A CN110782427A CN 110782427 A CN110782427 A CN 110782427A CN 201910761883 A CN201910761883 A CN 201910761883A CN 110782427 A CN110782427 A CN 110782427A
- Authority
- CN
- China
- Prior art keywords
- brain tumor
- magnetic resonance
- convolution
- separable
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention belongs to the field of computer-aided medicine and specifically relates to an automatic segmentation method for magnetic resonance (MR) brain tumor images based on separable dilated (atrous) convolution. The method comprises the following steps: first, the MR brain tumor image data set is divided into a training set and a test set, and the MR brain tumor images in the training set are preprocessed; second, a deep segmentation network architecture for MR brain tumor images based on separable dilated convolution is constructed; third, the constructed separable dilated convolution brain tumor segmentation network is trained end to end on the preprocessed training images to obtain an optimized brain tumor segmentation network model; finally, the trained model is used to segment the MR brain tumor images in the test set. By strengthening the extraction of discriminative deep features and the integration of spatial multi-scale information from MR brain tumor images, the method achieves better MR brain tumor segmentation results.
Description
Technical Field
The invention relates to medical image segmentation methods, and in particular to an automatic magnetic resonance (MR) brain tumor segmentation method based on separable dilated convolution.
Background Art
Glioma is one of the most common and most aggressive primary brain tumors and seriously endangers human health. Brain tumors are treated primarily by surgery, supplemented by comprehensive measures such as radiotherapy and chemotherapy. Thanks to its non-invasive, harmless, multi-directional and multi-parameter imaging and its clear depiction of soft tissue, magnetic resonance imaging (MRI) has become the main reference for the clinical diagnosis and treatment of brain tumors. Accurate segmentation of brain tumor images is therefore of great significance for medical image analysis and clinical research: it is an indispensable means of extracting quantitative information about particular tissues in images and a prerequisite for three-dimensional visualization and reconstruction of brain tissue. From accurate segmentation results, doctors can obtain the shape, size and location of a tumor, analyze and track it quantitatively, and follow the development and growth of the lesion. Existing brain tumor segmentation methods fall roughly into two categories: segmentation based on traditional machine learning and segmentation based on deep learning.
Traditional MR brain tumor segmentation methods are built mainly on models and techniques from image processing, computer graphics and classical artificial intelligence, and include threshold-based, region-based, model-based and classifier-based methods.
Automatic MR brain tumor segmentation based on deep learning has become a research hotspot. In recent years, deep convolutional neural network models have been applied successfully to many computer vision tasks; they automatically extract deep, highly discriminative features and have quickly spread into medical image processing and analysis. Deep-learning-based MR brain image segmentation has likewise produced a series of important results in recent years, with large performance gains over traditional brain image segmentation methods. However, current deep-learning methods still find it difficult to improve feature extraction and the integration of information across multiple spatial scales. To address this problem, an automatic brain tumor segmentation method based on separable dilated convolution is proposed.
For example, Chinese patent application No. 201580001261.8 discloses a fast magnetic resonance imaging method and device based on a deep convolutional neural network. The method includes: step S1, constructing a deep convolutional neural network; step S2, acquiring offline MR image data, training the network, and learning the mapping between under-sampled and fully sampled MR images; and step S3, reconstructing MR images with the network learned in step S2. That method exploits the prior information in a large volume of offline MR images, so that the trained network can recover finer structures and image features from previously acquired MR data, improving the sampling acceleration factor and imaging accuracy. However, that solution does not actually address automatic brain tumor segmentation, and its information processing capability is very limited.
Summary of the Invention
To solve the above problems in the prior art, namely that current deep-learning techniques cannot extract accurate features or integrate multi-scale information when segmenting brain tumors, the present invention provides an automatic MR brain tumor segmentation method based on separable dilated convolution.
The automatic MR brain tumor segmentation method based on separable dilated convolution proposed by the present invention comprises the following steps:
Step 1: Divide the MR image data into a training set and a test set. The training set is preprocessed to generate processed MR images, and the test set is used for model testing. The segmentation of MR brain tumor images specifically includes:
Step 11: Construct an image data set containing MR image data and labels, and divide it into a training set, used for model training, and a test set, used in the model testing stage.
Obtain the image data and labels. X = [x1, x2, ..., xN] denotes the sample set of all images, where each case image is denoted xi, i = 1, 2, ..., N, and N is the number of image samples; each case image has four representations from different MR sequences, namely the Flair, T1, T2 and T1c modalities. Y = [y1, y2, ..., yN] denotes the labels corresponding to the image data set X. The sample set is then divided, with one part selected as the training sample set Xtr and the other as the test sample set Xte.
Step 12: First remove the 1% highest- and 1% lowest-intensity regions of the training images, obtaining 3D images of size 152×192×146; then cut each 3D image into a series of 2D slice images f1 of size 152×192. According to the tumor characteristics of the images, these f1 images are block-processed to obtain images f2 of size 128×128, which mitigates the data imbalance.
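Step 12 can be sketched in NumPy as follows. The patent does not specify how the "1% highest and lowest intensity regions" are removed or how the tumor-guided block processing chooses its blocks, so this sketch assumes percentile clipping and a centre crop; both are illustrative stand-ins.

```python
import numpy as np

def preprocess_volume(vol, crop=128):
    """Clip the top/bottom 1% of intensities (assumed: percentile clipping
    approximates removing the 1% highest/lowest-intensity regions), then
    split the 3D volume into 2D axial slices and centre-crop each slice to
    crop x crop. The centre crop stands in for the tumor-guided block
    processing described in the patent."""
    lo, hi = np.percentile(vol, [1, 99])
    vol = np.clip(vol, lo, hi)
    h, w, depth = vol.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    return [vol[top:top + crop, left:left + crop, k] for k in range(depth)]

volume = np.random.rand(152, 192, 8)   # toy stand-in for one 3D modality
slices = preprocess_volume(volume)     # a list of 128 x 128 slices
```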
Step 13: Apply z-score normalization to convert the pixel intensities of image f2, which span different magnitudes, to a uniform scale, ensuring the data are regularized. Step 13 specifically includes:
Step 131: Compute the mean μ of the overall pixel data of image f2, and the standard deviation σ of the overall pixel data of image f2.
Step 132: For each pixel x of image f2, apply the formula x' = (x − μ) / σ to generate the processed MR brain tumor image.
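Steps 131 and 132 amount to standard z-score normalization; a minimal NumPy sketch (the epsilon guard is our addition, to avoid division by zero on constant slices):

```python
import numpy as np

def zscore(img, eps=1e-8):
    # x' = (x - mu) / sigma, computed over all pixels of the slice;
    # eps is an added guard for constant (e.g. all-background) slices
    mu = img.mean()
    sigma = img.std()
    return (img - mu) / (sigma + eps)

f2 = np.random.rand(128, 128) * 1000.0   # arbitrary raw intensity scale
norm = zscore(f2)                        # mean ~0, standard deviation ~1
```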
Step 2: Construct the separable-dilated-convolution network for segmenting the processed MR brain tumor images, specifically comprising the following steps:
Step 21: Train the fully convolutional neural network on axial slices taken from the processed MR images, obtaining in turn the axial slices of the four modality images of each scan; specifically, square blocks centered on a given pixel are taken from the Flair, T1, T2 and T1c modality images, and the labels are processed in the same way. The axial slice size is 128×128×4, where 4 denotes the Flair, T1, T2 and T1c modalities.
Step 22: Feed the axial slices obtained in Step 21 into the separable dilated convolution network to segment the brain tumor.
In Step 22, construction of the separable dilated convolution network mainly comprises the following steps:
Step S221: The encoder network extracts features through separable dilated convolution blocks. Each block consists of a 3×3 separable convolution, a 3×3 convolution with dilation rate 2, normalization and a nonlinear activation function, followed by a summation through a shortcut connection, realizing the extraction of brain tumor features.
The encoder network contains three separable dilated convolution blocks, whose feature channel counts are 64, 128 and 256 respectively. The encoder repeatedly downsamples to extract higher-order brain tumor information and capture global semantic information.
Step S222: At the bottom of the network, two residual blocks are added, each composed of 3×3 and 1×1 convolutions and a summation operation, with 512 channels each; this structure helps the network perceive global information while describing local features in more detail.
Step S223: In the decoder network, features are restored through upsampling and convolution operations, and lost boundary information is recovered by merging the corresponding encoder features, improving the network's ability to capture feature information.
The decoder network uses three residual blocks to restore the features, with 256, 128 and 64 channels respectively. The decoder cooperates with the encoder to recover the missing boundary information, and pixel-level classification is performed through a softmax layer.
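One encoder block of Step S221 can be sketched in PyTorch. The patent names the components (3×3 separable convolution, 3×3 convolution with dilation rate 2, normalization, nonlinear activation, shortcut summation) but not their exact ordering, so the ordering and the 1×1 shortcut projection below are assumptions:

```python
import torch
import torch.nn as nn

class SepDilatedBlock(nn.Module):
    """Separable dilated convolution block (assumed ordering): a depthwise-
    separable 3x3 conv, then a 3x3 conv with dilation 2, BatchNorm + ReLU,
    summed with a 1x1-projected shortcut so channel counts match."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.separable = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
            nn.Conv2d(in_ch, out_ch, 1),                          # pointwise
        )
        self.dilated = nn.Conv2d(out_ch, out_ch, 3, padding=2, dilation=2)
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)  # assumed projection

    def forward(self, x):
        y = self.act(self.norm(self.dilated(self.separable(x))))
        return y + self.shortcut(x)  # summation through the shortcut

block = SepDilatedBlock(4, 64)              # first encoder stage: 4 -> 64 channels
out = block(torch.randn(1, 4, 128, 128))    # spatial size is preserved
```

Downsampling between stages (and the 512-channel residual blocks of Step S222) would be stacked around such blocks.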
Step 3: End-to-end training of the separable dilated convolution network model. The processed MR images are fed into the constructed network to optimize it and improve segmentation accuracy.
The network is optimized through Step 3. During training, the loss function consists of the Dice loss and the cross-entropy loss, which measure the network error; stochastic gradient descent is then applied to optimize the network continuously until it reaches its optimum.
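The composite loss of Step 3 can be sketched in PyTorch as follows; the patent does not give the relative weighting of the two terms, so the 1:1 sum is an assumption:

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    # soft Dice over one-hot targets; logits: (N, C, H, W), target: (N, H, W)
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    return 1.0 - (2.0 * inter / (union + eps)).mean()

def combined_loss(logits, target):
    # Dice loss + cross-entropy, as described; equal weighting is assumed
    return dice_loss(logits, target) + F.cross_entropy(logits, target)

logits = torch.randn(2, 5, 16, 16)            # 5 classes on toy 16x16 patches
target = torch.randint(0, 5, (2, 16, 16))
loss = combined_loss(logits, target)          # scalar training loss
```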
Step 4: Apply the trained separable dilated convolution network model to segment the MR brain images. The test set is fed into the network slice by slice, with a slice size of 240×240, and the data of all four modalities are fed into the trained model together; after processing, the model outputs the test segmentation results.
In the above method, the automatic MR brain tumor segmentation network is a deep convolutional neural network whose feature extraction network consists of separable dilated residual blocks. It is a cascaded convolutional neural network composed of an encoder network and a decoder network: the encoder extracts image features through convolution, pooling and related operations and passes them to the decoder; the decoder is connected to the encoder through corresponding operations, and together they restore the features to yield the final segmented MR image.
After the method outputs the brain tumor segmentation results, region-wise tumor metrics are evaluated to obtain the segmentation performance for tumors in different regions.
The metrics are as follows. The brain tumor segmentation results corresponding to the four modalities (Flair, T1, T2 and T1c) cover three pathological regions, namely the whole tumor, the tumor core and the enhancing tumor, which are nested within one another. MR brain tumor images carry four labels, 0, 1, 2 and 4, indicating that a voxel (x, y, z) is labeled healthy tissue, necrotic and non-enhancing, edema, or enhancing, respectively. Segmentation results are reported with the Dice evaluation metric.
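The nested evaluation regions can be computed from the 0/1/2/4 label maps as follows; the label groupings (whole tumor = {1, 2, 4}, tumor core = {1, 4}, enhancing tumor = {4}) follow the BraTS convention, which matches the label semantics given above:

```python
import numpy as np

def dice(pred_mask, gt_mask):
    # Dice = 2|P ∩ G| / (|P| + |G|); defined as 1.0 when both masks are empty
    inter = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return 2.0 * inter / denom if denom else 1.0

REGIONS = {"whole": (1, 2, 4), "core": (1, 4), "enhancing": (4,)}

def evaluate(pred_labels, gt_labels):
    # per-region Dice scores from predicted and ground-truth label maps
    return {name: dice(np.isin(pred_labels, labs), np.isin(gt_labels, labs))
            for name, labs in REGIONS.items()}

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 2          # edema ring
gt[3:5, 3:5] = 4          # enhancing centre
scores = evaluate(gt.copy(), gt)   # a perfect prediction scores 1.0 everywhere
```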
The beneficial effects of the separable dilated convolution MR brain tumor automatic segmentation method of the present invention are as follows: a separable dilated convolution network is used for feature extraction, and an encoder-decoder structure is constructed to realize tumor feature extraction and feature fusion, solving the difficulty that deep-learning brain tumor segmentation models have with feature extraction and with integrating information across multiple spatial scales. In addition, validation of the invention on the BraTS 2018 challenge data set produced good results. The method processes and segments images slice by slice to achieve end-to-end segmentation, reducing segmentation time and hardware requirements, lowering the segmentation error rate, and cutting the cost of automatic brain tumor segmentation.
Brief Description of the Drawings
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a schematic diagram of the segmentation model of the separable dilated convolution MR brain tumor automatic segmentation method of the present invention;
Fig. 2 is a flow chart of the separable dilated convolution MR brain tumor automatic segmentation method of the present invention;
Fig. 3 compares the results of the method with those of other methods on the BraTS 2017 data set;
Fig. 4 compares the results of the method with those of other methods on the BraTS 2018 data set.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments merely explain the technical solutions of the present invention and are not intended to limit its scope of protection. For example, although the steps of the method are described in a specific order, that order is not limiting; without departing from the basic principles of the invention, those skilled in the art may perform the steps in a different order, and such variations fall within the scope of protection of the present application.
Embodiment 1: Referring first to Fig. 1, the present invention uses a neural network built from fused separable dilated convolutions to address the difficulty that current deep-learning brain tumor segmentation methods have with feature extraction and with integrating information across multiple spatial scales; the method offers fast segmentation and high accuracy. In the brain tumor segmentation network model of this embodiment, as shown in Fig. 1, the separable dilated convolutions form a U-shaped convolutional neural network.
Referring to Fig. 2, the specific steps are as follows:
Step 1: Division of the MR image data set and preprocessing of the training data: 2D slicing and the z-score method are used to handle data imbalance and to normalize pixel intensities, generating the processed MR images.
Step 2: Brain tumor segmentation: construct the automatic MR brain tumor segmentation network based on separable dilated convolution.
Step 3: Optimize the parameters of the separable dilated convolution network and segment the processed MR images.
Step 4: Apply the trained separable dilated convolution network model to segment the MR brain test set.
The details are as follows:
1. Processing of magnetic resonance images
This step, covering division of the data set and preprocessing of the MR brain tumor training images, includes:
Step 11: Divide the MR image data set into a training set and a test set.
Step 12: For the training set of MR brain images, first remove the 1% highest- and 1% lowest-intensity regions to obtain 152×192×146 3D data; then cut each 3D image into a series of 2D slice images f1 of size 152×192, and block-process these f1 images with a sliding window to obtain images f2 of size 128×128.
Step 13: Apply z-score intensity normalization to the MR images f2.
Z-score standardization is a common data-processing method: it converts data of different magnitudes into z-scores on a uniform scale so that they can be compared.
In Step 13 of this embodiment, the pixel-intensity normalization converts two or more groups of data into unit-free z-scores, unifying the data standard and improving comparability, at the cost of some interpretability of the raw intensities.
2.脑肿瘤分割网络的构建2. Construction of Brain Tumor Segmentation Network
请参阅图1,在步骤2中,基于可分离空洞卷积的磁共振脑肿瘤自动分割深度神经网络是一个编解码结构的卷积神经网络,由编码网络和解码网络串联构成。编码网络通过可分离空洞卷积、池化等操作进行图像特征的提取,解码网络通过上采样和卷积操作获取图像的特征,实现脑肿瘤分割。Please refer to Figure 1. In step 2, the deep neural network for automatic segmentation of magnetic resonance brain tumors based on separable hole convolution is a convolutional neural network with an encoder-decoder structure, which is composed of an encoder network and a decoder network in series. The encoding network extracts image features through operations such as separable hole convolution and pooling, and the decoding network obtains image features through upsampling and convolution operations to achieve brain tumor segmentation.
在步骤2中,可分离卷积和空洞卷积借鉴Chollet和Yu等人的网络,卷积层通道间的相关性和空间相关性是可以退耦合的,将它们分开映射,能达到更好的效果。可分离卷积和空洞卷积融合获得了更好的局部特征和全局特征描述能力。同时,这种网络充分利用磁共振脑图像的通道和区域信息,可更好的捕获更多像素级细节和空间信息,这有利于磁共振脑分割图像特征信息提取,加强特征捕获能力,实现更加准确分割。In step 2, the separable convolution and the atrous convolution are borrowed from the networks of Chollet and Yu et al. The correlation and spatial correlation between the channels of the convolution layer can be decoupled, and they can be mapped separately to achieve better results. Effect. The fusion of separable convolution and atrous convolution obtains better local and global feature description capabilities. At the same time, this network makes full use of the channel and region information of the MRI brain image, which can better capture more pixel-level details and spatial information, which is conducive to the feature information extraction of MRI brain segmentation images, enhances the ability to capture features, and achieves more accurate segmentation.
In step 2, specifically, the construction of the automatic magnetic resonance brain tumor segmentation method based on separable dilated convolution proceeds as follows:
In step S21, the network is trained on small square patches taken from axial slices of the processed magnetic resonance images. These patches constitute the training samples: each is a square block of size 128 × 128 × 4 centered on a specific pixel, whose four channels are the axial slices of the Flair, T1, T1c, and T2 modalities, respectively. The labels are processed in the same way and correspond to the label values of the training data.
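The patch extraction of step S21 might look like the following NumPy sketch. The function name is an assumption, and boundary handling (padding or clipping near image edges) is left out for brevity.

```python
import numpy as np

def extract_patch(modalities, center, size=128):
    """Cut one size x size x 4 training patch from co-registered axial slices.

    modalities : list of four (H, W) slices in Flair, T1, T1c, T2 order
    center     : (row, col) pixel the patch is centered on
    Assumes the patch lies fully inside the image.
    """
    r, c = center
    half = size // 2
    patch = np.stack(
        [m[r - half:r + half, c - half:c + half] for m in modalities],
        axis=-1,
    )
    return patch  # shape (size, size, 4)
```

The corresponding label patch would be cut from the label slice with the same center and size.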
Step S22: as shown in Figure 1, a schematic diagram of the brain tumor segmentation model of the separable-dilated-convolution method of the present invention. The segmentation network is trained on inputs of size 128 × 128 × 4; the decoder restores the feature information through upsampling, and the final classification is produced by a softmax layer.
3. Model Optimization
The training predictions feed the computation of the Dice loss function. In the training phase, an image of the given size is input and an image of the same size is output, giving per-pixel class predictions, i.e., the probabilities of healthy tissue, necrotic and non-enhancing tumor, edema, and enhancing tumor. The parameters of the entire separable dilated convolutional network are then trained with the error back-propagation algorithm.
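A soft Dice loss of the kind referred to above can be sketched as follows. The exact multi-class formulation used by the patent is not specified; averaging the per-class Dice scores is an assumption.

```python
import numpy as np

def dice_loss(probs, one_hot, eps=1e-6):
    """Soft Dice loss averaged over classes, as a plain-NumPy sketch.

    probs   : (N, K) softmax probabilities per pixel; K = 4 classes
              (healthy, necrotic/non-enhancing, edema, enhancing)
    one_hot : (N, K) one-hot ground-truth labels
    """
    inter = (probs * one_hot).sum(axis=0)
    union = probs.sum(axis=0) + one_hot.sum(axis=0)
    dice = (2.0 * inter + eps) / (union + eps)  # eps avoids 0/0 on empty classes
    return 1.0 - dice.mean()
```

A perfect prediction drives the loss to 0; a completely wrong one drives it toward 1, and the gradient of this loss is what back-propagation distributes through the network.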
It should be noted that the training samples are obtained by random sampling from the training data, and the classes are balanced by discarding samples and by patch-level selection. Training on small axial-slice patches both increases the number of training samples and controls the number of samples per class, which helps achieve class balance. The network is a fully convolutional neural network trained end to end; testing is performed on 2D slices, which speeds up inference.
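The random, class-balanced sampling of patch centers could be realized along the following lines. The per-class quota scheme is an assumption; the text only states that samples are removed and selected at the patch level to balance the classes.

```python
import numpy as np

def sample_balanced_centers(labels, per_class, rng=None):
    """Randomly sample patch-center pixels with a per-class quota.

    labels    : (H, W) label slice
    per_class : dict mapping class id -> number of centers to draw
    Drawing a fixed quota per class implicitly discards the excess of
    majority classes, which is one way to realize the balancing step.
    """
    if rng is None:
        rng = np.random.default_rng()
    centers = []
    for cls, n in per_class.items():
        rows, cols = np.nonzero(labels == cls)
        if len(rows) == 0:
            continue  # class absent from this slice
        idx = rng.choice(len(rows), size=min(n, len(rows)), replace=False)
        centers.extend(zip(rows[idx].tolist(), cols[idx].tolist()))
    rng.shuffle(centers)
    return centers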
Step 3: using axial slices of the processed magnetic resonance images, optimize the parameters of the automatic segmentation method that fuses separable and dilated convolutions. The parameters of the segmentation network based on separable dilated convolution are tuned during the error back-propagation stage.
Step 4: apply the trained separable dilated convolutional network model to segment the magnetic resonance brain images of the test set.
In summary, the automatic magnetic resonance brain tumor segmentation network based on separable dilated convolution of the present invention extracts image features through the fusion of separable and dilated convolutions, addressing the difficulty that deep-learning brain tumor segmentation models have with feature extraction and with integrating information across multiple spatial scales. In addition, good results were obtained on both the BraTS 2017 and BraTS 2018 datasets.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910761883.0A CN110782427B (en) | 2019-08-19 | 2019-08-19 | Magnetic resonance brain tumor automatic segmentation method based on separable cavity convolution |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110782427A true CN110782427A (en) | 2020-02-11 |
CN110782427B CN110782427B (en) | 2023-06-20 |
Family
ID=69383306
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220980A (en) * | 2017-05-25 | 2017-09-29 | 重庆理工大学 | A kind of MRI image brain tumor automatic division method based on full convolutional network |
CN109919212A (en) * | 2019-02-26 | 2019-06-21 | 中山大学肿瘤防治中心 | Method and device for multi-scale detection of tumors in endoscopy images of digestive tract |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
CN109949275A (en) * | 2019-02-26 | 2019-06-28 | 中山大学肿瘤防治中心 | A method and device for diagnosing endoscopic images of the upper gastrointestinal tract |
Non-Patent Citations (1)
Title |
---|
Fisher Yu et al., "Multi-scale context aggregation by dilated convolutions" |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
GR01 | Patent grant | ||