CN110706209B - Method for positioning tumor in brain magnetic resonance image of grid network - Google Patents


Info

Publication number
CN110706209B
CN110706209B (grant of application CN201910874099.0A; publication CN110706209A)
Authority
CN
China
Prior art keywords
tumor
convolution
image
dimensional
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910874099.0A
Other languages
Chinese (zh)
Other versions
CN110706209A (en)
Inventor
舒华忠
王如梦
谢展鹏
伍家松
孔佑勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910874099.0A priority Critical patent/CN110706209B/en
Publication of CN110706209A publication Critical patent/CN110706209A/en
Application granted granted Critical
Publication of CN110706209B publication Critical patent/CN110706209B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention provides a grid-network method for automatically locating tumors in brain magnetic resonance images. It proposes a new three-dimensional object-detection approach that uses a shallow three-dimensional convolutional neural network to extract image features and a grid scheme for classification and localization. The method comprises: extracting image features with a backbone network, a three-dimensional deep convolutional neural network based on residual learning, and locating the brain tumor from the feature maps produced by that backbone. The method applies well to brain magnetic resonance images, localizes the tumor region in three-dimensional MRI volumes accurately, and has a low computational cost.

Description

A Grid-Network-Based Tumor Localization Method for Brain Magnetic Resonance Images

Technical Field

The invention belongs to the field of digital images and relates to magnetic resonance image processing, in particular to a grid-network method for automatically locating tumors in brain magnetic resonance images.

Background

With the continuous development of computer vision and the ubiquity of cameras, many vision technologies have entered daily life. Although two-dimensional images remain far more common, in certain scenarios three-dimensional images reflect the real world more faithfully: magnetic resonance imaging reveals the state of internal organs, for example, and RGB-D images can further improve the safety of autonomous driving. Studying high-performance three-dimensional vision models therefore has great practical significance.

Magnetic resonance imaging (MRI) images the internal structure of the human body, and research on MRI analysis is highly valuable. First, automatic tumor localization can reduce the workload of physicians to a certain extent; with medical resources in short supply, it gives more patients a chance to be diagnosed. Second, combining algorithmic inspection of MRI images with a physician's examination reduces the risk of misdiagnosis or missed diagnosis; a missed tumor can cost the patient the best window for treatment, with very serious consequences. Moreover, because three-dimensional images carry richer spatial information, segmenting and localizing tumors in 3D offers greater confidence in producing high-precision results.

A series of vision challenges such as ImageNet have spawned many excellent two-dimensional vision models, such as VGG and ResNet. These high-performance two-dimensional models all demand substantial computing resources. Although many algorithms already run at high speed thanks to the parallel computing power of GPUs, directly converting these models into corresponding three-dimensional models inevitably increases the computational cost by tens or even hundreds of times. In addition, storing intermediate results and the growth in model parameters bring huge memory overhead. At the same time, the rapidly growing number of model parameters also makes the model

Summary of the Invention

To solve the above problems, the present invention explores existing deep learning models for three-dimensional images and continually optimizes the network structure, proposing a new three-dimensional object detection method: a shallow three-dimensional convolutional neural network extracts image features, and a grid scheme performs classification and localization.

To achieve the above object, the present invention provides the following technical solution:

A grid-network method for automatically locating tumors in brain magnetic resonance images, comprising the following steps:

Step 1. Define the backbone network, a three-dimensional deep convolutional neural network based on residual learning, through the following sub-steps:

1-1. Apply to the input three-dimensional MRI image data X: (L, W, H, 1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3], the number of convolution kernels being set to C1, producing data Y1: (L/2, W/2, H/2, C1), where L, W and H are respectively the length, width and height of the original image;

1-2. Define SSCNN, a three-dimensional convolution with stride 1 and kernel [3, 3, 3];

1-3. Apply one SSCNN convolution with C1 kernels to Y1: (L/2, W/2, H/2, C1), producing Y1_1: (L/2, W/2, H/2, C1); apply a second SSCNN convolution with C1 kernels, producing Y1_2: (L/2, W/2, H/2, C1); finally add Y1 and Y1_2 element-wise, producing data Y2: (L/2, W/2, H/2, C1);

1-4. Apply one SSCNN convolution to Y2: (L/2, W/2, H/2, C1), the number of kernels being set to C1, producing Y2_1: (L/2, W/2, H/2, C1); apply a second SSCNN convolution with C1 kernels, producing Y2_2: (L/2, W/2, H/2, C1); finally add Y2 and Y2_2 element-wise, producing data Y3: (L/2, W/2, H/2, C1);

1-5. Apply to Y3: (L/2, W/2, H/2, C1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3], the number of kernels being set to C2, producing data Y4: (L/4, W/4, H/4, C2);

1-6. Repeat steps 1-3, 1-4 and 1-5 twice more, then execute steps 1-3 and 1-4 once, yielding the features extracted from the image, Y: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the final repetition of steps 1-3 and 1-4;
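Under the stated strides, the backbone's shape arithmetic can be traced without any deep-learning framework. The sketch below is a minimal pure-Python illustration, not the patented implementation: the helper names are hypothetical, and the channel counts are free choices standing in for C1, C2, and so on. It follows the four stride-2 convolutions of steps 1-1 through 1-6, which together give the overall downsampling factor of 16.

```python
# Shape trace of the step-1 backbone: four stride-2 downsampling convolutions
# (step 1-1 plus three occurrences of step 1-5), each followed by a pyramidal
# level, give an overall spatial downsampling factor of 16.

def conv3d_shape(shape, stride, channels):
    """Output shape of a padded 3D convolution: spatial dims divide by the
    stride, the channel dim becomes the number of kernels."""
    l, w, h, _ = shape
    return (l // stride, w // stride, h // stride, channels)

def pyramidal_level_shape(shape, channels):
    """Steps 1-3/1-4: stride-1 SSCNN convolutions plus element-wise
    additions leave the spatial shape unchanged."""
    l, w, h, _ = shape
    return (l, w, h, channels)

def backbone_shape(L, W, H, channels=(8, 16, 32, 64)):
    shape = (L, W, H, 1)                        # input MRI volume X
    shape = conv3d_shape(shape, 2, channels[0])       # step 1-1: stride 2
    shape = pyramidal_level_shape(shape, channels[0]) # steps 1-3, 1-4
    for c in channels[1:]:                      # step 1-6: repeat 1-5, 1-3, 1-4
        shape = conv3d_shape(shape, 2, c)       # stride-2 downsampling
        shape = pyramidal_level_shape(shape, c)
    return shape                                # (L/16, W/16, H/16, C)

print(backbone_shape(160, 240, 240))  # (10, 15, 15, 64)
```

For a 160×240×240 volume the trace ends at (10, 15, 15, C), matching the (L/16, W/16, H/16, C) feature map of step 1-6.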

Step 2. Define the grid tumor-localization network through the following sub-steps:

2-1. Apply one SSCNN convolution with C3 kernels to the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, producing data G1: (L/16, W/16, H/16, C3);

2-2. Apply to G1: (L/16, W/16, H/16, C3) a three-dimensional convolution with a 1×1×1 kernel and strides of (L/16)/N, (W/16)/M and (H/16)/K along the length, width and height respectively, the number of kernels being set to 2, producing data G: (N, M, K, 2), where N, M and K denote the chosen numbers of divisions of the original image along its length, width and height;

2-3. Pass G: (N, M, K, 2) through a Softmax operation to obtain, for every grid cell of the original image, the class probabilities of containing and not containing a tumor block, G1: (N, M, K, 2); assign to each image block the class with the larger probability; finally merge all image blocks that contain tumor to form the complete tumor localization;
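Per grid cell, step 2-3 reduces to a two-class Softmax followed by taking the larger probability. The following is a minimal pure-Python sketch under that reading; the function names and the nested-list representation are illustrative assumptions, since a real implementation would operate on framework tensors.

```python
import math

# Step 2-3 sketch: turn the (N, M, K, 2) grid scores G into per-block class
# probabilities with Softmax, and keep the cells whose "tumor" probability
# exceeds the "no tumor" probability.

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def locate_tumor_blocks(grid_scores):
    """grid_scores: nested N x M x K lists of [no_tumor, tumor] logits.
    Returns the (n, m, k) indices of blocks classified as containing tumor."""
    tumor_blocks = []
    for n, plane in enumerate(grid_scores):
        for m, row in enumerate(plane):
            for k, scores in enumerate(row):
                p_no, p_yes = softmax(scores)
                if p_yes > p_no:         # assign the larger-probability class
                    tumor_blocks.append((n, m, k))
    return tumor_blocks

grid = [[[[2.0, -1.0], [-0.5, 1.5]],
         [[0.0, 0.0], [3.0, -2.0]]]]     # N=1, M=2, K=2
print(locate_tumor_blocks(grid))         # [(0, 0, 1)]
```

The returned index set is exactly the merged localization of step 2-3: the union of all grid cells classified as containing tumor.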

Step 3. Perform tumor localization on the three-dimensional brain MRI image through the following sub-steps:

3-1. Divide the three-dimensional MRI volume into N×M×K three-dimensional image blocks, each of size (H/N)×(W/M)×(L/K), where H, W and L denote the length, width and height of the original three-dimensional MRI volume;

3-2. Assign gridded labels to the three-dimensional MRI volume; the grid classes are image blocks that contain tumor and image blocks that do not;

3-3. Feed the three-dimensional MRI volume into the network of step 1, producing a feature map of size (L/16)×(W/16)×(H/16)×C;

3-4. Feed the obtained feature map into the network structure of step 2, producing an N×M×K×2 per-grid classification into the two classes, containing tumor and not containing tumor; assign to each block whichever of the two classes has the larger probability, forming the final N×M×K result and completing the localization of the tumor.

Further, step 3-2 also includes the following procedure: set a threshold, called the "grid positive-example threshold", and define the "tumor ratio" of an image block as the proportion of the block's voxels that are marked as tumor. When the tumor ratio is greater than or equal to the grid positive-example threshold, the block is labeled as an image block containing tumor; when the tumor ratio is below the threshold, the block is labeled as an image block not containing tumor.
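The labeling rule in this claim can be sketched directly. This is a pure-Python illustration; the function name and the flat 0/1 voxel-mask representation are assumptions made for the example.

```python
# Sketch of the claimed labeling rule: a block is a positive example
# ("contains tumor") when its tumor ratio reaches the grid positive-example
# threshold. block_mask is a flat list of 0/1 voxel labels for one block.

def label_block(block_mask, positive_threshold):
    """Return True (contains tumor) when the fraction of voxels marked as
    tumor is >= the grid positive-example threshold."""
    tumor_ratio = sum(block_mask) / len(block_mask)
    return tumor_ratio >= positive_threshold

block = [0, 0, 1, 1, 0, 0, 0, 1]  # 3 of 8 voxels marked as tumor: ratio 0.375
print(label_block(block, 0.3))    # True  (0.375 >= 0.3)
print(label_block(block, 0.4))    # False (0.375 <  0.4)
```

Raising the threshold makes this test stricter, so fewer blocks become positives; lowering it does the opposite, which is the trade-off the description discusses.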

Further, the three-dimensional convolution operation in step 1 is implemented by the following formula:

Y_j^l = f( Σ_i X_i^(l-1) * W_ij^l + b_j^l )

where W_ij^l is the three-dimensional convolution kernel, b_j^l is the bias term, and f is the nonlinear activation function.

Further, the Softmax operation in step 2 is computed by the following formula:

P(c | X) = exp(a_c(X)) / Σ_{c'} exp(a_{c'}(X))

where P denotes the pseudo-probability that the input X is predicted as a positive example of class c, and a_c(X) denotes the classification activation response of the feature map of input X for class c.

Compared with the prior art, the present invention has the following advantages and beneficial effects:

The invention applies well to brain magnetic resonance images and localizes the tumor region in three-dimensional MRI volumes; the localization is accurate and the computational cost is low. Setting the grid positive-example threshold reduces computation, improves segmentation precision, and helps address the uneven distribution of samples between blocks that contain tumor and blocks that do not.

Brief Description of the Drawings

Figure 1 is the residual-learning-based three-dimensional backbone network structure provided by the invention.

Figure 2 is the grid tumor-localization network provided by the invention.

Figure 3 is a tumor localization result obtained with the method of the invention: a visualization for sample brats_tcia_pat463_0001 with a grid positive-example threshold of 0.3 and resnet_small as the backbone network.

Figure 4 shows tumor localization results obtained with the method of the invention, visualized at grid positive-example thresholds of (a) 0.1, (b) 0.2, (c) 0.3 and (d) 0.4.

Detailed Description

The technical solution provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments only illustrate the invention and do not limit its scope.

The grid-network method for automatically locating tumors in brain magnetic resonance images extracts high-level features from the brain image and localizes the tumor region from the extracted features. First, feature maps are obtained from the residual-learning-based three-dimensional backbone network; then a convolution over the feature maps predicts whether each three-dimensional image block contains tumor; finally the per-block predictions are merged into the overall tumor region. Specifically, the method comprises the following steps:

Step 1. Obtain the image features from the backbone network, a three-dimensional deep convolutional neural network based on residual learning:

1-1. The residual-learning-based three-dimensional backbone structure is shown in Figure 1. Apply to the input three-dimensional MRI image data X: (L, W, H, 1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3]; the number of convolution kernels may be set to a suitable value C1, producing data Y1: (L/2, W/2, H/2, C1). L, W and H are respectively the length, width and height of the original image.

The three-dimensional convolution operation is implemented by the following formula:

Y_j^l = f( Σ_i X_i^(l-1) * W_ij^l + b_j^l )

The formula convolves each channel X_i^(l-1) of the previous layer with a three-dimensional convolution kernel W_ij^l, adds a bias term b_j^l, and applies a nonlinear activation function f to obtain a new feature map. Every kernel W_ij^l is, in essence, a set of parameters obtained by learning. When l = 0, X_i^(l-1) corresponds to the input layer, that is, to the different channels of the original input image.

1-2. Define SSCNN (Single Stride Convolutional Neural Network), a three-dimensional convolution with stride 1 and kernel [3, 3, 3].

1-3. Apply one SSCNN convolution to Y1: (L/2, W/2, H/2, C1), the number of kernels being set to C1, producing Y1_1: (L/2, W/2, H/2, C1); apply a second SSCNN convolution with C1 kernels, producing Y1_2: (L/2, W/2, H/2, C1); finally perform an element-wise addition of Y1 and Y1_2, producing data Y2: (L/2, W/2, H/2, C1).

1-4. Apply one SSCNN convolution to Y2: (L/2, W/2, H/2, C1), the number of kernels being set to C1, producing Y2_1: (L/2, W/2, H/2, C1); apply a second SSCNN convolution with C1 kernels, producing Y2_2: (L/2, W/2, H/2, C1); finally perform an element-wise addition of Y2 and Y2_2, producing data Y3: (L/2, W/2, H/2, C1). The operations of steps 1-3 and 1-4, namely two SSCNN convolutions, one element-wise addition, two SSCNN convolutions and one element-wise addition, are defined as a sub-network structure called a Pyramidal Level.

1-5. Apply to Y3: (L/2, W/2, H/2, C1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3]; the number of kernels may be set to a suitable value C2, producing data Y4: (L/4, W/4, H/4, C2).

1-6. Repeat steps 1-3, 1-4 and 1-5 twice more, then execute steps 1-3 and 1-4 once, yielding the features extracted from the image, Y: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the final repetition of steps 1-3 and 1-4.

Step 2. Localize the brain tumor from the feature maps obtained by the backbone network, specifically:

2-1. The grid tumor-localization network is shown in Figure 2. Apply one SSCNN convolution to the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, with the number of kernels set to a suitable value C3, producing data G1: (L/16, W/16, H/16, C3).

2-2. Apply to G1: (L/16, W/16, H/16, C3) a three-dimensional convolution with a 1×1×1 kernel and strides of (L/16)/N, (W/16)/M and (H/16)/K along the length, width and height respectively; the number of kernels must be set to 2, producing data G: (N, M, K, 2), where N, M and K denote the chosen numbers of divisions of the original image along its length, width and height.

2-3. Pass G: (N, M, K, 2) through a Softmax operation to obtain, for every grid cell of the original image, the class probabilities of containing and not containing a tumor block, G1: (N, M, K, 2); assign to each image block the class with the larger probability; finally merge all image blocks that contain tumor to form the complete tumor localization.

The Softmax operation is implemented by the following formula:

P(c | X) = exp(a_c(X)) / Σ_{c'} exp(a_{c'}(X))

where P denotes the pseudo-probability that the input X is predicted as a positive example of class c, and a_c(X) is the classification activation response of the feature map of input X for class c.

Step 3. Using the networks defined in steps 1 and 2, the complete tumor-localization procedure on a three-dimensional brain MRI image is as follows:

3-1. Divide the three-dimensional MRI volume into N×M×K three-dimensional image blocks, each of size (H/N)×(W/M)×(L/K), where H, W and L denote the length, width and height of the original three-dimensional MRI volume.

3-2. Assign gridded labels to the three-dimensional MRI volume. The grid classes are image blocks that contain tumor and image blocks that do not. Set a threshold, called the "grid positive-example threshold", and define the "tumor ratio" of a block as the proportion of its voxels marked as tumor. When the tumor ratio is greater than or equal to the grid positive-example threshold, the block is labeled as an image block containing tumor; when the tumor ratio is below the threshold, the block is labeled as an image block not containing tumor.

3-3. Feed the three-dimensional MRI volume into the network of step 1, producing a feature map of size (L/16)×(W/16)×(H/16)×C.

3-4. Feed the obtained feature map into the network structure of step 2, producing an N×M×K×2 per-grid classification into the two classes, containing tumor and not containing tumor, and completing the localization of the tumor.

The automatic tumor localization method for magnetic resonance images is illustrated below on the Brats 2015 dataset.

Experimental conditions: the experiments were run on a computer with an Intel processor (3.4 GHz), 10 GB of random-access memory and a 64-bit operating system; the programming language is Python.

The experimental data are brain magnetic resonance images from the Brats 2015 dataset. The Brats brain tumor image segmentation challenge has been held annually since 2012 in conjunction with the MICCAI conference. The challenge dataset provides high-quality manual segmentation annotations together with magnetic resonance images acquired with different imaging methods.

在标记上,Brats提供了5种标签,分别是:1:坏死组织(necrosis),2:水肿组织(edema),3:非增强肿瘤区域(non-enhanced regions oftumors),4:增强肿瘤区域(enhancedregions of tumors)和5:健康肿瘤组织(healthy brain tissue)。其中代表病人完整肿瘤的是标签1,标签2,标签3和标签4代表的部分,合成数据的完整肿瘤标签是标签1和标签2;病人肿瘤核心(Tumor Core)部分使用的是标签1,标签3和标签4代表,合成数据的肿瘤核心是标签2;病人的增强肿瘤(Enhancing Tumor)部分为标签4,这一部分没有合成数据的样本。数据集采用的成像方法分别为:预对比T1(pre-contrast T1),后对比T1(postcontrast T1),T2和T2 FLAIR方法。所有的图像都是利用解剖学样本对齐的并将图像通过线性插值法缩放到每一个三维像素点对应到1〖mm〗^3的尺度,原始数据集解析度在(155,240,240)。在我们使用的Brats2015数据集中有220个HGG(high grade)训练集和54个LGG(low grade)训练集。测试集则由53张混合了HGG和LGG的图片组成。图3为采用本发明方法通过对brats_tcia_pat463_0001样本,在网格正例阈值为0.3时,使用resnet_small为骨干网络的定位结果可视化。图4为本发明的肿瘤定位结果,(a)为0.1的网格正例阈值定位结果可视化,(b)为0.2的网格正例阈值定位结果可视化,(c)为0.3的网格正例阈值定位结果可视化,(d)为0.4的网格正例阈值定位结果可视化。很显然,基于本发明方法,能够对三维核磁共振图像中的肿瘤和非肿瘤区域进行分类,从而实现对肿瘤的定位。使用网格网络作为语义分割的区域建议区域生成方法,可以适当提高网格正例阈值来减少生成的RoI(Regionofinterest感兴趣区域)区域的预测来减少需要进行分割的区域,降低计算资源,或者是降低网格正例阈值来增加RoI区域数量来提高分割的精度。调低网格阈值,增加正样本即包含肿瘤快的个数,调高网格阈值,较少正样本即包含肿瘤快的个数,因此,通过网格正例阈值的设置,有助于解决样本包含肿瘤块和不包含肿瘤块分布不均的问题。On the label, Brats provides 5 labels, namely: 1: necrosis, 2: edema, 3: non-enhanced regions of tumors, 4: enhanced tumor regions ( enhanced regions of tumors) and 5: healthy brain tissue. Among them, the complete tumor of the patient is represented by label 1, label 2, label 3 and label 4, and the complete tumor label of the synthetic data is label 1 and label 2; the tumor core of the patient (Tumor Core) part uses label 1, label 3 and label 4 represent that the tumor core of the synthetic data is label 2; the enhancing tumor part of the patient is label 4, and this part has no samples of synthetic data. The imaging methods used in the dataset are: pre-contrast T1, post-contrast T1, T2 and T2 FLAIR methods. All images were aligned using anatomical samples and scaled by linear interpolation to a scale corresponding to 1〖mm〗^3 for each voxel point, and the original dataset resolution was (155, 240, 240). 
The Brats 2015 dataset we use contains 220 HGG (high grade) training volumes and 54 LGG (low grade) training volumes. The test set consists of 53 volumes mixing HGG and LGG. FIG. 3 visualizes the localization result of the proposed method on the sample brats_tcia_pat463_0001, with a grid positive example threshold of 0.3 and resnet_small as the backbone network. FIG. 4 shows the tumor localization results of the present invention: (a) through (d) visualize the localization results at grid positive example thresholds of 0.1, 0.2, 0.3 and 0.4, respectively. Clearly, the proposed method can classify tumor and non-tumor regions in a three-dimensional magnetic resonance image and thereby localize the tumor. When the grid network serves as a region proposal generator for semantic segmentation, the grid positive example threshold can be raised to reduce the number of predicted RoI (Region of Interest) regions, shrinking the area that must be segmented and saving computing resources, or lowered to increase the number of RoI regions and improve segmentation accuracy. Lowering the grid threshold increases the number of positive samples (blocks containing tumor), while raising it decreases that number; the grid positive example threshold therefore helps to mitigate the imbalance between blocks that contain tumor and blocks that do not.
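The effect of the grid positive example threshold described above can be illustrated with a minimal sketch; the function name and the numeric example are ours, not the patent's.

```python
def grid_label(tumor_voxels, total_voxels, threshold):
    """Label one 3D image block: 1 (contains tumor) if its tumor proportion
    reaches the grid positive example threshold, else 0 (no tumor)."""
    return 1 if tumor_voxels / total_voxels >= threshold else 0

# A block where 250 of 1000 voxels are labelled tumor (tumor proportion 0.25):
# lowering the threshold admits it as a positive RoI, raising it rejects it.
print(grid_label(250, 1000, 0.2))  # 1 -> block becomes an RoI to segment
print(grid_label(250, 1000, 0.3))  # 0 -> block is skipped, saving computation
```

Raising the threshold thus trades recall of small tumor fragments for fewer regions to process, which is exactly the tuning knob discussed in the text.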

The technical means disclosed in the solution of the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions composed of any combination of the above technical features. It should be pointed out that those skilled in the art may make several improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (4)

1. A method for automatically localizing tumors in brain magnetic resonance images with a grid network, characterized by comprising the following steps:

Step 1, defining the backbone network of a three-dimensional deep convolutional neural network based on a residual network, specifically comprising the following sub-steps:
1-1, applying to the input three-dimensional MRI data X: (L, W, H, 1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3], the number of convolution kernels being set to C1, to generate data Y1: (L/2, W/2, H/2, C1), where L, W and H are the length, width and height of the original image, respectively;
1-2, defining a three-dimensional convolution SSCNN with stride 1 and kernel [3, 3, 3];
1-3, applying one SSCNN convolution with C1 kernels to Y1: (L/2, W/2, H/2, C1) to generate the result Y1_1: (L/2, W/2, H/2, C1), applying a second SSCNN convolution with C1 kernels to generate the result Y1_2: (L/2, W/2, H/2, C1), and finally adding Y1 and Y1_2 element-wise to generate data Y2: (L/2, W/2, H/2, C1);
1-4, applying one SSCNN convolution to Y2: (L/2, W/2, H/2, C1), the number of kernels being set to C1, to generate the result Y2_1: (L/2, W/2, H/2, C1), applying a second SSCNN convolution with C1 kernels to generate the result Y2_2: (L/2, W/2, H/2, C1), and finally adding Y2 and Y2_2 element-wise to generate data Y3: (L/2, W/2, H/2, C1);
1-5, applying to Y3: (L/2, W/2, H/2, C1) a three-dimensional convolution with stride 2 and kernel [3, 3, 3], the number of kernels being set to C2, to generate data Y4: (L/4, W/4, H/4, C2);
1-6, repeating steps 1-3, 1-4 and 1-5 twice, and finally performing steps 1-3 and 1-4 once more, to obtain the features extracted from the image, Y: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the last repetition of step 1-4;

Step 2, defining the grid tumor localization network, specifically comprising the following sub-steps:
2-1, applying one SSCNN convolution with C3 kernels to the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, to obtain data G1: (L/16, W/16, H/16, C3);
2-2, applying to G1: (L/16, W/16, H/16, C3) a three-dimensional convolution with kernel 1×1×1 and strides (L/16)/N, (W/16)/M and (H/16)/K along the length, width and height, respectively, the number of kernels being set to 2, to obtain data G: (N, M, K, 2), where N, M and K denote the chosen numbers of divisions of the original image along its length, width and height;
2-3, converting G: (N, M, K, 2) through a Softmax operation into class probabilities G1: (N, M, K, 2) that each grid cell of the original image contains or does not contain a tumor block, assigning to each image block the class with the larger probability, and finally integrating all image blocks containing tumor to form the localization of the complete tumor;

Step 3, performing tumor localization on the three-dimensional brain MRI image, specifically comprising the following sub-steps:
3-1, dividing the three-dimensional magnetic resonance image into N×M×K three-dimensional image blocks, each of size (H/N)×(W/M)×(L/K), where H, W and L denote the length, width and height of the original three-dimensional magnetic resonance image, respectively;
3-2, assigning gridded labels to the three-dimensional magnetic resonance image, the grid classes being image blocks that contain tumor and image blocks that do not contain tumor;
3-3, feeding the three-dimensional magnetic resonance image into the network of step 1 to produce a feature map of size (L/16)×(W/16)×(H/16)×C;
3-4, feeding the obtained feature map into the network structure of step 2 to produce, for each of the N×M×K grid cells, a two-way classification into blocks that contain tumor and blocks that do not, thereby completing the localization of the tumor.

2. The method for automatically localizing tumors in brain magnetic resonance images with a grid network according to claim 1, characterized in that step 3-2 further comprises the following process: setting a threshold, called the "grid positive example threshold", and defining the "tumor proportion" of an image block as the fraction of voxels in the three-dimensional image block that are labelled as tumor; when the tumor proportion is greater than or equal to the grid positive example threshold, the image block is labelled as a block containing tumor, and when the tumor proportion is below the grid positive example threshold, it is labelled as a block not containing tumor.

3. The method for automatically localizing tumors in brain magnetic resonance images with a grid network according to claim 1, characterized in that the three-dimensional convolution operation in step 1 is realized by the following formula:
$$v_{ij}^{xyz} = f\left(\sum_{m}\sum_{p=0}^{P_i-1}\sum_{q=0}^{Q_i-1}\sum_{r=0}^{R_i-1} w_{ijm}^{pqr}\, v_{(i-1)m}^{(x+p)(y+q)(z+r)} + b_{ij}\right)$$

where $w_{ijm}^{pqr}$ is the three-dimensional convolution kernel, $b_{ij}$ is the bias term, and $f$ is a nonlinear activation function.
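As a concrete check of the three-dimensional convolution operation claimed in claim 3, the sketch below implements the single-channel, stride-1, no-padding case directly; this is a deliberate simplification of the claimed operation (which uses stride 2 and multiple kernels), and the function name is ours.

```python
def conv3d(v, w, b=0.0, f=lambda t: t):
    """Direct 3D convolution of volume v with kernel w (stride 1, no padding),
    followed by the bias b and activation f, as in the claim-3 formula."""
    P, Q, R = len(w), len(w[0]), len(w[0][0])   # kernel extents
    L, W, H = len(v), len(v[0]), len(v[0][0])   # volume extents
    return [[[f(b + sum(w[p][q][r] * v[x + p][y + q][z + r]
                        for p in range(P)
                        for q in range(Q)
                        for r in range(R)))
              for z in range(H - R + 1)]
             for y in range(W - Q + 1)]
            for x in range(L - P + 1)]

# A 3x3x3 volume of ones convolved with a 2x2x2 kernel of ones: every output
# voxel sums 8 inputs, and the output shrinks to 2x2x2.
ones = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
kernel = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
out = conv3d(ones, kernel)
print(out[0][0][0])                           # 8.0
print(len(out), len(out[0]), len(out[0][0]))  # 2 2 2
```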
4. The method for automatically localizing tumors in brain magnetic resonance images with a grid network according to claim 1, characterized in that the Softmax operation in step 2 is performed by the following formula:
$$P(c \mid X) = \frac{\exp\left(F_c(X)\right)}{\sum_{c'} \exp\left(F_{c'}(X)\right)}$$

where $P$ denotes the pseudo-probability that the input $X$ is a positive example of the predicted class $c$, and $F_c(X)$ denotes the classification activation response of the feature map at layer $C$ for the input $X$.
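The Softmax classification step of claims 1 and 4 can be sketched as follows: each grid cell's two raw activations are converted into pseudo-probabilities, and the larger one decides the cell's class. The class-index convention (1 = contains tumor, 0 = no tumor) and the function names here are illustrative assumptions, not fixed by the patent.

```python
import math

def softmax(logits):
    """Convert raw activations into pseudo-probabilities (claim-4 formula)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_cell(activations):
    """Assign a grid cell the class with the larger pseudo-probability:
    index 1 = contains tumor, index 0 = no tumor (illustrative convention)."""
    probs = softmax(activations)
    return probs.index(max(probs))

print(classify_cell([0.2, 1.5]))   # 1 -> cell is predicted to contain tumor
print(classify_cell([2.0, -1.0]))  # 0 -> cell is predicted tumor-free
print(round(sum(softmax([0.2, 1.5])), 6))  # 1.0 -> a valid distribution
```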
CN201910874099.0A 2019-09-17 2019-09-17 Method for positioning tumor in brain magnetic resonance image of grid network Active CN110706209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874099.0A CN110706209B (en) 2019-09-17 2019-09-17 Method for positioning tumor in brain magnetic resonance image of grid network


Publications (2)

Publication Number Publication Date
CN110706209A CN110706209A (en) 2020-01-17
CN110706209B (en) 2022-04-29

Family

ID=69196101


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053342A (en) * 2020-09-02 2020-12-08 陈燕铭 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence
CN117853871B (en) * 2024-01-15 2024-12-27 重庆理工大学 Brain tumor detection method based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160361A (en) * 2015-09-30 2015-12-16 东软集团股份有限公司 Image identification method and apparatus
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN107680082A (en) * 2017-09-11 2018-02-09 宁夏医科大学 Lung tumor identification method based on depth convolutional neural networks and global characteristics
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field;KAI HU等;《IEEE Access》;20190726;第7卷;第92615-92627页 *


Similar Documents

Publication Publication Date Title
Dong et al. Inception v3 based cervical cell classification combined with artificially extracted features
CN110930416B (en) A U-shaped network-based MRI image prostate segmentation method
CN106296653B (en) Brain CT image hemorrhagic areas dividing method and system based on semi-supervised learning
CN110120033A (en) Based on improved U-Net neural network three-dimensional brain tumor image partition method
CN110570352B (en) Image labeling method, device and system and cell labeling method
US11935213B2 (en) Laparoscopic image smoke removal method based on generative adversarial network
CN113706486B (en) Pancreatic tumor image segmentation method based on dense connection network migration learning
Shan et al. SCA-Net: A spatial and channel attention network for medical image segmentation
Deng et al. Combining residual attention mechanisms and generative adversarial networks for hippocampus segmentation
CN110415253A (en) A point-based interactive medical image segmentation method based on deep neural network
WO2023205896A1 (en) Systems and methods for detecting structures in 3d images
CN110706209B (en) Method for positioning tumor in brain magnetic resonance image of grid network
CN118334336A (en) Colposcope image segmentation model construction method, image classification method and device
Liu et al. 3-D prostate MR and TRUS images detection and segmentation for puncture biopsy
CN115908438A (en) CT image focus segmentation method, system and equipment based on deep supervised ensemble learning
CN112508860B (en) Artificial intelligence interpretation method and system for positive check of immunohistochemical image
Jing et al. A Novel 3D Reconstruction Algorithm of Motion‐Blurred CT Image
CN117611596A (en) Semi-supervised lung cancer medical image segmentation method and device based on LesionMix and entropy minimization
CN117745736A (en) Cross-domain small sample CT image semantic segmentation system and method based on meta-learning
CN112508844B (en) A Weakly Supervised Brain Magnetic Resonance Image Segmentation Method
CN115578400A (en) Image processing method, image segmentation network training method and device
Jiang et al. Pedestrian Tracking Based on HSV Color Features and Reconstruction by Contributions
Zhang et al. Segmentation preprocessing and deep learning based classification of skin lesions
Li et al. Uncertainty quantification in medical image segmentation
Jia et al. Three-dimensional segmentation of hippocampus in brain MRI images based on 3CN-net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant