CN110706209B - Method for locating tumors in brain magnetic resonance images using a grid network

Method for locating tumors in brain magnetic resonance images using a grid network

Info

Publication number
CN110706209B
Authority
CN
China
Prior art keywords
tumor
convolution
dimensional
image
magnetic resonance
Prior art date
Legal status
Active
Application number
CN201910874099.0A
Other languages
Chinese (zh)
Other versions
CN110706209A (en)
Inventor
舒华忠
王如梦
谢展鹏
伍家松
孔佑勇
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910874099.0A
Publication of CN110706209A
Application granted
Publication of CN110706209B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/30096 Tumor; Lesion


Abstract

The invention provides a method for automatically localizing tumors in brain magnetic resonance images using a grid network, a novel three-dimensional object detection method. The invention comprises the following steps: obtain image features from the backbone network, a three-dimensional deep convolutional neural network based on a residual network, and perform brain tumor localization based on the feature image obtained by the backbone network. The method applies well to brain magnetic resonance images and realizes tumor region localization in three-dimensional magnetic resonance volumes, with accurate localization results and low computational cost.

Description

Method for locating tumors in brain magnetic resonance images using a grid network
Technical Field
The invention belongs to the field of digital image processing, relates to magnetic resonance image processing methods, and in particular relates to a method for automatically localizing tumors in brain magnetic resonance images using a grid network.
Background
With the continuous development of computer vision technology, many techniques have entered people's daily lives by way of ubiquitous cameras. Although two-dimensional images are currently more widely available, three-dimensional images reflect the real world more faithfully in certain scenarios. For example, magnetic resonance imaging can capture the state of internal organs, and RGB-D images can further improve the safety of autonomous driving. Research on high-performance three-dimensional vision models therefore has great practical significance.
Magnetic resonance imaging is a technique for imaging the internal structure of the human body, and research on it is highly valuable. First, automatic tumor localization can reduce the workload of physicians, giving more patients the opportunity to obtain a diagnosis at a time when medical resources are scarce. Second, combining algorithmic examination of magnetic resonance images with physician review can reduce the risk of misdiagnosis or missed diagnosis; if a tumor is missed during diagnosis, the patient is likely to lose the optimal window for treatment, with very serious consequences. Moreover, segmenting and localizing tumors in three-dimensional images exploits their richer spatial information, so high-precision results can be obtained with greater confidence.
A range of vision challenges such as ImageNet has produced a number of excellent two-dimensional vision models such as VGG and ResNet. These high-performance two-dimensional models demand substantial computing resources; although many algorithms run quickly thanks to the parallel computing capability of GPUs, directly converting such models into corresponding three-dimensional models inevitably increases the computational cost by tens or even hundreds of times. In addition, the storage of intermediate results and the growth in model parameters impose a huge GPU-memory and storage overhead. At the same time, the rapidly growing number of model parameters makes such models increasingly difficult to train and deploy.
Disclosure of Invention
To solve these problems, the invention provides a novel three-dimensional object detection method, obtained by exploring existing deep learning models for three-dimensional images and continuously optimizing and adjusting the network structure: a shallow three-dimensional convolutional neural network model extracts image features, and classification and localization are carried out in a grid fashion.
To achieve this purpose, the invention provides the following technical scheme:
A method for automatically localizing a tumor in a brain magnetic resonance image using a grid network comprises the following steps:
Step 1, define a backbone network, a three-dimensional deep convolutional neural network based on a residual network, specifically comprising the following substeps:
1-1, input three-dimensional MRI image data X: (L, W, H, 1) and perform one three-dimensional convolution operation with stride 2 and kernel [3,3,3], the number of convolution kernels being set to C1, generating data Y1: (L/2, W/2, H/2, C1), where L, W and H are the length, width and height of the original image respectively;
1-2, define a three-dimensional convolution SSCNN with stride 1 and kernel [3,3,3];
1-3, perform one SSCNN convolution on data Y1: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y1_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y1_2: (L/2, W/2, H/2, C1); finally add Y1 and Y1_2 element-wise, generating data Y2: (L/2, W/2, H/2, C1);
1-4, perform one SSCNN convolution on data Y2: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y2_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y2_2: (L/2, W/2, H/2, C1); finally add Y2 and Y2_2 element-wise, generating data Y3: (L/2, W/2, H/2, C1);
1-5, perform one three-dimensional convolution operation on data Y3: (L/2, W/2, H/2, C1) with stride 2 and kernel [3,3,3], the number of convolution kernels being set to C2, generating data Y4: (L/4, W/4, H/4, C2);
1-6, repeat steps 1-3, 1-4 and 1-5 twice, then execute steps 1-3 and 1-4 once more, obtaining the feature Y extracted from the image: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the last execution of step 1-4;
Step 2, define the grid tumor localization network, specifically comprising the following substeps:
2-1, perform one SSCNN convolution on the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, with the number of convolution kernels set to C3, obtaining data G1: (L/16, W/16, H/16, C3);
2-2, perform one three-dimensional convolution operation on data G1: (L/16, W/16, H/16, C3) with kernel 1×1×1 and strides (L/16)/N, (W/16)/M and (H/16)/K along length, width and height respectively, the number of convolution kernels being set to 2, obtaining data G: (N, M, K, 2), where N, M and K are the chosen numbers of divisions of the original image along length, width and height;
2-3, convert data G: (N, M, K, 2) by a Softmax operation into the class probabilities G1: (N, M, K, 2) that each grid block of the original image contains or does not contain tumor; assign each image block the class with the higher probability, and finally merge all image blocks containing tumor to form the complete localization of the tumor mass;
Step 3, perform tumor localization on the three-dimensional brain MRI image, specifically comprising the following substeps:
3-1, divide the three-dimensional magnetic resonance image into N×M×K three-dimensional image blocks, each of size (L/N)×(W/M)×(H/K), where L, W and H are the length, width and height of the original three-dimensional magnetic resonance image respectively;
3-2, apply gridded labels to the three-dimensional magnetic resonance image, the grid classes being image blocks containing tumor and image blocks not containing tumor;
3-3, feed the three-dimensional magnetic resonance image into the network of step 1, generating a feature map of (L/16)×(W/16)×(H/16)×C;
3-4, feed the obtained feature map into the network structure of step 2, generating the N×M×K×2 per-grid classification into the two classes containing tumor and not containing tumor; assign each block the class with the higher probability, forming the final N×M×K result and localizing the tumor.
Further, step 3-2 also comprises the following steps: set a threshold, called the grid positive-case threshold, and define the tumor proportion of a three-dimensional image block as the proportion of voxels in the block labelled as tumor; when the tumor proportion is greater than or equal to the grid positive-case threshold, the block is labelled as an image block containing tumor, and when it is smaller, as an image block not containing tumor.
Further, the three-dimensional convolution operation in step 1 is implemented by the following formula:
Y_j^l = f\left( \sum_i X_i^{l-1} * K_{i,j}^l + b_j^l \right)
where K_{i,j}^l is a three-dimensional convolution kernel, b_j^l is the bias term, and f is a nonlinear activation function.
Further, the Softmax operation in step 2 is completed by the following formula:
P(c \mid X) = \frac{e^{a_c(X)}}{\sum_{c'} e^{a_{c'}(X)}}
where P represents the pseudo-probability that input X is predicted as class c, and a_c(X) is the classification activation response for input X on the class-c feature map.
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method can be better applied to the brain nuclear magnetic resonance image, realizes the tumor region positioning in the three-dimensional nuclear magnetic resonance image, has accurate positioning result and lower calculation resource cost. Through the setting of the grid regular case threshold, the calculation resource can be reduced, the segmentation precision is improved, and the problem that the sample contains tumor blocks and does not contain the tumor blocks and is not distributed uniformly is solved.
Drawings
FIG. 1 is the three-dimensional backbone network structure based on residual learning according to the present invention.
Fig. 2 is a grid tumor localization network provided by the present invention.
Fig. 3 shows the tumor localization result obtained by the method of the present invention: the localization of sample brats_tcia_pat463_0001 with a grid positive-case threshold of 0.3, visualized using resnet_small as the backbone network.
Fig. 4 shows tumor localization results using the method of the present invention, visualized with a grid positive-case threshold of (a) 0.1, (b) 0.2, (c) 0.3 and (d) 0.4.
Detailed Description
The technical solutions provided by the present invention will be described in detail below with reference to specific examples, and it should be understood that the following specific embodiments are only illustrative of the present invention and are not intended to limit the scope of the present invention.
The method for automatically localizing tumors in brain magnetic resonance images using a grid network extracts high-level features of the brain image and localizes the tumor region according to the extracted features. First, a feature image is obtained from a three-dimensional backbone network based on residual learning; then convolution is performed on the feature image to predict whether each three-dimensional image block contains tumor; finally, the predicted three-dimensional image blocks are merged into a whole tumor mass. Specifically, the method of the invention comprises the following steps:
Step 1, obtain image features from the backbone network, a three-dimensional deep convolutional neural network based on a residual network:
1-1, the three-dimensional backbone network structure based on residual learning is shown in fig. 1. Input three-dimensional MRI image data X: (L, W, H, 1) and perform one three-dimensional convolution operation with stride 2 and kernel (convolution kernel) [3,3,3]; the number of convolution kernels of this three-dimensional convolution can be set to an appropriate value C1, generating data Y1: (L/2, W/2, H/2, C1). L, W and H are the length, width and height of the original image respectively.
The three-dimensional convolution operation is implemented by the following formula:
Y_j^l = f\left( \sum_i X_i^{l-1} * K_{i,j}^l + b_j^l \right)
The formula describes each channel X_i^{l-1} of the previous layer being convolved with a three-dimensional convolution kernel K_{i,j}^l, a bias term b_j^l being added, and a new feature map Y_j^l being obtained through the nonlinear activation function f. Each convolution kernel K_{i,j}^l is essentially a set of parameters obtained by learning. At the first layer, X^{l-1} corresponds to the input itself, its channels being the different channels of the original input picture.
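As a concrete illustration of this input stage, the minimal sketch below implements the stride-2, 3×3×3 convolution of step 1-1. The patent names no framework, so PyTorch, ReLU as the activation f, and C1 = 16 are our assumptions; note that PyTorch stores channels first, so shapes read (batch, C, L, W, H) rather than (L, W, H, C).

```python
# Hedged sketch of step 1-1, assuming PyTorch (the patent names no framework).
# A stride-2, 3x3x3 convolution halves every spatial dimension of the volume.
import torch
import torch.nn as nn

C1 = 16  # number of convolution kernels; the patent leaves this value open

conv_in = nn.Conv3d(in_channels=1, out_channels=C1,
                    kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 1, 128, 128, 128)  # X: one single-channel MRI volume (L, W, H, 1)
y1 = torch.relu(conv_in(x))           # Y1: (1, C1, 64, 64, 64), i.e. (L/2, W/2, H/2, C1)
print(y1.shape)
```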
1-2, define a three-dimensional convolution SSCNN (Single Stride Convolution Neural Network) with stride 1 and kernel [3,3,3].
1-3, perform one SSCNN convolution on data Y1: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y1_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y1_2: (L/2, W/2, H/2, C1); finally add Y1 and Y1_2 element-wise (Element-wise Addition), generating data Y2: (L/2, W/2, H/2, C1).
1-4, perform one SSCNN convolution on data Y2: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y2_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y2_2: (L/2, W/2, H/2, C1); finally add Y2 and Y2_2 element-wise (Element-wise Addition), generating data Y3: (L/2, W/2, H/2, C1). The operations in steps 1-3 and 1-4 (two SSCNN convolutions and one element-wise addition, followed by two more SSCNN convolutions and one element-wise addition) are defined as a sub-network structure called a pyramid level.
1-5, perform one three-dimensional convolution operation on data Y3: (L/2, W/2, H/2, C1) with stride 2 and kernel [3,3,3]; the number of convolution kernels can be set to an appropriate value C2, generating data Y4: (L/4, W/4, H/4, C2).
1-6, repeat steps 1-3, 1-4 and 1-5 twice, then execute steps 1-3 and 1-4 once more, obtaining the feature Y extracted from the image: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the last execution of step 1-4.
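Putting steps 1-1 through 1-6 together, the backbone can be sketched as follows, again assuming PyTorch; the class names and the channel counts (16, 32, 64, 128) are illustrative choices, not values fixed by the patent. Each pyramid level is a standard residual block: two stride-1 convolutions followed by an element-wise addition with the level's input.

```python
# Hedged sketch of the residual backbone (steps 1-1 to 1-6), assuming PyTorch.
import torch
import torch.nn as nn

class PyramidLevel(nn.Module):
    """Steps 1-3/1-4: two SSCNN convolutions plus an element-wise (residual) add."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, stride=1, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, 3, stride=1, padding=1)

    def forward(self, y):
        y_a = torch.relu(self.conv1(y))   # first SSCNN convolution  (e.g. Y1_1)
        y_b = torch.relu(self.conv2(y_a)) # second SSCNN convolution (e.g. Y1_2)
        return y + y_b                    # element-wise addition    (e.g. Y2)

class Backbone(nn.Module):
    """Four stages of [stride-2 conv, two pyramid levels]; four stride-2
    convolutions reduce every spatial dimension 16x, matching step 1-6."""
    def __init__(self, channels=(16, 32, 64, 128)):  # C1..C4 are placeholders
        super().__init__()
        stages, in_ch = [], 1
        for c in channels:
            stages += [nn.Conv3d(in_ch, c, 3, stride=2, padding=1), nn.ReLU(),
                       PyramidLevel(c), PyramidLevel(c)]
            in_ch = c
        self.stages = nn.Sequential(*stages)

    def forward(self, x):
        return self.stages(x)             # (B, C, L/16, W/16, H/16)

feat = Backbone()(torch.randn(1, 1, 128, 128, 128))
print(feat.shape)  # torch.Size([1, 128, 8, 8, 8])
```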
Step 2, perform brain tumor localization based on the feature image obtained from the backbone network, specifically comprising the following steps:
2-1, the grid tumor localization network is shown in fig. 2. Perform one SSCNN convolution on the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, with the number of convolution kernels set to an appropriate value C3, obtaining data G1: (L/16, W/16, H/16, C3).
2-2, perform one three-dimensional convolution operation on data G1: (L/16, W/16, H/16, C3) with kernel 1×1×1 and strides (L/16)/N, (W/16)/M and (H/16)/K along length, width and height respectively; the number of convolution kernels must be set to 2, obtaining data G: (N, M, K, 2), where N, M and K are the chosen numbers of divisions of the original image along length, width and height.
2-3, convert data G: (N, M, K, 2) by a Softmax operation into the class probabilities G1: (N, M, K, 2) that each grid block of the original image contains or does not contain tumor; assign each image block the class with the higher probability, and finally merge all image blocks containing tumor to form the complete localization of the tumor mass.
The Softmax operation is implemented by the following equation:
P(c \mid X) = \frac{e^{a_c(X)}}{\sum_{c'} e^{a_{c'}(X)}}
where P represents the pseudo-probability that input X is predicted as class c, and a_c(X) is the classification activation response for input X on the class-c feature map.
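Under the same PyTorch assumption, the grid localization head of steps 2-1 to 2-3 might look like the sketch below. C3, the grid divisions (N, M, K), the feature-map size, and the convention that channel 1 means "contains tumor" are all our illustrative choices.

```python
# Hedged sketch of the grid tumor localization head (steps 2-1 to 2-3).
import torch
import torch.nn as nn

class GridHead(nn.Module):
    def __init__(self, in_ch, c3, grid, feat_shape):
        super().__init__()
        n, m, k = grid           # N, M, K: grid divisions along length/width/height
        fl, fw, fh = feat_shape  # (L/16, W/16, H/16)
        self.sscnn = nn.Conv3d(in_ch, c3, 3, stride=1, padding=1)  # step 2-1
        # Step 2-2: a 1x1x1 convolution whose strides reduce the map to the grid.
        self.grid_conv = nn.Conv3d(c3, 2, kernel_size=1,
                                   stride=(fl // n, fw // m, fh // k))

    def forward(self, feat):
        g1 = torch.relu(self.sscnn(feat))
        g = self.grid_conv(g1)           # G: (B, 2, N, M, K)
        probs = torch.softmax(g, dim=1)  # step 2-3: per-grid class probabilities
        return probs.argmax(dim=1)       # 1 where a block is predicted as tumor

head = GridHead(in_ch=128, c3=64, grid=(8, 8, 8), feat_shape=(8, 8, 8))
mask = head(torch.randn(1, 128, 8, 8, 8))
print(mask.shape)  # torch.Size([1, 8, 8, 8]): the N x M x K localization grid
```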
Step 3, use the networks defined in steps 1 and 2 to carry out the whole tumor localization process on the three-dimensional brain MRI image, specifically comprising the following steps:
3-1, divide the three-dimensional magnetic resonance image into N×M×K three-dimensional image blocks, each of size (L/N)×(W/M)×(H/K), where L, W and H are the length, width and height of the original three-dimensional magnetic resonance image respectively.
3-2, apply gridded labels to the three-dimensional magnetic resonance image. The grid classes are image blocks containing tumor and image blocks not containing tumor. Set a threshold, called the grid positive-case threshold, and define the tumor proportion of a three-dimensional image block as the proportion of voxels in the block labelled as tumor. When the tumor proportion is greater than or equal to the grid positive-case threshold, the block is labelled as an image block containing tumor; when it is smaller, as an image block not containing tumor.
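A minimal NumPy sketch of this labelling rule follows; the function name grid_labels, the assumption that block sizes divide the volume evenly, and the example threshold 0.3 are ours.

```python
# Hedged sketch of the gridded labelling of step 3-2, assuming a binary NumPy
# mask whose voxels are 1 inside the annotated tumor.
import numpy as np

def grid_labels(tumor_mask, grid=(8, 8, 8), pos_threshold=0.3):
    """Label a block 1 (contains tumor) when its tumor-voxel fraction is at
    least the grid positive-case threshold, otherwise 0."""
    n, m, k = grid
    L, W, H = tumor_mask.shape
    labels = np.zeros(grid, dtype=np.int64)
    for i in range(n):
        for j in range(m):
            for l in range(k):
                block = tumor_mask[i*L//n:(i+1)*L//n,
                                   j*W//m:(j+1)*W//m,
                                   l*H//k:(l+1)*H//k]
                labels[i, j, l] = int(block.mean() >= pos_threshold)
    return labels

mask = np.zeros((128, 128, 128)); mask[40:80, 40:80, 40:80] = 1  # toy tumor
print(grid_labels(mask).sum(), "of", 8 * 8 * 8, "blocks labelled positive")
```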
3-3, feed the three-dimensional magnetic resonance image into the network of step 1, generating a feature map of (L/16)×(W/16)×(H/16)×C.
3-4, feed the obtained feature map into the network structure of step 2, generating the N×M×K×2 per-grid classification into the two classes containing tumor and not containing tumor, and assign each block the class with the higher probability. This completes the localization of the tumor.
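At inference time the pieces chain together as below; this reuses the Backbone and GridHead sketches given earlier, and the 128×128×128 input size is illustrative.

```python
# Sketch of inference for steps 3-3 and 3-4, reusing the Backbone and GridHead
# classes from the sketches above.
import torch

volume = torch.randn(1, 1, 128, 128, 128)  # one three-dimensional MRI volume
features = Backbone()(volume)               # step 3-3: (1, 128, 8, 8, 8) features
head = GridHead(in_ch=128, c3=64, grid=(8, 8, 8), feat_shape=(8, 8, 8))
grid_mask = head(features)                  # step 3-4: (1, 8, 8, 8) grid decisions
# The union of all blocks predicted positive localizes the tumor mass.
```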
The automatic tumor localization method for magnetic resonance imaging according to the present invention will be described below using the BraTS 2015 dataset as an example.
The experimental conditions are as follows: the experiments were run on a computer with an Intel processor (3.4 GHz), 10 GB of RAM and a 64-bit OS; the programming language is Python.
The experimental data are brain magnetic resonance images from the BraTS 2015 dataset. The BraTS brain tumor image segmentation challenge is a public image segmentation challenge held annually since 2012 alongside the MICCAI conference. The dataset provides high-quality manual segmentation labels as well as magnetic resonance images generated using different imaging methods.
For the labels, BraTS provides 5 classes: 1: necrotic tissue (necrosis), 2: edematous tissue (edema), 3: non-enhancing tumor region (non-enhancing tumor), 4: enhancing tumor region (enhancing tumor) and 5: healthy brain tissue. The complete tumor of a patient is represented by labels 1, 2, 3 and 4, and the complete tumor of the synthetic data by labels 1 and 2; the Tumor Core of a patient is represented by labels 1, 3 and 4, and the tumor core of the synthetic data by label 2; the Enhancing Tumor of a patient is represented by label 4, and this part has no synthetic-data samples. The imaging methods used in the dataset are pre-contrast T1, post-contrast T1, T2 and T2-FLAIR. All images are aligned to an anatomical template and rescaled by linear interpolation so that each voxel measures 1 mm^3; the resolution of the original dataset is (155, 240, 240). The BraTS 2015 data we use contain 220 HGG (high-grade) and 54 LGG (low-grade) training volumes, and the test set consists of 53 volumes mixing HGG and LGG. Fig. 3 visualizes the localization result of the backbone network resnet_small on the sample brats_tcia_pat463_0001 with a grid positive-case threshold of 0.3. Fig. 4 shows the tumor localization results of the invention with grid positive-case thresholds of (a) 0.1, (b) 0.2, (c) 0.3 and (d) 0.4. Clearly, the method can classify tumor and non-tumor regions in the three-dimensional magnetic resonance image, thereby localizing the tumor. When the grid network is used to generate region proposals for semantic segmentation, the grid positive-case threshold can be raised appropriately to reduce the number of predicted RoI (Region of Interest) regions and hence the area to be segmented, saving computing resources, or lowered to increase the number of RoI regions and improve segmentation precision. Lowering the grid threshold increases the number of positive samples (blocks containing tumor); raising it decreases that number.
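This threshold trade-off can be seen directly by reusing the grid_labels sketch from step 3-2 (the mask variable is the toy tumor defined there): raising the grid positive-case threshold yields fewer, denser positive blocks, while lowering it yields more positive blocks.

```python
# Counting positive grid blocks at the four thresholds visualized in Fig. 4.
for t in (0.1, 0.2, 0.3, 0.4):
    n_pos = grid_labels(mask, pos_threshold=t).sum()
    print(f"threshold {t}: {n_pos} positive blocks")
```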
The technical means disclosed in the scheme of the invention are not limited to those disclosed in the above embodiments, and also include technical schemes formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.

Claims (4)

1. A method for automatically localizing a tumor in a brain magnetic resonance image using a grid network, characterized by comprising the following steps:
Step 1, define a backbone network, a three-dimensional deep convolutional neural network based on a residual network, specifically comprising the following substeps:
1-1, input three-dimensional MRI image data X: (L, W, H, 1) and perform one three-dimensional convolution operation with stride 2 and kernel [3,3,3], the number of convolution kernels being set to C1, generating data Y1: (L/2, W/2, H/2, C1), where L, W and H are the length, width and height of the original image respectively;
1-2, define a three-dimensional convolution SSCNN with stride 1 and kernel [3,3,3];
1-3, perform one SSCNN convolution on data Y1: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y1_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y1_2: (L/2, W/2, H/2, C1); finally add Y1 and Y1_2 element-wise, generating data Y2: (L/2, W/2, H/2, C1);
1-4, perform one SSCNN convolution on data Y2: (L/2, W/2, H/2, C1) with the number of convolution kernels set to C1, generating the convolved result Y2_1: (L/2, W/2, H/2, C1); perform another SSCNN convolution with the number of convolution kernels set to C1, generating the convolved result Y2_2: (L/2, W/2, H/2, C1); finally add Y2 and Y2_2 element-wise, generating data Y3: (L/2, W/2, H/2, C1);
1-5, perform one three-dimensional convolution operation on data Y3: (L/2, W/2, H/2, C1) with stride 2 and kernel [3,3,3], the number of convolution kernels being set to C2, generating data Y4: (L/4, W/4, H/4, C2);
1-6, repeat steps 1-3, 1-4 and 1-5 twice, then execute steps 1-3 and 1-4 once more, obtaining the feature Y extracted from the image: (L/16, W/16, H/16, C), where C is the number of convolution kernels in the last execution of step 1-4;
Step 2, define the grid tumor localization network, specifically comprising the following substeps:
2-1, perform one SSCNN convolution on the feature map Y: (L/16, W/16, H/16, C) output by the backbone network, with the number of convolution kernels set to C3, obtaining data G1: (L/16, W/16, H/16, C3);
2-2, perform one three-dimensional convolution operation on data G1: (L/16, W/16, H/16, C3) with kernel 1×1×1 and strides (L/16)/N, (W/16)/M and (H/16)/K along length, width and height respectively, the number of convolution kernels being set to 2, obtaining data G: (N, M, K, 2), where N, M and K are the chosen numbers of divisions of the original image along length, width and height;
2-3, convert data G: (N, M, K, 2) by a Softmax operation into the class probabilities G1: (N, M, K, 2) that each grid block of the original image contains or does not contain tumor; assign each image block the class with the higher probability, and finally merge all image blocks containing tumor to form the complete localization of the tumor mass;
Step 3, perform tumor localization on the three-dimensional brain MRI image, specifically comprising the following substeps:
3-1, divide the three-dimensional magnetic resonance image into N×M×K three-dimensional image blocks, each of size (L/N)×(W/M)×(H/K), where L, W and H are the length, width and height of the original three-dimensional magnetic resonance image respectively;
3-2, apply gridded labels to the three-dimensional magnetic resonance image, the grid classes being image blocks containing tumor and image blocks not containing tumor;
3-3, feed the three-dimensional magnetic resonance image into the network of step 1, generating a feature map of (L/16)×(W/16)×(H/16)×C;
3-4, feed the obtained feature map into the network structure of step 2, generating the N×M×K×2 per-grid classification into the two classes containing tumor and not containing tumor, and complete the localization of the tumor.
2. The method for automatically localizing a tumor in a brain magnetic resonance image using a grid network according to claim 1, characterized in that step 3-2 further comprises the following steps: set a threshold, called the grid positive-case threshold, and define the tumor proportion of a three-dimensional image block as the proportion of voxels in the block labelled as tumor; when the tumor proportion is greater than or equal to the grid positive-case threshold, the block is labelled as an image block containing tumor, and when it is smaller than the grid positive-case threshold, as an image block not containing tumor.
3. The method for automatically localizing a tumor in a brain magnetic resonance image using a grid network according to claim 1, characterized in that the three-dimensional convolution operation in step 1 is implemented by the following formula:
Y_j^l = f\left( \sum_i X_i^{l-1} * K_{i,j}^l + b_j^l \right)
where K_{i,j}^l is a three-dimensional convolution kernel, b_j^l is the bias term, and f is a nonlinear activation function.
4. The method for automatically localizing a tumor in a brain magnetic resonance image using a grid network according to claim 1, characterized in that the Softmax operation in step 2 is completed by the following formula:
P(c \mid X) = \frac{e^{a_c(X)}}{\sum_{c'} e^{a_{c'}(X)}}
where P represents the pseudo-probability that input X is predicted as class c, and a_c(X) is the classification activation response for input X on the class-c feature map.
CN201910874099.0A 2019-09-17 2019-09-17 Method for locating tumors in brain magnetic resonance images using a grid network Active CN110706209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910874099.0A 2019-09-17 2019-09-17 Method for locating tumors in brain magnetic resonance images using a grid network


Publications (2)

Publication Number Publication Date
CN110706209A (en) 2020-01-17
CN110706209B (en) 2022-04-29

Family ID: 69196101

Family Applications (1): CN201910874099.0A, filed 2019-09-17, priority date 2019-09-17, status Active

Country Status (1): CN

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053342A (en) * 2020-09-02 2020-12-08 陈燕铭 Method and device for extracting and identifying pituitary magnetic resonance image based on artificial intelligence

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160361A (en) * 2015-09-30 2015-12-16 东软集团股份有限公司 Image identification method and apparatus
CN106056595A (en) * 2015-11-30 2016-10-26 浙江德尚韵兴图像科技有限公司 Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network
CN107680082A (en) * 2017-09-11 2018-02-09 宁夏医科大学 Lung tumor identification method based on depth convolutional neural networks and global characteristics
US10304193B1 (en) * 2018-08-17 2019-05-28 12 Sigma Technologies Image segmentation and object detection using fully convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Kai Hu et al., "Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field," IEEE Access, vol. 7, pp. 92615-92627, 2019-07-26. *


Similar Documents

Publication Title
CN111476292B (en) Small sample element learning training method for medical image classification processing artificial intelligence
WO2022199143A1 (en) Medical image segmentation method based on u-shaped network
CN111931811B (en) Calculation method based on super-pixel image similarity
CN112150428A (en) Medical image segmentation method based on deep learning
CN106296653A (en) Brain CT image hemorrhagic areas dividing method based on semi-supervised learning and system
CN111932529B (en) Image classification and segmentation method, device and system
Pan et al. Mitosis detection techniques in H&E stained breast cancer pathological images: A comprehensive review
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN109920538B (en) Zero sample learning method based on data enhancement
Ye et al. Medical image diagnosis of prostate tumor based on PSP-Net+ VGG16 deep learning network
CN114693933A (en) Medical image segmentation device based on generation of confrontation network and multi-scale feature fusion
Qin et al. Large-scale tissue histopathology image segmentation based on feature pyramid
Qian et al. Multi-scale context UNet-like network with redesigned skip connections for medical image segmentation
Mamdouh et al. A New Model for Image Segmentation Based on Deep Learning.
CN113781387A (en) Model training method, image processing method, device, equipment and storage medium
Banerjee et al. A CADe system for gliomas in brain MRI using convolutional neural networks
CN110706209B (en) Method for locating tumors in brain magnetic resonance images using a grid network
Liu et al. Multi-Scale Contourlet Knowledge Guide Learning Segmentation
CN109816665A (en) A kind of fast partition method and device of optical coherence tomographic image
You et al. A cGAN-based tumor segmentation method for breast ultrasound images
Guo et al. Double U-Nets for image segmentation by integrating the region and boundary information
Xian et al. Automatic tongue image quality assessment using a multi-task deep learning model
Salini et al. Deepfakes on retinal images using GAN
Nasim et al. Review on multimodality of different medical image fusion techniques
Alrais et al. Support vector machine (SVM) for medical image classification of tumorous

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant