CN112767407B - CT image kidney tumor segmentation method based on cascade gating 3DUnet model - Google Patents


Info

Publication number
CN112767407B
CN112767407B (application CN202110141339.3A)
Authority
CN
China
Prior art keywords
layer
stage
dataset
tumor
gating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110141339.3A
Other languages
Chinese (zh)
Other versions
CN112767407A (en)
Inventor
孙玉宝
吴敏
徐宏伟
刘青山
辛宇
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110141339.3A priority Critical patent/CN112767407B/en
Publication of CN112767407A publication Critical patent/CN112767407A/en
Application granted granted Critical
Publication of CN112767407B publication Critical patent/CN112767407B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30084Kidney; Renal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Abstract

The invention discloses a CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model, comprising the following steps: collect abdominal CT image sequences containing the kidneys, label the kidneys and tumors in each sequence to generate the corresponding label masks, and build a dataset Dataset; apply preprocessing operation P1 to Dataset and construct a U-shaped deep network model M1 for segmenting the kidney (including tumor); crop the image sequences in Dataset and their label masks to extract only the voxels belonging to the kidney (or tumor), apply preprocessing operation P2, and construct a deep segmentation network M2 for tumor segmentation based on gated convolution layers; train models M1 and M2 separately; segment kidney tumor regions by cascading M1 and M2. By combining a two-stage cascaded segmentation model with gated convolution layers, the invention builds a deep network for tumor segmentation that remains robust to the shape changes of cancerous kidneys and effectively segments tumors of different sizes.

Description

CT image kidney tumor segmentation method based on cascade gating 3DUnet model
Technical Field
The invention relates to the fields of computer, software and artificial intelligence technology, and in particular to a CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model.
Background
Medical imaging technology has advanced greatly with the rapid development of science and technology. It integrates modern medicine, physics, electronic information and computer technology, and has become an indispensable tool in the medical field. Computed tomography (CT) is a widely used medical imaging modality that can accurately and comprehensively depict the detailed characteristics of human organs, tissues and lesions, allowing a specialist to observe a lesion non-invasively, directly and clearly, and to formulate an effective treatment plan in time, thereby improving diagnostic efficiency and cure rates. The kidneys are a key component of the urinary system and play an important role in regulating acid-base balance, metabolism and the like; with faster life rhythms and increasing work pressure, kidney diseases of various kinds continue to rise. Kidney tumor, commonly "kidney cancer" and also called renal cell carcinoma, is a common kidney disease: a malignant tumor originating from the renal epithelium, with complex pathological types and distinct clinical manifestations. Compared with other imaging modalities, CT images present and distinguish the lesion characteristics of kidney cancer better, and have become an important basis for doctors' preliminary diagnosis and follow-up of kidney disease.
In clinical practice, accurate segmentation of the kidney and lesion regions is important for diagnosis, functional assessment and treatment decisions. Early segmentation work was delineated manually by experienced doctors; this approach is highly subjective, inefficient and not reproducible, and cannot meet clinical needs or current requirements for quantitative diagnosis. With the development of modern science and technology, medical image segmentation by computer has become feasible, and researchers have begun to explore automatic segmentation methods. However, segmenting the kidneys accurately and reliably in CT images remains difficult: CT images have low contrast, the boundaries between the kidneys and adjacent organs and tissues are blurred, individual shapes vary, and water and air inside the kidneys easily cause noise, cavities and the like. For kidney cancer patients, tumors vary widely in size and their pixel values are close to those of normal kidney tissue, making the boundaries hard to distinguish; segmenting kidney tumors in CT images therefore poses many challenges. Developing a fully automatic segmentation algorithm for renal tumors in CT images thus has very practical research significance.
At present, deep learning has achieved remarkable results in many fields. Its superiority comes from the ability of convolutional neural networks to extract features automatically, which makes deep models broadly applicable to different tasks; as the related theory continues to deepen, this advantage has become more and more evident, and deep learning has rapidly developed into a mainstream technology of the big-data era. In recent years, deep learning models for medical image segmentation have emerged in increasing numbers. Although convolutional neural networks can extract effective features automatically, for the task of kidney tumor segmentation in CT images they still suffer from insufficient robustness to changes in kidney morphology and insufficient tumor segmentation accuracy. An accurate kidney tumor segmentation model can provide a quantitative diagnostic basis for the clinic and assist doctors in decision making, and thus has important clinical significance and good application prospects.
Objects of the invention:
The invention aims to solve the technical problem of segmenting the kidneys and their tumors in CT images, and provides a kidney tumor segmentation algorithm based on a cascaded gated three-dimensional fully convolutional network (3DUnet) model, so as to accurately segment kidney tumor regions in a CT image sequence.
Technical scheme:
To solve the above technical problem, the invention provides a kidney tumor segmentation algorithm based on a cascaded gated 3DUnet model, with the following technical scheme:
A CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model comprises the following specific steps:
s101, acquiring images containing kidneys in abdominal CT scanning, taking out slice images containing kidneys or tumors to form an image sequence, marking the kidneys and tumors in each slice image by using marking software, generating corresponding marking masks, and constructing a Dataset Dataset;
s102, performing P1 pretreatment on the Dataset, dividing the Dataset subjected to P1 pretreatment into an M1 training set and an M1 testing set, and training and testing the constructed 3DUnet to obtain a kidney segmentation depth network model M1;
s103, performing P2 pretreatment on the Dataset, dividing the Dataset after P2 pretreatment into an M2 training set and an M2 testing set, and training and testing the constructed three-dimensional gate control residual total convolution network to obtain a depth network segmentation model M2 for tumor segmentation;
s104, after the image sequence to be segmented is subjected to P1 pretreatment, a kidney region is segmented by using M1, and segmentation results are spliced; cutting the spliced segmentation result, taking out voxels with only kidneys or tumors, performing P2 pretreatment, and segmenting out tumor areas by using M2.
Further, in S101, the kidneys and tumors in the slice images are labeled with the ITK-SNAP medical image labeling software, the corresponding label masks are generated, and the slice images together with their label masks form the Dataset.
Further, in S102, the preprocessing P1 applied to Dataset specifically comprises: interpolating all slice images in Dataset and their label masks to a voxel spacing of the same resolution; randomly cropping each interpolated slice image and its label mask into small voxel patches, and normalizing the voxel patches; the resolution after interpolation is lower than before interpolation.
Further, a 3DUnet is constructed according to the size of the voxel patches and is used to obtain the segmentation mask of the kidney region.
Further, in S103, the preprocessing P2 applied to Dataset specifically comprises: interpolating all slice images in Dataset and their label masks to a voxel spacing of the same resolution; obtaining the tumor boundary from the interpolated label masks by edge detection; randomly cropping each interpolated slice image, its label mask and the tumor boundary into small voxel patches, and normalizing the voxel patches; the resolution after interpolation is higher than before interpolation.
Further, a three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches, comprising a backbone network and a tumor shape branch network. The backbone network is a 3DUnet whose encoder and decoder are skip-connected between feature maps of the same resolution. The tumor shape branch network comprises three cascaded gated convolution layers together with 3x3x3 convolution layers, trilinear interpolation and 1x1x1 convolution layers: the outputs of the first- and second-stage deconvolution layers of the decoder, each passed through a 1x1x1 convolution layer, serve as the inputs of the first-stage gated convolution layer; the output of the first-stage gated convolution layer, passed through a 3x3x3 convolution layer and trilinear interpolation, together with the output of the third-stage deconvolution layer of the decoder passed through a 1x1x1 convolution layer, serve as the inputs of the second-stage gated convolution layer; the output of the second-stage gated convolution layer, passed through a 3x3x3 convolution layer and trilinear interpolation, together with the output of the fourth-stage deconvolution layer of the decoder passed through a 1x1x1 convolution layer, serve as the inputs of the third-stage gated convolution layer; the output of the third-stage gated convolution layer, passed through a 1x1x1 convolution layer, is fed to a sigmoid function that outputs the final prediction mask, and this shape-branch output is concatenated with the output feature map of the backbone network for the final segmentation.
Compared with the prior art, the invention has the following advantages:
A two-stage segmentation model based on network cascading is provided, and a random cropping strategy is adopted, which reduces the influence of class imbalance and small targets to a certain extent. To address the difficulty of distinguishing tumor boundaries, a 3DUnet is used as the backbone network and a tumor shape branch network is built from gated convolution layers, so that the tumor boundary can be predicted and tumor segmentation performance improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the predictive operation of the method of the present invention;
FIG. 3 is a diagram of the structure of M1 model in the present invention;
FIG. 4 is a diagram of the structure of M2 model in the present invention.
Detailed Description
To make the technical problems to be solved, the technical scheme and the beneficial effects clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are for illustration only and are not intended to limit the scope of the invention.
The kidney tumor segmentation algorithm based on the cascaded gated 3DUnet model provided by this embodiment is used to segment kidney tumors in CT images, as shown in FIGS. 1 and 2, with the following specific steps:
S101, acquire images containing the kidneys from abdominal CT scans, take out the slice images containing kidneys or tumors to form an image sequence, label the kidneys and tumors in each slice image with labeling software, generate the corresponding label masks, and construct a dataset Dataset.
The data of this embodiment come from the KiTS19 challenge dataset. The raw data are plain-scan CT sequences; the images and the corresponding manual label masks are provided in anonymized NIfTI format with shape (number of slices, height, width), where the slice dimension corresponds to the axial view. All patients were supine during image acquisition, with the slice index increasing from top to bottom. When a patient has several scans, the one with the smallest slice thickness is selected; slice thickness in this dataset ranges from 1 mm to 5 mm.
S102, perform preprocessing P1 on Dataset, divide it into a training set and a test set, and construct a deep network model M1 for kidney segmentation based on 3DUnet (shown in FIG. 3).
Preprocessing P1: resample the sequence images of all samples to a standard voxel spacing of 3.22×1.62×1.62 mm (the lowest resolution) using trilinear interpolation, and resample the manual label masks of all samples to the same voxel spacing using nearest-neighbor interpolation. Then take out voxel patches of size 80×160×160 (the median) from each sample by random cropping (50% overlap, i.e. a stride of (40,80,80)). Limit the CT values of each sample's sequence images to the range [-79, 304] (the 5%-95% range of all pixel values) to remove abnormal intensity values caused by certain substances, and, because of the way weights are initialized in the network, apply z-score normalization: subtract the mean from each pixel value and divide by the standard deviation, so that the image pixels lie in a range more easily handled by the CNN. The 3DUnet, comprising an encoder and a decoder, is designed according to the size of the voxel patches. The parameter settings of the encoder are shown in Table 1; the decoder adopts a symmetric structure.
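The clipping, normalization and overlapping-crop steps above can be sketched in a few lines. This is a minimal numpy sketch, not the patent's implementation: the resampling to standard spacing is assumed to have been done already, and `p1_preprocess` is a hypothetical helper name.

```python
import numpy as np

def p1_preprocess(volume, patch=(80, 160, 160), stride=(40, 80, 80),
                  lo=-79.0, hi=304.0):
    """Clip CT values, z-score normalize, and cut 50%-overlap patches.

    `volume` is assumed to be already resampled to the standard
    3.22 x 1.62 x 1.62 mm spacing (resampling itself is omitted here).
    """
    v = np.clip(volume.astype(np.float32), lo, hi)   # suppress abnormal intensities
    v = (v - v.mean()) / (v.std() + 1e-8)            # z-score normalization
    patches = []
    for z in range(0, max(v.shape[0] - patch[0], 0) + 1, stride[0]):
        for y in range(0, max(v.shape[1] - patch[1], 0) + 1, stride[1]):
            for x in range(0, max(v.shape[2] - patch[2], 0) + 1, stride[2]):
                patches.append(v[z:z + patch[0], y:y + patch[1], x:x + patch[2]])
    return patches
```

With the stride fixed at half the patch size, the "random cropping (50% overlap)" of the text reduces to a deterministic sliding window at prediction time, which is what makes the later reverse-order stitching possible.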
Table 1 Parameter settings of the encoder

Type                     Kernel size / stride   Output size
Convolution layer (x2)   3x3x3 / 1              80x160x160x30
Max pooling layer        1x2x2 / (1,2,2)        80x80x80x30
Convolution layer (x2)   3x3x3 / 1              80x80x80x60
Max pooling layer        2x2x2 / 2              40x40x40x60
Convolution layer (x2)   3x3x3 / 1              40x40x40x120
Max pooling layer        2x2x2 / 2              20x20x20x120
Convolution layer (x2)   3x3x3 / 1              20x20x20x240
Max pooling layer        2x2x2 / 2              10x10x10x240
Convolution layer (x2)   3x3x3 / 1              10x10x10x320
Max pooling layer        2x2x2 / 2              5x5x5x320
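The output-size column of Table 1 can be sanity-checked by tracing tensor shapes stage by stage. The pure-Python sketch below assumes 'same' padding for the 3x3x3 convolutions (so only pooling changes the spatial size); `encoder_shapes` is an illustrative name, not from the patent.

```python
def encoder_shapes(depth=80, height=160, width=160):
    """Trace (D, H, W, C) shapes through the Table-1 encoder.

    Each stage is (output channels, max-pooling stride (d, h, w)).
    Convolutions use 'same' padding, so only pooling shrinks D/H/W.
    """
    stages = [(30, (1, 2, 2)), (60, (2, 2, 2)), (120, (2, 2, 2)),
              (240, (2, 2, 2)), (320, (2, 2, 2))]
    d, h, w = depth, height, width
    shapes = []
    for c, (sd, sh, sw) in stages:
        shapes.append((d, h, w, c))          # after the two 3x3x3 convolutions
        d, h, w = d // sd, h // sh, w // sw  # after max pooling
        shapes.append((d, h, w, c))
    return shapes
```

Running this reproduces the table: the first pooling uses stride (1,2,2) so the depth of 80 is preserved while height and width halve, and the bottleneck ends at 5x5x5x320.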
S103, crop the image sequences in Dataset and their label masks, take out the voxel parts containing only kidney (or tumor), perform preprocessing P2, divide the data into a training set and a test set, and construct a deep network segmentation model M2 for tumor segmentation based on the three-dimensional gated residual fully convolutional network.
Preprocessing P2: resample all samples, their manual label masks and the tumor boundary masks to a standard voxel spacing of 3×0.78×0.78 mm (the median) using the same methods as in P1. Take out the VOI regions containing kidney (or tumor) according to the manual labels, and reset the pixels outside the kidney (or tumor) to 0. Take out voxel patches of size 48×128×128 (the median) from each VOI region by random cropping (50% overlap, i.e. a stride of (24,64,64)). Limit the CT values of each sample's sequence images to the range [-79, 304] (the 5%-95% range of all pixel values) to remove abnormal intensity values caused by certain substances, and, because of the way weights are initialized in the network, apply z-score normalization: subtract the mean from each pixel value and divide by the standard deviation. A U-shaped fully convolutional neural network is designed with the voxel patches as inputs, and the tumor shape branch network is formed by cascading gated convolution layers with 3x3x3 convolution layers, trilinear interpolation and 1x1x1 convolution layers.
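The edge-detection step that derives the tumor boundary mask from the label mask can be approximated as follows. This is a minimal numpy sketch: the patent does not specify the exact edge detector, so a 6-connected boundary definition is assumed, and `boundary_mask` is a hypothetical helper name.

```python
import numpy as np

def boundary_mask(mask):
    """Extract a one-voxel-wide boundary from a binary tumor label mask.

    A voxel is on the boundary if it belongs to the mask but at least one
    of its six face-adjacent neighbours does not. np.roll wraps around the
    array edges, so the mask is assumed not to touch the volume border.
    """
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)  # neighbour also inside?
    return (m & ~interior).astype(np.uint8)
```

For a 5x5x5 cube of foreground voxels this yields exactly its 98 surface voxels (5³ − 3³), i.e. the shell that the shape branch is trained to predict.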
As shown in FIG. 4, the three-dimensional gated residual fully convolutional network comprises a backbone network and a tumor shape branch network. The backbone network is a 3DUnet whose encoder and decoder are skip-connected between feature maps of the same resolution; the number of channels is then reduced by a 1x1x1 convolution, and a Softmax classifier finally outputs the probability map. The tumor shape branch network comprises three cascaded gated convolution layers together with 3x3x3 convolution layers, trilinear interpolation and 1x1x1 convolution layers: the outputs of the first- and second-stage deconvolution layers of the decoder, each passed through a 1x1x1 convolution layer, serve as the inputs of the first-stage gated convolution layer; the output of the first-stage gated convolution layer, passed through a 3x3x3 convolution layer and trilinear interpolation, together with the output of the third-stage deconvolution layer of the decoder passed through a 1x1x1 convolution layer, serve as the inputs of the second-stage gated convolution layer; the output of the second-stage gated convolution layer, passed through a 3x3x3 convolution layer and trilinear interpolation, together with the output of the fourth-stage deconvolution layer of the decoder passed through a 1x1x1 convolution layer, serve as the inputs of the third-stage gated convolution layer; the output of the third-stage gated convolution layer, passed through a 1x1x1 convolution layer, is fed to a sigmoid function that outputs the boundary prediction mask. The backbone encoder still adopts the settings of Table 1. The 3x3x3 convolution layers extract boundary features, the trilinear interpolation adjusts the size of the feature maps, and the 1x1x1 convolutions reduce the number of channels.
The three-dimensional gated residual fully convolutional network first passes through the downsampling and upsampling parts of the backbone encoder and decoder to extract feature information, and then extracts the tumor region shape through the shape branch network. The boundary map output by the shape branch is denoted s ∈ R^(H×W×C) and the feature map output by the backbone is denoted z ∈ R^(H×W×C); the two are concatenated, and a 1x1x1 convolution followed by softmax outputs the final prediction mask. By fusing the semantic features of the backbone network with the boundary features of the shape branch network, this patent can produce more accurate segmentation results.
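One common formulation of such a gated layer lets the decoder feature control how much of the shape-branch feature passes through. The numpy sketch below is illustrative only: the patent does not give per-layer equations, so `w_gate`/`b_gate` stand in for a learned 1x1x1 convolution over the concatenated features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_layer(shape_feat, decoder_feat, w_gate, b_gate):
    """Illustrative gated layer over channel-last 3D feature maps.

    shape_feat, decoder_feat: arrays of shape (D, H, W, C).
    w_gate: (2C, 1) weights of a 1x1x1 convolution applied to the
    concatenation; the single-channel result is squashed to (0, 1)
    and used as a spatial gate on the shape-branch feature.
    """
    cat = np.concatenate([shape_feat, decoder_feat], axis=-1)  # (D, H, W, 2C)
    gate = sigmoid(cat @ w_gate + b_gate)                      # (D, H, W, 1)
    return shape_feat * gate                                   # gated shape feature
```

Because the gate lies in (0, 1), the layer can only attenuate the shape feature, letting the decoder's semantic context suppress spurious boundary responses.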
S104, select a suitable optimization method, set the relevant hyperparameters, and train models M1 and M2 respectively.
Model M1 is trained on the dataset constructed in S102, and model M2 on the dataset constructed in S103. The loss is optimized with the Adam optimizer, and after each training run the model with the best average metric on the validation set is taken as the optimal result. The following hyperparameters are used: the batch size is set to 2, 300 batches make one epoch, and 150 epochs are run in total; the initial learning rate is set to 10^-3 and is automatically reduced by a factor of 0.1 at the 80th and 120th epochs; the momentum is set to 0.95, and the weight decay coefficient is fixed at 10^-4.
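The learning-rate schedule described above is a simple step schedule; it can be sketched as follows (`learning_rate` is an illustrative helper name, not from the patent):

```python
def learning_rate(epoch, base_lr=1e-3, drop_epochs=(80, 120), factor=0.1):
    """Step schedule from the training setup: start at 1e-3 and
    multiply by 0.1 at the 80th and 120th epochs."""
    lr = base_lr
    for e in drop_epochs:
        if epoch >= e:
            lr *= factor
    return lr
```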
S105, select CT image sequences from the test set and, after preprocessing, segment the kidney tumor regions with models M1 and M2.
Randomly select a plain-scan CT image of a kidney cancer case from the test set and take the CT slice images containing kidneys or tumors. Apply preprocessing P1 and predict with the trained M1 model, then stitch the results in the reverse order of the random cropping. From the stitched result, crop the voxel patches containing only kidney or tumor, apply preprocessing P2, predict with the trained M2 model, and stitch the segmentation results in the reverse order of the random cropping to form the final segmentation result.
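Stitching the overlapping patch predictions back in the reverse order of cropping amounts to accumulating and averaging. A minimal numpy sketch, assuming the patch positions were recorded at cropping time (`stitch_patches` is a hypothetical helper name):

```python
import numpy as np

def stitch_patches(patches, positions, out_shape):
    """Reassemble overlapping patch predictions into a full volume,
    averaging probabilities wherever patches overlap (the inverse of
    the 50%-overlap cropping used at prediction time)."""
    acc = np.zeros(out_shape, dtype=np.float32)
    cnt = np.zeros(out_shape, dtype=np.float32)
    for p, (z, y, x) in zip(patches, positions):
        dz, dy, dx = p.shape
        acc[z:z + dz, y:y + dy, x:x + dx] += p
        cnt[z:z + dz, y:y + dy, x:x + dx] += 1.0
    return acc / np.maximum(cnt, 1.0)  # avoid division by zero off-patch
```

Averaging in overlap regions smooths seams between adjacent patches before the final thresholding into a binary mask.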
TABLE 2 Evaluation metric values of different deep network models (%)
(Table 2 is provided as an image in the original patent.)
Table 2 shows the evaluation metrics of different deep network models, where the two-dimensional fully convolutional network result is obtained by stacking the per-slice segmentations, and 3DUnet and V-net adopt their original architectures. It can be seen that the segmentation accuracy of the proposed algorithm on both kidney and tumor is higher than that of the other compared algorithms; in particular, the quantitative metrics of tumor segmentation are markedly improved because a gated shape branch network is added for the tumor target.
The foregoing is merely illustrative of embodiments of the present invention and does not limit its scope; modifications and substitutions that a person skilled in the art would recognize fall within the scope of the present invention, which is defined by the appended claims.

Claims (4)

1. A CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model, characterized by comprising the following specific steps:
S101, acquiring images containing the kidneys from abdominal CT scans, taking out the slice images containing kidneys or tumors to form an image sequence, labeling the kidneys and tumors in each slice image with labeling software, generating the corresponding label masks, and constructing a dataset Dataset;
S102, performing preprocessing P1 on Dataset, dividing the preprocessed Dataset into an M1 training set and an M1 test set, and training and testing the constructed three-dimensional fully convolutional network 3DUnet to obtain a kidney segmentation deep network model M1;
S103, performing preprocessing P2 on Dataset, dividing the preprocessed Dataset into an M2 training set and an M2 test set, and training and testing the constructed three-dimensional gated residual fully convolutional network to obtain a deep network segmentation model M2 for tumor segmentation;
S104, after preprocessing P1 is applied to the image sequence to be segmented, segmenting the kidney region with M1 and stitching the segmentation results; cropping the stitched segmentation result, taking out the voxels containing only kidney or tumor, applying preprocessing P2, and segmenting the tumor region with M2;
in S102, the preprocessing P1 applied to Dataset specifically comprises: interpolating all slice images in Dataset and their label masks to a voxel spacing of the same resolution; randomly cropping each interpolated slice image and its label mask into small voxel patches, and normalizing the voxel patches; the resolution after interpolation being lower than before interpolation;
in S103, the preprocessing P2 applied to Dataset specifically comprises: interpolating all slice images in Dataset and their label masks to a voxel spacing of the same resolution; obtaining the tumor boundary from the interpolated label masks by edge detection; randomly cropping each interpolated slice image, its label mask and the tumor boundary into small voxel patches, and normalizing the voxel patches; the resolution after interpolation being higher than before interpolation;
the three-dimensional gated residual fully convolutional network comprising a backbone network and a tumor shape branch network, the backbone network being a 3DUnet, and the tumor shape branch network comprising three cascaded gated convolution layers.
2. The method for segmenting kidney tumors in CT images based on a cascade gate control 3DUnet model as set forth in claim 1, wherein in S101, the kidneys and tumors in the slice images are marked by using ITK-SNAP medical image marking software to generate corresponding marking masks, and the slice images and the corresponding marking masks form a Dataset.
3. The method for segmenting kidney tumors in CT images based on a cascade-gated 3DUnet model according to claim 1, wherein 3DUnet is constructed according to the size of voxel patches for obtaining segmentation masks of kidney regions.
4. The CT-image kidney tumor segmentation method based on the cascade-gated 3DUnet model according to claim 1, wherein a three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches, and the encoder and the decoder in the backbone network are skip-connected at feature maps of the same resolution; the tumor-shape branch network further comprises 3x3x3 convolution layers, trilinear interpolation layers and 1x1x1 convolution layers, wherein the output of the first-stage deconvolution layer of the decoder passes through a 1x1x1 convolution layer and serves as an input of the first-stage gating convolution layer; the output of the second-stage deconvolution layer of the decoder passes through a 1x1x1 convolution layer and serves as the other input of the first-stage gating convolution layer; the output of the first-stage gating convolution layer passes through a 3x3x3 convolution layer and a trilinear interpolation layer and serves as an input of the second-stage gating convolution layer; the output of the third-stage deconvolution layer of the decoder passes through a 1x1x1 convolution layer and serves as the other input of the second-stage gating convolution layer; the output of the second-stage gating convolution layer passes through a 3x3x3 convolution layer and a trilinear interpolation layer and serves as an input of the third-stage gating convolution layer; the output of the fourth-stage deconvolution layer of the decoder passes through a 1x1x1 convolution layer and serves as the other input of the third-stage gating convolution layer; and the output of the third-stage gating convolution layer passes through a 1x1x1 convolution layer and is input to a sigmoid function, and the sigmoid function outputs the final prediction mask.
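The data flow of the tumor-shape branch in claim 4 can be sketched numerically. This is a minimal, dependency-free sketch, not the patented implementation: the 1x1x1 convolutions are rendered as per-voxel channel mixes, the gating rule (a sigmoid gate computed from the decoder feature that modulates the shape-stream feature element-wise) is an assumed form the claim does not fully specify, nearest-neighbour upsampling stands in for trilinear interpolation, the 3x3x3 convolutions are omitted, and all tensor shapes and weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1x1x1(x, w):
    # x: (C_in, D, H, W); w: (C_out, C_in). A 1x1x1 convolution is a
    # per-voxel channel mixing, i.e. a matrix product over the channel axis.
    return np.einsum('oc,cdhw->odhw', w, x)

def upsample2x(x):
    # Stand-in for the trilinear interpolation layer: nearest-neighbour
    # 2x upsampling along the three spatial axes.
    return x.repeat(2, axis=1).repeat(2, axis=2).repeat(2, axis=3)

def gated_fuse(shape_feat, decoder_feat, w_gate):
    # Assumed gating rule: a sigmoid gate computed from the decoder
    # feature modulates the shape-stream feature element-wise.
    gate = sigmoid(conv1x1x1(decoder_feat, w_gate))
    return shape_feat * gate

rng = np.random.default_rng(0)
C = 4
# Decoder deconvolution outputs (channels already reduced to C by their
# 1x1x1 convolutions); the two coarsest are assumed to share a resolution.
d1 = rng.standard_normal((C, 4, 4, 4))
d2 = rng.standard_normal((C, 4, 4, 4))
d3 = rng.standard_normal((C, 8, 8, 8))
d4 = rng.standard_normal((C, 16, 16, 16))
w = rng.standard_normal((C, C)) * 0.1

# Stage 1: fuse the first- and second-stage decoder outputs.
s1 = gated_fuse(d1, d2, w)
# Stage 2: upsample the shape stream, fuse with the third-stage output.
s2 = gated_fuse(upsample2x(s1), d3, w)
# Stage 3: upsample again, fuse with the fourth-stage output.
s3 = gated_fuse(upsample2x(s2), d4, w)
# Final 1x1x1 convolution to one channel + sigmoid -> prediction mask.
mask = sigmoid(conv1x1x1(s3, rng.standard_normal((1, C)) * 0.1))
print(mask.shape)  # (1, 16, 16, 16)
```

Each gating stage thus consumes one shape-stream input and one 1x1x1-projected decoder input, which matches the two-inputs-per-stage wiring recited in the claim.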
CN202110141339.3A 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model Active CN112767407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110141339.3A CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Publications (2)

Publication Number Publication Date
CN112767407A CN112767407A (en) 2021-05-07
CN112767407B true CN112767407B (en) 2023-07-07

Family

ID=75704619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110141339.3A Active CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Country Status (1)

Country Link
CN (1) CN112767407B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469941B (en) * 2021-05-27 2022-11-08 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113436204A (en) * 2021-06-10 2021-09-24 中国地质大学(武汉) High-resolution remote sensing image weak supervision building extraction method
CN113436173B (en) * 2021-06-30 2023-06-27 陕西大智慧医疗科技股份有限公司 Abdominal multi-organ segmentation modeling and segmentation method and system based on edge perception
CN117237394B (en) * 2023-11-07 2024-02-27 万里云医疗信息科技(北京)有限公司 Multi-attention-based lightweight image segmentation method, device and storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035197A (en) * 2018-05-31 2018-12-18 东南大学 Kidney tumor segmentation method and system for contrast-enhanced CT images based on a three-dimensional convolutional neural network
CN109829918A (en) * 2019-01-02 2019-05-31 安徽工程大学 Liver image segmentation method based on a dense feature pyramid network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570432A (en) * 2019-08-23 2019-12-13 北京工业大学 CT image liver tumor segmentation method based on deep learning
CN110599500A (en) * 2019-09-03 2019-12-20 南京邮电大学 Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
WO2020077202A1 (en) * 2018-10-12 2020-04-16 The Medical College Of Wisconsin, Inc. Medical image segmentation using deep learning models trained with random dropout and/or standardized inputs
CN111311592A (en) * 2020-03-13 2020-06-19 中南大学 Three-dimensional medical image automatic segmentation method based on deep learning
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111563897A (en) * 2020-04-13 2020-08-21 北京理工大学 Breast nuclear magnetic image tumor segmentation method and device based on weak supervised learning
CN111627019A (en) * 2020-06-03 2020-09-04 西安理工大学 Liver tumor segmentation method and system based on convolutional neural network
CN111627024A (en) * 2020-05-14 2020-09-04 辽宁工程技术大学 Improved U-Net kidney tumor segmentation method
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112085743A (en) * 2020-09-04 2020-12-15 厦门大学 Image segmentation method for renal tumor
CN112258526A (en) * 2020-10-30 2021-01-22 南京信息工程大学 CT (computed tomography) kidney region cascade segmentation method based on dual attention mechanism

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020097100A1 (en) * 2018-11-05 2020-05-14 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
CN109598728B (en) * 2018-11-30 2019-12-27 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation device, diagnostic system, and storage medium

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Cascaded atrous dual attention U-Net for tumor segmentation; Yu-Cheng Liu et al.; Multimedia Tools and Applications; Vol. 80 (2021); 30007-30031 *
Fine segmentation of 3D brain tumors based on a cascaded convolutional network; Chu Jinghui et al.; Laser & Optoelectronics Progress; Vol. 56, No. 10; 75-84 *
Automatic 3D region segmentation of liver CT images based on a novel deep fully convolutional network; Sun Mingjian et al.; Chinese Journal of Biomedical Engineering; Vol. 37, No. 04; 385-393 *
Automatic segmentation of cystic kidneys in CT images based on a residual dual-attention U-Net model; Xu Hongwei et al.; Application Research of Computers; Vol. 37, No. 07; 2237-2240 *
Research progress on automatic organ segmentation based on deep learning; Guo Wen et al.; Chinese Medical Equipment Journal; Vol. 41, No. 01; 85-94 *
Research on deep learning algorithms for kidney segmentation in plain CT images; Xu Hongwei; China Master's Theses Full-text Database, Medicine and Health Sciences; No. 02 (2021); E076-27 *
CT segmentation of liver tumors combining deep learning with radiomics; Liu Yunpeng et al.; Journal of Image and Graphics; Vol. 25, No. 10; 2128-2141 *
Real-time semantic segmentation with gated multi-layer fusion; Zhang Canlong et al.; Journal of Computer-Aided Design & Computer Graphics; Vol. 32, No. 09; 1442-1449 *

Also Published As

Publication number Publication date
CN112767407A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112767407B (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
Xie et al. Dynamic adaptive residual network for liver CT image segmentation
CN111192245B (en) Brain tumor segmentation network and method based on U-Net network
Wang et al. Breast cancer detection using extreme learning machine based on feature fusion with CNN deep features
Ahmad et al. Deep belief network modeling for automatic liver segmentation
Arora et al. Deep feature–based automatic classification of mammograms
CN108898160B (en) Breast cancer histopathology grading method based on fusion of CNN and radiomics features
CN107886514A (en) Semantic segmentation method for masses in mammography images based on a deep residual network
Dutande et al. Deep residual separable convolutional neural network for lung tumor segmentation
CN112990344B (en) Multi-view classification method for pulmonary nodules
CN112785598A (en) Automatic ultrasonic breast tumor segmentation method based on an attention-enhanced improved U-shaped network
CN114998265A (en) Liver tumor segmentation method based on improved U-Net
Wang et al. A data augmentation method for fully automatic brain tumor segmentation
CN112348794A (en) Automatic ultrasonic breast tumor segmentation method based on an attention-enhanced U-shaped network
Akkar et al. Diagnosis of lung cancer disease based on back-propagation artificial neural network algorithm
Hong et al. Automatic liver and tumor segmentation based on deep learning and globally optimized refinement
Raj et al. Automatic psoriasis lesion segmentation from raw color images using deep learning
Chen et al. MS-FANet: multi-scale feature attention network for liver tumor segmentation
AU2016201298A1 (en) Computer analysis of mammograms
CN113487568A (en) Liver surface smoothness measuring method based on differential curvature
Mastouri et al. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images
Lv et al. An improved residual U-Net with morphological-based loss function for automatic liver segmentation in computed tomography
CN114565786A (en) Tomography image classification device and method based on channel attention mechanism
Zhang et al. ASE-Net: A tumor segmentation method based on image pseudo enhancement and adaptive-scale attention supervision module
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant