CN112767407A - CT image kidney tumor segmentation method based on cascade gating 3DUnet model - Google Patents

CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Info

Publication number
CN112767407A
Authority
CN
China
Prior art keywords
tumor
convolutional layer
kidney
segmentation
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110141339.3A
Other languages
Chinese (zh)
Other versions
CN112767407B (en)
Inventor
孙玉宝
吴敏
徐宏伟
刘青山
辛宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202110141339.3A
Publication of CN112767407A
Application granted
Publication of CN112767407B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformation in the plane of the image
    • G06T 3/40 - Scaling the whole image or part thereof
    • G06T 3/4007 - Interpolation-based scaling, e.g. bilinear interpolation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30084 - Kidney; Renal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30096 - Tumor; Lesion

Abstract

The invention discloses a CT image kidney tumor segmentation method based on a cascade gating 3DUnet model, which comprises the following steps: acquiring image sequences containing the kidneys from abdominal CT scans, labeling the kidneys and tumors in each image sequence, generating corresponding label masks, and constructing a data set; performing the P1 preprocessing operation on the Dataset and constructing a U-shaped deep network model M1 for segmenting the kidney (including the tumor); cropping the image sequences in the Dataset and the corresponding label masks, extracting the voxel parts that contain only the kidney (or tumor), performing the P2 preprocessing operation, and constructing a tumor segmentation deep network model M2 based on gated convolutional layers; training models M1 and M2 respectively; and segmenting the kidney tumor region by cascading models M1 and M2. By combining a two-stage cascaded segmentation model with gated convolutional layers, the invention constructs a deep network model for tumor segmentation that remains robust to shape changes of the cancerous kidney and can effectively segment tumors of different sizes.

Description

CT image kidney tumor segmentation method based on cascade gating 3DUnet model
Technical Field
The invention relates to the fields of computing, software, artificial intelligence and the like, and in particular to a CT image kidney tumor segmentation method based on a cascade gating 3DUnet model.
Background
Medical imaging technology has advanced greatly with the rapid development of science and technology. It integrates modern medicine, physics, electronic information and computer technology, and has become an indispensable tool in the medical field. Computed tomography (CT) is a widely used medical imaging modality that can accurately and comprehensively depict the detailed characteristics of human organs, tissues and lesions, allowing a professional physician to observe a lesion non-invasively, directly and clearly, and to formulate an effective treatment plan in time, thereby improving diagnostic efficiency and cure rates. The kidney is a key component of the urinary system and plays an important role in regulating acid-base balance and metabolism; with faster life rhythms and increasing work pressure, the incidence of various kidney diseases keeps rising. Renal tumor is a common kidney disease; kidney cancer, also called renal cell carcinoma, is a malignant tumor originating from the renal epithelium, with complex pathological types and highly variable clinical manifestations. Compared with other medical imaging modalities, CT images better present and distinguish the lesion characteristics of kidney cancer and have become an important basis for physicians in the initial diagnosis and follow-up of kidney diseases.
In clinical practice, accurate segmentation of the kidney and diseased regions is very important for disease diagnosis, functional assessment and treatment decisions. Early segmentation work was outlined manually by experienced physicians; this approach is highly subjective, inefficient and not reproducible, so it neither meets clinical requirements well nor satisfies current needs for quantitative diagnosis. With the development of modern science and technology, medical image segmentation by computer has become feasible, and researchers have begun to explore automatic segmentation methods. However, accurately and reliably segmenting the kidney in CT images presents several difficulties: CT images have low contrast, the boundaries between the kidney and adjacent organs and tissues are blurred, kidney shape varies between individuals, and water and air inside the kidney easily cause noise and cavities. For renal cancer patients, segmenting renal tumors in CT images is even more challenging because tumors vary greatly in size and have pixel values similar to normal kidney tissue, making their boundaries difficult to distinguish. Therefore, developing a fully automatic segmentation algorithm for kidney tumors in CT images is of practical research value.
At present, deep learning has achieved remarkable results in many fields. Its superiority benefits from the ability of convolutional neural networks to extract features automatically, which allows deep learning models to be applied to many different tasks; as the underlying theory continues to develop, this advantage becomes increasingly apparent, and deep learning has rapidly become a mainstream technology of the big-data era. In recent years, deep learning models for medical image segmentation have gradually emerged. Although convolutional neural networks can automatically extract effective features, problems remain in CT image kidney tumor segmentation, such as insufficient robustness to kidney shape changes and insufficient tumor segmentation accuracy. Establishing an accurate segmentation model for kidney tumors in CT images can provide a quantitative diagnostic basis for the clinic and assist physicians' decisions, and therefore has important clinical significance and good application prospects.
The purpose of the invention is as follows:
the invention aims to solve the technical problem of CT image kidney and tumor segmentation thereof, and provides a kidney tumor segmentation algorithm based on a cascade gating three-dimensional full convolution network 3DUnet model, so as to realize accurate segmentation of a kidney tumor region in a CT image sequence.
The technical scheme is as follows:
In order to solve the above technical problems, the invention provides a kidney tumor segmentation algorithm based on a cascade gating 3DUnet model, with the following technical scheme:
a CT image kidney tumor segmentation method based on a cascade gating 3DUnet model comprises the following specific steps:
s101, acquiring an image containing a kidney in abdominal CT scanning, taking out slice images containing the kidney or a tumor to form an image sequence, labeling the kidney and the tumor in each slice image by using labeling software, generating a corresponding labeling mask, and constructing a data set Dataset;
s102, performing P1 pretreatment on the Dataset, dividing the Dataset subjected to P1 pretreatment into an M1 training set and an M1 testing set, and training and testing the constructed 3DUnet to obtain a kidney segmentation depth network model M1;
s103, performing P2 pretreatment on the Dataset, dividing the Dataset after the P2 pretreatment into an M2 training set and an M2 testing set, and training and testing the constructed three-dimensional gating residual full convolution network to obtain a tumor segmentation depth network segmentation model M2;
s104, after the image sequence to be segmented is subjected to P1 preprocessing, a kidney region is segmented by using M1, and segmentation results are spliced; and (4) cutting the spliced segmentation result, taking out voxels only containing the kidney or the tumor, performing P2 pretreatment, and segmenting the tumor region by using M2.
Further, in step S101, the kidney and the tumor in each slice image are labeled using the ITK-SNAP medical image labeling software to generate the corresponding label masks, and the Dataset is formed from the slice images and their corresponding label masks.
Further, the P1 preprocessing of the Dataset in S102 specifically includes: interpolating all slice images in the Dataset and their corresponding label masks to the same voxel spacing; randomly cropping each interpolated slice image and its label mask into voxel patches, and normalizing the voxel patches; the resolution after interpolation is lower than before interpolation.
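A minimal sketch of this P1 preprocessing is given below; the target spacing, patch size and clipping range are taken from the detailed embodiment later in the description, while the use of scipy.ndimage.zoom and the helper names are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_p1(image, mask, spacing,
                  target_spacing=(3.22, 1.62, 1.62), clip=(-79.0, 304.0)):
    # Resample to a common, coarser voxel spacing: trilinear (order=1) for the
    # image, nearest neighbour (order=0) for the label mask.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    image = zoom(image.astype(np.float32), factors, order=1)
    mask = zoom(mask, factors, order=0)

    # Clip CT values to suppress outlier intensities, then z-score normalize.
    image = np.clip(image, *clip)
    image = (image - image.mean()) / (image.std() + 1e-8)
    return image, mask

def random_crop(image, mask, patch=(80, 160, 160)):
    # Randomly cut one voxel patch (assumes the volume is at least patch-sized);
    # overlapping sliding windows use fixed start offsets instead.
    starts = [np.random.randint(0, d - p + 1) for d, p in zip(image.shape, patch)]
    sl = tuple(slice(s, s + p) for s, p in zip(starts, patch))
    return image[sl], mask[sl]
```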
Further, the 3DUnet is constructed according to the size of the voxel patches and is used to obtain the segmentation mask of the kidney region.
Further, the P2 preprocessing of the Dataset in S103 specifically includes: interpolating all slice images in the Dataset and their corresponding label masks to the same voxel spacing; obtaining tumor boundaries by applying edge detection to the interpolated label masks; randomly cropping each interpolated slice image together with its label mask and tumor boundary into voxel patches, and normalizing the voxel patches; the resolution after interpolation is higher than before interpolation.
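The description does not fix a particular edge detector; the sketch below derives the tumor boundary from the label mask with a morphological gradient, which is one common choice, and assumes the KiTS19 convention that tumor voxels carry label 2.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def tumor_boundary_mask(label_mask, tumor_label=2):
    # One-voxel-wide tumor boundary obtained as a morphological gradient of the
    # tumor region (dilation minus erosion); the patent does not fix the edge
    # detector, and the tumor label value 2 follows the KiTS19 convention.
    tumor = (label_mask == tumor_label)
    boundary = binary_dilation(tumor) & ~binary_erosion(tumor)
    return boundary.astype(np.uint8)
```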
Further, a three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches, and it comprises a backbone network and a tumor shape branch network. The backbone network is a 3DUnet in which encoder and decoder feature maps of the same resolution are connected by skip connections. The tumor shape branch network comprises three cascaded gated convolutional layers, 3x3x3 convolutional layers, trilinear interpolation and 1x1x1 convolutional layers. The output of the first-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the first-stage gated convolutional layer; the output of the second-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the first-stage gated convolutional layer. The output of the first-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the second-stage gated convolutional layer; the output of the third-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the second-stage gated convolutional layer. The output of the second-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the third-stage gated convolutional layer; the output of the fourth-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the third-stage gated convolutional layer. The output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is fed together with the decoder output into a fully connected layer; the output of the fully connected layer passes through a 1x1x1 convolutional layer and a Softmax classifier in turn to output a probability map. The output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is also fed into a sigmoid function, which outputs the final prediction mask.
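The internal structure of a gated convolutional layer is not spelled out above. The PyTorch sketch below shows one plausible 3D formulation in the spirit of gated shape streams; the sigmoid-attention gate, the residual update and the class name are assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class GatedConvLayer3D(nn.Module):
    """One plausible 3D gated convolutional layer for the tumor shape branch."""

    def __init__(self, shape_channels, decoder_channels):
        super().__init__()
        # 1x1x1 convolution mapping decoder features to the shape-stream width
        self.reduce = nn.Conv3d(decoder_channels, shape_channels, kernel_size=1)
        # Gate: sigmoid attention map computed from the concatenated features
        self.gate = nn.Sequential(
            nn.Conv3d(2 * shape_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv3d(shape_channels, shape_channels, kernel_size=1)

    def forward(self, shape_feat, decoder_feat):
        # Both inputs must already share the same spatial size (the shape stream
        # is upsampled by trilinear interpolation before each gated layer).
        decoder_feat = self.reduce(decoder_feat)
        alpha = self.gate(torch.cat([shape_feat, decoder_feat], dim=1))
        # Gated residual update of the shape-stream features
        return self.refine(shape_feat * alpha + shape_feat)
```

In the branch described above, each such layer would receive the trilinearly upsampled shape-stream feature together with a 1x1x1-reduced decoder feature of matching spatial size.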
Compared with the prior art, the invention has the following advantages:
a two-stage segmentation model based on network cascade is provided, and the influence of class imbalance and small targets is reduced to a certain extent by adopting a random cutting strategy. Aiming at the problem that tumor boundaries are difficult to distinguish, 3DUnet is used as a main network, a tumor shape branch network is constructed based on a gated convolution layer, the tumor boundaries can be predicted, and therefore the tumor segmentation performance is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the prediction procedure of the method of the present invention;
FIG. 3 is a diagram of the M1 model architecture in accordance with the present invention;
FIG. 4 is a diagram showing the structure of the M2 model in the present invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantageous effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
In this embodiment, the kidney tumor segmentation algorithm based on the cascade-gated 3DUnet model provided by the present invention is used for segmenting a kidney tumor in a CT image, as shown in fig. 1 and 2, and includes the following specific steps:
s101, acquiring an image containing a kidney in abdominal CT scanning, taking out slice images containing the kidney or a tumor to form an image sequence, labeling the kidney and the tumor in each slice image by using labeling software, generating a corresponding labeling mask, and constructing a data set Dataset.
The data in this example come from the KiTS19 challenge dataset. The raw data are plain-scan CT sequence data, and the images and the corresponding manual annotation masks are provided in anonymized NIfTI format with shape (number of slices, height, width), where the number of slices corresponds to the axial view and the slice index increases from top to bottom; all patients were supine during image acquisition. When a patient has multiple scans, the one with the smallest slice thickness is selected; the slice thickness in the dataset ranges from 1 mm to 5 mm.
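As an illustration of reading such data, the snippet below loads one case with nibabel; the directory layout and file names follow the public KiTS19 release and are not part of the patent.

```python
import nibabel as nib
import numpy as np

# Directory layout and file names follow the public KiTS19 release.
case_dir = "kits19/data/case_00000"
img_nii = nib.load(f"{case_dir}/imaging.nii.gz")
seg_nii = nib.load(f"{case_dir}/segmentation.nii.gz")

volume = np.asarray(img_nii.dataobj, dtype=np.float32)  # (slices, height, width)
labels = np.asarray(seg_nii.dataobj, dtype=np.uint8)    # 0 background, 1 kidney, 2 tumor
spacing = img_nii.header.get_zooms()                    # voxel spacing in mm
print(volume.shape, spacing)
```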
S102, performing the P1 preprocessing operation on the Dataset, dividing the Dataset into a training set and a testing set, and constructing a deep network model M1 for kidney segmentation based on 3DUnet (as shown in figure 3).
P1 preprocessing: using trilinear interpolation, the sequence images of all samples are resampled to a standard voxel spacing of 3.22 × 1.62 × 1.62 mm (the lowest resolution), and the manual label masks of all samples are resampled to the same voxel spacing with nearest-neighbor interpolation. Voxel patches of size 80 × 160 × 160 (the median) are extracted from each sample by random cropping (50% overlap, i.e. a step size of (40, 80, 80)). The CT values of each sample's sequence images are clipped to the range [-79, 304] (the 5%-95% range of all pixel values) to remove abnormal intensity values caused by certain substances, and, owing to the way weights are initialized in the network, z-score normalization is applied: the mean is subtracted from each pixel value and the result is divided by the standard deviation, so that the image intensities lie in a range the CNN can process more easily. The 3DUnet is designed according to the size of the voxel patches and consists of an encoder and a decoder; the encoder parameter settings are shown in Table 1, and the decoder adopts a symmetric structure.
TABLE 1. Parameter settings of the encoder

Layer                      Kernel size / stride   Output size
Convolutional layer (x2)   3x3x3 / 1              80x160x160x30
Max pooling layer          1x2x2 / (1,2,2)        80x80x80x30
Convolutional layer (x2)   3x3x3 / 1              80x80x80x60
Max pooling layer          2x2x2 / 2              40x40x40x60
Convolutional layer (x2)   3x3x3 / 1              40x40x40x120
Max pooling layer          2x2x2 / 2              20x20x20x120
Convolutional layer (x2)   3x3x3 / 1              20x20x20x240
Max pooling layer          2x2x2 / 2              10x10x10x240
Convolutional layer (x2)   3x3x3 / 1              10x10x10x320
Max pooling layer          2x2x2 / 2              5x5x5x320
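A PyTorch sketch of an encoder built from Table 1 is given below; kernel sizes, strides and channel widths come from the table, while the normalization and activation choices (InstanceNorm3d, ReLU) and the class names are assumptions.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3x3 convolutions with stride 1; Table 1 fixes only kernel sizes,
    # strides and channel widths, so the normalization/activation are assumptions.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class Encoder3D(nn.Module):
    """Encoder following Table 1 for an 80x160x160 patch (outputs listed as DxHxWxC)."""

    def __init__(self, in_ch=1):
        super().__init__()
        self.stages = nn.ModuleList([
            double_conv(in_ch, 30),   # 80x160x160x30
            double_conv(30, 60),      # 80x80x80x60
            double_conv(60, 120),     # 40x40x40x120
            double_conv(120, 240),    # 20x20x20x240
            double_conv(240, 320),    # 10x10x10x320
        ])
        self.pools = nn.ModuleList([
            nn.MaxPool3d((1, 2, 2), stride=(1, 2, 2)),  # -> 80x80x80
            nn.MaxPool3d(2, stride=2),                  # -> 40x40x40
            nn.MaxPool3d(2, stride=2),                  # -> 20x20x20
            nn.MaxPool3d(2, stride=2),                  # -> 10x10x10
            nn.MaxPool3d(2, stride=2),                  # -> 5x5x5 (x320)
        ])

    def forward(self, x):
        skips = []
        for stage, pool in zip(self.stages, self.pools):
            x = stage(x)
            skips.append(x)  # skip connections to the symmetric decoder
            x = pool(x)
        return x, skips
```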
S103, cropping the image sequences in the Dataset and the corresponding label masks, extracting the voxel parts that contain only the kidney (or tumor), performing the P2 preprocessing operation, dividing the data into a training set and a testing set, and constructing a tumor segmentation deep network model M2 based on a three-dimensional gated residual fully convolutional network.
P2 preprocessing: taking a voxel spacing of 3 × 0.78 × 0.78 mm (the median) as the standard, the sequence images, manual label masks and tumor boundary masks of all samples are resampled to this spacing. The VOI region containing the kidney (or tumor) is extracted according to the manual annotation and the values of pixels outside the kidney (or tumor) are reset to 0; voxel patches of size 48 × 128 × 128 (the median) are then extracted from each VOI region by random cropping (50% overlap, step size (24, 64, 64)). The CT values of each sample's sequence images are clipped to the range [-79, 304] (the 5%-95% range of all pixel values) to remove abnormal intensity values caused by certain substances, and z-score normalization is applied owing to the way weights are initialized in the network: the mean is subtracted from each pixel value and the result is divided by the standard deviation. A U-shaped fully convolutional neural network is designed with the voxel patches as network input, and the tumor shape branch network is formed from cascaded gated convolutional layers, 1x1x1 convolutional layers, trilinear interpolation and 1x1x1 convolutional layers.
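The VOI extraction step can be sketched as follows; only the zeroing of voxels outside the kidney (or tumor) and the cropping itself come from the description, while the margin and the helper name are assumptions.

```python
import numpy as np

def extract_voi(image, kidney_mask, margin=4):
    # Keep only the voxels inside the kidney (or tumor), zero out the rest, and
    # crop the bounding box of the labelled region; the 4-voxel margin is an
    # assumption, since the patent does not specify one.
    image = np.where(kidney_mask > 0, image, 0)
    coords = np.argwhere(kidney_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, kidney_mask.shape)
    sl = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))
    return image[sl], kidney_mask[sl]
```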
As shown in fig. 4, the three-dimensional gated residual fully convolutional network includes a backbone network and a tumor shape branch network. The backbone network is a 3DUnet in which encoder and decoder feature maps of the same resolution are connected by skip connections; the number of channels is then reduced by a 1x1x1 convolution, and finally a probability map is output by a Softmax classifier. The tumor shape branch network comprises three cascaded gated convolutional layers, 3x3x3 convolutional layers, trilinear interpolation and 1x1x1 convolutional layers. The output of the first-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the first-stage gated convolutional layer; the output of the second-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the first-stage gated convolutional layer. The output of the first-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the second-stage gated convolutional layer; the output of the third-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the second-stage gated convolutional layer. The output of the second-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the third-stage gated convolutional layer; the output of the fourth-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the third-stage gated convolutional layer. The output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is fed together with the decoder output into a fully connected layer; the output of the fully connected layer passes through a 1x1x1 convolutional layer and a Softmax classifier in turn to output a probability map. The output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is also fed into a sigmoid function, which outputs the final prediction mask. The backbone encoder still uses the settings of Table 1. The 3x3x3 convolutional layers are used to extract boundary features, the trilinear interpolation layers adjust the feature map size, and the 1x1x1 convolutional layers reduce the number of channels.
For an input CT image, the three-dimensional gated residual fully convolutional network first extracts feature information through the down-sampling and up-sampling parts of the backbone encoder and decoder, and then extracts the shape of the tumor region through the tumor shape branch network. Denoting the boundary map output by the shape branch as s ∈ R^(H×W×C) and the feature map output by the backbone network as z ∈ R^(H×W×C), the feature map of the shape branch network is concatenated with the feature map output by the backbone network, and the final prediction mask is output through a 1x1x1 convolution and Softmax. By fusing the semantic features of the backbone network with the boundary features of the shape branch network, the method can produce more accurate segmentation results.
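A sketch of this fusion step is shown below: the boundary map s from the shape branch is concatenated with the backbone feature map z and mapped to class probabilities by a 1x1x1 convolution and Softmax; the channel counts and the two-class setting are assumptions.

```python
import torch
import torch.nn as nn

class FusionHead3D(nn.Module):
    """Concatenates the shape-branch boundary map s with the backbone feature
    map z and maps the result to class probabilities via 1x1x1 conv + Softmax."""

    def __init__(self, backbone_channels, num_classes=2):
        super().__init__()
        self.fuse = nn.Conv3d(backbone_channels + 1, num_classes, kernel_size=1)

    def forward(self, z, s):
        # s: boundary map from the shape branch, shape (N, 1, D, H, W)
        # z: decoder output of the backbone,     shape (N, C, D, H, W)
        logits = self.fuse(torch.cat([z, s], dim=1))
        return torch.softmax(logits, dim=1)
```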
S104, selecting a proper optimization learning method, setting related hyper-parameters, and respectively training the models M1 and M2.
Model M1 is trained on the data set constructed in S102, and model M2 on the data set constructed in S103. An Adam optimizer is used for loss optimization, and at the end of each training run the checkpoint with the best average metric on the validation set is kept as the optimal model. The following hyper-parameter settings are used: the batch size is set to 2, one epoch iterates over 300 batches, and 150 epochs are run in total; the initial learning rate is set to 10^-3 and is automatically reduced by a factor of 0.1 at the 80th and 120th epochs; the momentum is set to 0.95 and the weight decay coefficient is fixed at 10^-4.
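Expressed in PyTorch, the optimizer and schedule described above could look as follows; the stand-in model and the reading of the stated momentum of 0.95 as Adam's first-moment coefficient are assumptions.

```python
import torch

# Tiny stand-in module; in practice this would be the full M1 or M2 network.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,               # initial learning rate 10^-3
    betas=(0.95, 0.999),   # stated momentum of 0.95 read as the first-moment coefficient
    weight_decay=1e-4,     # weight decay coefficient 10^-4
)
# Multiply the learning rate by 0.1 at the 80th and 120th epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 120], gamma=0.1)

for epoch in range(150):          # 150 epochs in total
    for step in range(300):       # 300 batches per epoch, batch size 2
        # images, targets = next(train_iter)
        # loss = criterion(model(images), targets)
        # optimizer.zero_grad(); loss.backward(); optimizer.step()
        pass
    scheduler.step()
    # Evaluate on the validation set and keep the checkpoint with the best
    # average metric, as described above.
```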
S105, after an arbitrarily selected CT image sequence from the test set is preprocessed, the kidney tumor region is segmented by models M1 and M2.
A plain-scan CT image of a renal cancer case is randomly selected from the test set, the CT slice images containing the kidney or tumor are selected and P1 preprocessing is performed; the trained M1 model is used for prediction, and the results are stitched together in the reverse order of the random cropping. Voxel patches containing only the kidney or tumor are then cropped out according to the stitched result, P2 preprocessing is performed, the trained M2 model is used for prediction, and the segmentation results are again stitched in the reverse order of the random cropping to form the final segmentation result.
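The patch stitching can be sketched as below; averaging overlapping voxels is an assumption, since the description only states that the patches are spliced back in the reverse order of the random cropping.

```python
import numpy as np

def stitch_patches(patch_probs, starts, volume_shape):
    # Reassemble overlapping patch predictions into a full volume in the reverse
    # order of the random cropping; overlapping voxels are averaged here, which
    # is an assumption (the description does not state how overlaps are merged).
    acc = np.zeros(volume_shape, dtype=np.float32)
    count = np.zeros(volume_shape, dtype=np.float32)
    for prob, (z, y, x) in zip(patch_probs, starts):
        d, h, w = prob.shape
        acc[z:z + d, y:y + h, x:x + w] += prob
        count[z:z + d, y:y + h, x:x + w] += 1.0
    return acc / np.maximum(count, 1.0)
```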
TABLE 2. Evaluation metric values (%) of different deep network models
[Table 2 is reproduced as an image in the original publication (reference BDA0002928792860000061) and its values are not available as text.]
Table 2 shows the evaluation metrics of different deep network models, where the result of the two-dimensional fully convolutional network is obtained by stacking the segmentation results of the individual slice images, and 3DUnet and V-Net use their original model structures. It can be seen that the segmentation accuracy of the proposed algorithm for both the kidney and the tumor is higher than that of the other compared algorithms, especially for the tumor target: thanks to the added gated tumor shape branch network, the quantitative metrics of tumor segmentation are significantly improved.
The above description is only one embodiment of the present invention, but the scope of the present invention is not limited thereto. Any modification or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention falls within the protection scope of the present invention; therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A CT image kidney tumor segmentation method based on a cascade gating 3DUnet model is characterized by comprising the following specific steps:
s101, acquiring an image containing a kidney in abdominal CT scanning, taking out slice images containing the kidney or a tumor to form an image sequence, labeling the kidney and the tumor in each slice image by using labeling software, generating a corresponding labeling mask, and constructing a data set Dataset;
s102, performing P1 preprocessing on the Dataset, dividing the preprocessed Dataset into an M1 training set and an M1 testing set, and training and testing the constructed three-dimensional fully convolutional network 3DUnet to obtain the kidney segmentation deep network model M1;
s103, performing P2 preprocessing on the Dataset, dividing the preprocessed Dataset into an M2 training set and an M2 testing set, and training and testing the constructed three-dimensional gated residual fully convolutional network to obtain the tumor segmentation deep network model M2;
s104, after the image sequence to be segmented is subjected to P1 preprocessing, segmenting the kidney region using M1 and stitching the segmentation results; cropping the stitched segmentation result to extract the voxels containing only the kidney or the tumor, performing P2 preprocessing, and segmenting the tumor region using M2.
2. The method of claim 1, wherein in step S101, the kidney and the tumor in the slice image are labeled by ITK-SNAP medical image labeling software to generate a corresponding labeling mask, and a Dataset is formed by the slice image and the corresponding labeling mask.
3. The CT image kidney tumor segmentation method based on the cascade gating 3DUnet model according to claim 1, wherein the P1 preprocessing of the Dataset in S102 specifically includes: interpolating all slice images in the Dataset and their corresponding label masks to the same voxel spacing; randomly cropping each interpolated slice image and its corresponding label mask to obtain voxel patches, and normalizing the voxel patches; the resolution after interpolation is lower than before interpolation.
4. The method of claim 3, wherein the 3DUnet is constructed according to the size of the voxel patches and is used to obtain the segmentation mask of the kidney region.
5. The CT image kidney tumor segmentation method based on the cascade gating 3DUnet model according to claim 1, wherein the P2 preprocessing of the Dataset in S103 specifically includes: interpolating all slice images in the Dataset and their corresponding label masks to the same voxel spacing; obtaining tumor boundaries by applying edge detection to the interpolated label masks; randomly cropping each interpolated slice image together with its corresponding label mask and tumor boundary to obtain voxel patches, and normalizing the voxel patches; the resolution after interpolation is higher than before interpolation.
6. The CT image kidney tumor segmentation method based on the cascade gating 3DUnet model according to claim 5, wherein a three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches, and the three-dimensional gated residual fully convolutional network comprises a backbone network and a tumor shape branch network; the backbone network is a 3DUnet in which encoder and decoder feature maps of the same resolution are connected by skip connections; the tumor shape branch network comprises three cascaded gated convolutional layers, 3x3x3 convolutional layers, trilinear interpolation and 1x1x1 convolutional layers; the output of the first-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the first-stage gated convolutional layer; the output of the second-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the first-stage gated convolutional layer; the output of the first-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the second-stage gated convolutional layer; the output of the third-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the second-stage gated convolutional layer; the output of the second-stage gated convolutional layer, after passing through a 3x3x3 convolutional layer and trilinear interpolation, serves as one input of the third-stage gated convolutional layer; the output of the fourth-stage deconvolution layer of the decoder, after passing through a 1x1x1 convolutional layer, serves as the other input of the third-stage gated convolutional layer; the output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is fed together with the decoder output into a fully connected layer, and the output of the fully connected layer passes through a 1x1x1 convolutional layer and a Softmax classifier in turn to output a probability map; and the output of the third-stage gated convolutional layer, after passing through a 1x1x1 convolutional layer, is fed into a sigmoid function, which outputs the final prediction mask.
CN202110141339.3A 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model Active CN112767407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110141339.3A CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110141339.3A CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Publications (2)

Publication Number Publication Date
CN112767407A true CN112767407A (en) 2021-05-07
CN112767407B CN112767407B (en) 2023-07-07

Family

ID=75704619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110141339.3A Active CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Country Status (1)

Country Link
CN (1) CN112767407B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436204A (en) * 2021-06-10 2021-09-24 中国地质大学(武汉) High-resolution remote sensing image weak supervision building extraction method
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN113469941A (en) * 2021-05-27 2021-10-01 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN117237394A (en) * 2023-11-07 2023-12-15 万里云医疗信息科技(北京)有限公司 Multi-attention-based lightweight image segmentation method, device and storage medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035197A (en) * 2018-05-31 2018-12-18 东南大学 CT contrastographic picture tumor of kidney dividing method and system based on Three dimensional convolution neural network
CN109829918A (en) * 2019-01-02 2019-05-31 安徽工程大学 A kind of liver image dividing method based on dense feature pyramid network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570432A (en) * 2019-08-23 2019-12-13 北京工业大学 CT image liver tumor segmentation method based on deep learning
CN110599500A (en) * 2019-09-03 2019-12-20 南京邮电大学 Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
WO2020077202A1 (en) * 2018-10-12 2020-04-16 The Medical College Of Wisconsin, Inc. Medical image segmentation using deep learning models trained with random dropout and/or standardized inputs
US20200143934A1 (en) * 2018-11-05 2020-05-07 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
CN111311592A (en) * 2020-03-13 2020-06-19 中南大学 Three-dimensional medical image automatic segmentation method based on deep learning
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111563897A (en) * 2020-04-13 2020-08-21 北京理工大学 Breast nuclear magnetic image tumor segmentation method and device based on weak supervised learning
CN111627019A (en) * 2020-06-03 2020-09-04 西安理工大学 Liver tumor segmentation method and system based on convolutional neural network
CN111627024A (en) * 2020-05-14 2020-09-04 辽宁工程技术大学 U-net improved kidney tumor segmentation method
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112085743A (en) * 2020-09-04 2020-12-15 厦门大学 Image segmentation method for renal tumor
CN112258526A (en) * 2020-10-30 2021-01-22 南京信息工程大学 CT (computed tomography) kidney region cascade segmentation method based on dual attention mechanism
US20210241027A1 (en) * 2018-11-30 2021-08-05 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, diagnosis system, storage medium, and computer device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035197A (en) * 2018-05-31 2018-12-18 东南大学 CT contrastographic picture tumor of kidney dividing method and system based on Three dimensional convolution neural network
WO2020077202A1 (en) * 2018-10-12 2020-04-16 The Medical College Of Wisconsin, Inc. Medical image segmentation using deep learning models trained with random dropout and/or standardized inputs
US20200143934A1 (en) * 2018-11-05 2020-05-07 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
US20210241027A1 (en) * 2018-11-30 2021-08-05 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, diagnosis system, storage medium, and computer device
CN109829918A (en) * 2019-01-02 2019-05-31 安徽工程大学 A kind of liver image dividing method based on dense feature pyramid network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570432A (en) * 2019-08-23 2019-12-13 北京工业大学 CT image liver tumor segmentation method based on deep learning
CN110599500A (en) * 2019-09-03 2019-12-20 南京邮电大学 Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111311592A (en) * 2020-03-13 2020-06-19 中南大学 Three-dimensional medical image automatic segmentation method based on deep learning
CN111563897A (en) * 2020-04-13 2020-08-21 北京理工大学 Breast nuclear magnetic image tumor segmentation method and device based on weak supervised learning
CN111627024A (en) * 2020-05-14 2020-09-04 辽宁工程技术大学 U-net improved kidney tumor segmentation method
CN111627019A (en) * 2020-06-03 2020-09-04 西安理工大学 Liver tumor segmentation method and system based on convolutional neural network
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112085743A (en) * 2020-09-04 2020-12-15 厦门大学 Image segmentation method for renal tumor
CN112258526A (en) * 2020-10-30 2021-01-22 南京信息工程大学 CT (computed tomography) kidney region cascade segmentation method based on dual attention mechanism

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
YU-CHENG LIU 等: "Cascaded atrous dual attention U-Net for tumor segmentation", 《MULTIMEDIA TOOLS AND APPLICATIONS》 *
刘云鹏 et al.: "Liver tumor CT segmentation combining deep learning and radiomics", 《中国图象图形学报》 *
孙明建 et al.: "Automatic three-dimensional region segmentation of liver CT images based on a novel deep fully convolutional network", 《中国生物医学工程学报》 *
张灿龙 et al.: "Real-time semantic segmentation with gated multi-layer fusion", 《计算机辅助设计与图形学学报》 *
徐宏伟 et al.: "Automatic segmentation of cystic kidneys in CT images based on a residual dual-attention U-Net model", 《计算机应用研究》 *
徐宏伟: "Research on deep learning algorithms for kidney segmentation in plain-scan CT images", 《中国优秀硕士学位论文全文数据库医药卫生科技辑》 *
褚晶辉 et al.: "Fine segmentation of three-dimensional brain tumors based on a cascaded convolutional network", 《激光与光电子学进展》 *
郭雯 et al.: "Research progress on automatic organ segmentation based on deep learning", 《医疗卫生装备》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469941A (en) * 2021-05-27 2021-10-01 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113469941B (en) * 2021-05-27 2022-11-08 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113436204A (en) * 2021-06-10 2021-09-24 中国地质大学(武汉) High-resolution remote sensing image weak supervision building extraction method
CN113436173A (en) * 2021-06-30 2021-09-24 陕西大智慧医疗科技股份有限公司 Abdomen multi-organ segmentation modeling and segmentation method and system based on edge perception
CN113436173B (en) * 2021-06-30 2023-06-27 陕西大智慧医疗科技股份有限公司 Abdominal multi-organ segmentation modeling and segmentation method and system based on edge perception
CN117237394A (en) * 2023-11-07 2023-12-15 万里云医疗信息科技(北京)有限公司 Multi-attention-based lightweight image segmentation method, device and storage medium
CN117237394B (en) * 2023-11-07 2024-02-27 万里云医疗信息科技(北京)有限公司 Multi-attention-based lightweight image segmentation method, device and storage medium

Also Published As

Publication number Publication date
CN112767407B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Xie et al. Dynamic adaptive residual network for liver CT image segmentation
CN107230206B (en) Multi-mode data-based 3D pulmonary nodule segmentation method for hyper-voxel sequence lung image
CN112767407B (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
CN111192245A (en) Brain tumor segmentation network and method based on U-Net network
Anand Segmentation coupled textural feature classification for lung tumor prediction
Li et al. DenseX-net: an end-to-end model for lymphoma segmentation in whole-body PET/CT images
CN112927255A (en) Three-dimensional liver image semantic segmentation method based on context attention strategy
CN111179237A (en) Image segmentation method and device for liver and liver tumor
KR20120041468A (en) System for detection of interstitial lung diseases and method therefor
Dutande et al. Deep residual separable convolutional neural network for lung tumor segmentation
Hong et al. Automatic liver and tumor segmentation based on deep learning and globally optimized refinement
Midya et al. Computerized diagnosis of liver tumors from CT scans using a deep neural network approach
CN112348826B (en) Interactive liver segmentation method based on geodesic distance and V-net
Mastouri et al. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images
Dandıl et al. A Mask R-CNN based Approach for Automatic Lung Segmentation in Computed Tomography Scans
Pocė et al. Pancreas segmentation in CT images: state of the art in clinical practice
Zhang et al. ASE-Net: A tumor segmentation method based on image pseudo enhancement and adaptive-scale attention supervision module
CN113850788A (en) System for judging bladder cancer muscle layer infiltration state and application thereof
Sahu et al. False positives reduction in pulmonary nodule detection using a connected component analysis-based approach
Ifty et al. Implementation of liver segmentation from computed tomography (ct) images using deep learning
Johora et al. LUNG CANCER DETECTION USING MARKER-CONTROLLED WATERSHED WITH SVM
Balaji Generative deep belief model for improved medical image segmentation
Kareem et al. Effective classification of medical images using image segmentation and machine learning
Nadeem et al. Automated detection of ribs in chest CT scans and assessment of changes in their morphology between Total Lung Capacity (TLC) and Residual Volume (RV)
CN112184728B (en) Mammary gland blood vessel automatic segmentation method based on magnetic resonance image

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant