CN111640121A - Rectum CT image tumor segmentation method based on improved U-net - Google Patents


Info

Publication number: CN111640121A
Application number: CN202010350024.5A
Authority: CN (China)
Prior art keywords: image, net, network, rectal, tumor
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventors: 郑标, 蔡晨晓, 刘静波, 许璟, 马磊, 黄杰, 周燕
Current Assignee: Nanjing University of Science and Technology
Original Assignee: Nanjing University of Science and Technology
Priority/Filing date: 2020-04-28
Publication date: 2020-09-08
Application filed by Nanjing University of Science and Technology; priority to CN202010350024.5A

(The legal status, the assignee list, and the priority date are assumptions, not legal conclusions; Google has not performed a legal analysis and makes no representation as to their accuracy.)


Classifications

    • G06T 7/11: Region-based segmentation (G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection)
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F 18/00 Pattern recognition)
    • G06N 3/045: Combinations of networks (G06N 3/00 Computing arrangements based on biological models; G06N 3/04 Neural network architecture)
    • G06N 3/08: Learning methods (G06N 3/02 Neural networks)
    • G06T 5/30: Erosion or dilatation, e.g. thinning (G06T 5/20 Image enhancement or restoration using local operators)
    • G06T 7/0012: Biomedical image inspection (G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 7/136: Segmentation; edge detection involving thresholding (G06T 7/10 Segmentation; edge detection)
    • G06T 2207/30028: Colon; small intestine (G06T 2207/30004 Biomedical image processing)
    • G06T 2207/30096: Tumor; lesion (G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rectal CT image tumor segmentation method based on an improved U-net, which comprises the following steps: cropping the rectal region from a rectal CT image and preprocessing it to obtain a data set; expanding the data set with a data augmentation technique based on random elastic deformation; training a YOLOv3 neural network to detect the rectal region and judge whether a tumor region exists in the CT image; optimizing the U-net segmentation model with an attention mechanism and a residual learning structure; and, according to the detection results of the YOLOv3 network, feeding the CT images that contain a tumor region into the improved U-net model for training, thereby segmenting the shape of the rectal tumor region. Compared with the traditional U-net segmentation network, the method reduces the amount of computation in the rectal tumor segmentation task, improves the utilization of the information in the original data set, and achieves higher segmentation accuracy.

Description

Rectum CT image tumor segmentation method based on improved U-net
Technical Field
The invention belongs to the field of medical image segmentation, and particularly relates to a rectal computed tomography (CT) image tumor segmentation method based on an improved U-net.
Background
Rectal cancer is one of the common malignant tumors in China; both its incidence and its mortality rank fifth and are rising year by year. Studies have shown that segmenting the tumor region with a suitable imaging technique is of great importance and provides a key reference for subsequent clinical treatment. Traditional image interpretation relies on a physician studying and reading the scans: it is time-consuming and labor-intensive, depends heavily on experience and on up-to-date technical knowledge, and can hardly meet the demand for fast, high-volume clinical diagnosis. A method that accurately segments the rectal tumor region, and thereby improves the efficiency and accuracy of diagnosis, is therefore of great social significance.
In recent years, deep learning represented by convolutional neural networks (CNNs) has emerged as a new class of machine learning models and, because it requires no hand-crafted features and achieves high accuracy, has been applied to rectal tumor segmentation. The paper "Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric MR" classifies every pixel of a rectal MRI image with a CNN and obtains good segmentation results; however, pixel-by-pixel classification is highly redundant and time-consuming, because neighboring pixels are strongly correlated. The paper "A fully convolutional networks (FCNs)-based segmentation method for rectal tumors on T2-weighted magnetic resonance images" segments the rectal tumor region with a fully convolutional network, which improves efficiency over pixel-wise classification, but it is prone to false-positive misdiagnosis and its overall accuracy remains limited.
Disclosure of Invention
The invention aims to provide a rectal CT image tumor segmentation method based on an improved U-net.
The technical solution that realizes the purpose of the invention is as follows: a rectal CT image tumor segmentation method based on an improved U-net, comprising the following steps:
step 1, cropping the rectal region from a rectal CT image using an image morphology method, and preprocessing the cropped image data to obtain a data set;
step 2, expanding the data set with a data augmentation technique based on random elastic deformation;
step 3, training a YOLOv3 neural network with the expanded data set, so as to detect the rectal region and judge whether a tumor region exists in the CT image;
step 4, optimizing the original U-net segmentation model according to an attention mechanism and a residual learning structure, thereby obtaining an improved U-net model;
and step 5, according to the detection results of the YOLOv3 network, feeding the CT images containing a tumor region into the improved U-net model for training, thereby segmenting the shape of the rectal tumor region.
Compared with the prior art, the invention has the following notable advantages: (1) the rectal region is extracted from the CT image, which shrinks the region to be processed, lowers the difficulty of segmentation, reduces the amount of computation and the running time of the algorithm, and improves segmentation efficiency; (2) the data augmentation yields additional data that are (approximately) independent and identically distributed with the original data, so the deep network can learn more invariances from the training data, improving the segmentation system's utilization of the information in the original data set; (3) the added YOLOv3 detection network markedly lowers the false-positive misdiagnosis rate for the rectal tumor region; (4) an attention mechanism is added to the U-net segmentation network, so that the network weights the features during training, suppresses the response of irrelevant regions, and concentrates on the tumor region to be segmented; in addition, the residual learning idea is introduced, which allows a deeper network with greater expressive power and, combined with the 1 × 1 convolutional layers, strengthens the nonlinearity of the network so that the information of each pixel is expressed more accurately, improving segmentation performance.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is the overall flow chart of the segmentation method of the present invention.
Fig. 2 is a schematic diagram of rectal tumor region extraction in the present invention.
Fig. 3 is a schematic diagram of the structure of the Darknet-53 network in the present invention.
Fig. 4 is a schematic diagram of the structure of the YOLOv3 detection network in the present invention.
Fig. 5 is a schematic diagram of the structure of the U-net network in the present invention.
Fig. 6 is a schematic diagram of the structure of the attention mechanism in the present invention.
Fig. 7 is a schematic diagram of the structure of the residual network in the present invention.
Fig. 8 is a schematic diagram of the structure of the improved U-net network in the present invention.
Detailed Description
As shown in Fig. 1, the invention relates to a rectal CT image tumor segmentation method based on an improved U-net, comprising the following steps:
step 1, cropping the rectal region from a rectal CT image using an image morphology method, and preprocessing the cropped image to obtain a data set;
step 2, expanding the data set with a data augmentation technique based on random elastic deformation;
step 3, training a YOLOv3 neural network with the obtained data set, so as to detect the rectal region and judge whether a tumor region exists in the CT image;
step 4, optimizing the original U-net segmentation model according to an attention mechanism and a residual learning structure, thereby obtaining an improved U-net model;
and step 5, according to the detection results of the YOLOv3 network, feeding the CT images containing a tumor region into the improved U-net model for training, thereby segmenting the specific shape of the rectal tumor region.
Further, the method for extracting the rectal region from the CT image in step 1 specifically comprises the following steps:
step 1.1: binarizing the sliced rectal CT image;
step 1.2: dilating the binarized slice image to obtain a relatively complete contour;
step 1.3: finding the maximum circumscribed rectangle of the contour, obtaining its center coordinates, and cropping the rectal region around that center;
step 1.4: applying Gaussian filtering to the cropped rectal region to improve the signal-to-noise ratio of the image, and labeling the corresponding images with the labelme software to produce the data set.
Further, the expansion of the rectal CT image data set in step 2 with the data augmentation technique based on random elastic deformation specifically comprises:
step 2.1: dividing the input image into an n × n grid and randomly displacing the grid points that do not lie on the image edge, sampling the displacement vectors from a Gaussian distribution whose standard deviation is given in pixels;
step 2.2: computing the displacements of the remaining pixels by bicubic interpolation, which produces a smooth deformation of the grid and thereby expands the image data set.
Further, in step 3 the YOLOv3 neural network is trained with the obtained data set to detect the rectal region and judge whether a tumor region exists in the CT image, specifically:
step 3.1: feeding the image data set into a Darknet-53 network and extracting image features with its convolutional and residual layers;
step 3.2: constructing detectors at three different scales and predicting at each scale from the obtained multi-layer feature maps, thereby judging whether a tumor region exists in the CT image.
Further, in step 4 the original U-net segmentation model is optimized according to the attention mechanism and the residual learning structure to obtain the improved U-net model, which then segments the specific shape of the rectal tumor region, as follows:
step 4.1: feeding the rectal CT image data set containing tumor regions into the U-net network, and extracting shallow features for segmentation and deep features for localization with the contracting network and the expanding network of the U-net, respectively;
step 4.2: screening each feature for importance according to the attention mechanism, i.e. assigning it a weight, thereby improving the original U-net structure; the resulting feature output is

att^l = ψ^T(σ_1(ψ^T x^l + ψ^T g))    (1)

where att^l is the feature output, ψ^T denotes a 1 × 1 convolution, σ_1 is the ReLU activation function, x^l is the input feature, g is the gating input, the superscript l refers to the l-th network layer, and the superscript T denotes matrix transposition; the weight of each feature is

α^l = σ_3(σ_2(att^l))    (2)

where α^l is the weight of each feature, σ_3 is a resampling function, σ_2 is the Sigmoid activation function, and att^l is the feature output of equation (1);
step 4.3: adding skip connections to the output of each layer of the original U-net network according to the residual learning structure, thereby improving the original U-net structure; the resulting residual output is

H(y) = F(y) + y    (3)

where H(y) is the residual output, F(y) is the convolutional output, and y is the input of a single neural network unit;
step 4.4: according to the obtained feature maps, performing a binary classification of all pixels in the image to determine the rectal tumor region and the non-tumor region.
The present invention will be described in detail with reference to the following examples and drawings.
Examples
With reference to Fig. 1, a rectal CT image tumor segmentation method based on an improved U-net comprises the following steps:
Step 1: using an image morphology method, crop the rectal region from the rectal CT image and preprocess the cropped image to obtain a data set, specifically:
step 1.1: binarize the sliced rectal CT image;
step 1.2: dilate the binarized slice image to obtain a relatively complete contour;
step 1.3: find the maximum circumscribed rectangle of the contour, obtain its center coordinates, and crop the rectal region around that center;
step 1.4: apply Gaussian filtering to the cropped rectal region to improve the signal-to-noise ratio of the image (the extraction of the rectal tumor region is illustrated in Fig. 2), and label the corresponding images with the labelme software to produce the data set.
Step 2: expand the data set with the data augmentation technique based on random elastic deformation, specifically:
step 2.1: divide the input image into an n × n grid and randomly displace the grid points that do not lie on the image edge, sampling the displacement vectors from a Gaussian distribution whose standard deviation is given in pixels;
step 2.2: compute the displacements of the remaining pixels by bicubic interpolation, which produces a smooth deformation of the grid and thereby expands the image data set.
Step 3: train the YOLOv3 neural network with the obtained data set, so as to detect the rectal region and judge whether a tumor region exists in the CT image, specifically:
step 3.1: feed the image data set into a Darknet-53 network and extract image features with its convolutional and residual layers, as follows:
the Darknet-53 classification network extracts multi-layer features from 3-channel 416 × 416 images; its structure, shown in Fig. 3, consists mainly of alternating 3 × 3 and 1 × 1 convolutional layers and residual layers.
Step 3.2: constructing detectors with three different scales, and respectively predicting on the three scales by using the obtained multilayer characteristic diagram so as to judge whether a tumor region exists in the CT image, wherein the method specifically comprises the following steps:
three detectors are constructed and used for respectively predicting on three scales, the receptive fields of feature maps of the three scales are respectively 13 × 13, 26 × 26, 52 × 52 and 13 × 13 are large, the detectors are used for detecting medium and large targets, and the last two scales 26 × 26 and 52 × 52 can find up-sampling features and finer-grained features in early feature mapping and are respectively used for extracting medium and small targets. A schematic diagram of the structure of the YOLOv3 detection network is shown in fig. 4.
Step 4: optimize the original U-net segmentation model according to the attention mechanism and the residual learning structure, thereby obtaining the improved U-net model, specifically:
step 4.1: feed the rectal CT image data set containing tumor regions into the U-net network, and extract shallow features for segmentation and deep features for localization with the contracting network and the expanding network, respectively, as follows:
the original U-net network, shown schematically in Fig. 5, has 23 convolutional layers in total. The contracting network on the left contains 4 downsampling operations, each consisting of two 3 × 3 convolutions and one 2 × 2 pooling operation, so that the image size is halved and the number of feature channels is doubled. The expanding network on the right contains 4 upsampling operations, each followed by two 3 × 3 convolutions, so that the image is enlarged to twice its size at every step, eventually restoring the size of the original input image, while the number of feature channels is halved. Between the contracting path and the expanding path, crop-and-copy connections crop the low-level feature maps of the contracting path and copy them to the expanding path, where they are fused with the corresponding high-level feature maps.
step 4.2: screen each feature for importance according to the attention mechanism, i.e. assign it a weight, thereby improving the original U-net network structure, as follows:
the schematic structure of the attention mechanism is shown in FIG. 6, first, x is alignedlAnd g, performing 1 × 1 convolution, performing point-by-point addition on the outputs of the two and activating through a ReLU function, performing 1 × 1 convolution on the outputs and activating through a Sigmoid function, and finally resampling the activation values to obtain the weight αlWeight αlAnd input xlAnd multiplying to obtain the weighted characteristic value. The characteristic outputs obtained are:
attl=ψT1TxlTg)) (1)
in the formula, attlFor characteristic output, #TIs a convolution of 1 × 1, σ1For ReLU activation function, xlInputting features, wherein g is gating input, a superscript l corresponds to a layer I network, a superscript T is matrix transposition, and the weight of each feature is as follows:
αl=σ32(attl)) (2)
in the formula, αlFor the weight of each feature, σ3As a resampling function, σ2Att for Sigmoid activation functionlOutputting the characteristics;
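A minimal PyTorch sketch of the attention gate of equations (1)-(2). Note that the equations write all three 1 × 1 convolutions with the same symbol ψ^T; in the sketch they are three separate 1 × 1 convolutions, following the usual attention-gate design, which is an interpretation rather than something the patent fixes. Channel sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Attention gate: weight the skip features x^l by a map computed from
    x^l and the coarser gating signal g (equations (1) and (2))."""

    def __init__(self, x_ch: int, g_ch: int, inter_ch: int):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, kernel_size=1)  # psi^T x^l
        self.phi_g = nn.Conv2d(g_ch, inter_ch, kernel_size=1)    # psi^T g
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # outer psi^T

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Bring the gating term to x's spatial size, then eq. (1): sigma_1 = ReLU.
        g_up = F.interpolate(self.phi_g(g), size=x.shape[2:], mode="bilinear",
                             align_corners=False)
        att = self.psi(F.relu(self.theta_x(x) + g_up))
        # Eq. (2): sigma_2 = Sigmoid; the resampling sigma_3 is a no-op here
        # because att already matches x's spatial size.
        alpha = torch.sigmoid(att)
        return x * alpha  # weighted feature values
```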
step 4.3: according to the residual error learning structure, carrying out jump connection on the output of each layer of the original U-net network, thereby improving the original U-net network structure, which comprises the following specific steps:
the schematic structural diagram of the residual error network is shown in fig. 7, the residual error network establishes an identity mapping by fitting the residual error, the deep network learning is converted into the shallow network learning problem, the problem is simplified to a certain extent, and the obtained residual error is output as follows:
H(y)=F(y)+y (3)
wherein, H (y) is residual output, F (y) is convolution output, and y is input of a single neural network unit;
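A generic sketch of equation (3); the convolutional branch F must preserve the tensor shape of y so that the element-wise addition is valid.

```python
import torch.nn as nn

class ResidualUnit(nn.Module):
    """H(y) = F(y) + y, with F any shape-preserving stack of conv layers."""

    def __init__(self, conv_branch: nn.Module):
        super().__init__()
        self.F = conv_branch  # F(y): the convolutional output

    def forward(self, y):
        return self.F(y) + y  # equation (3)
```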
step 4.4: and according to the obtained characteristic diagram, performing two classification operations on all pixel points in the image to determine a rectal tumor area and a non-rectal tumor area.
Step 5: according to the detection results of the YOLOv3 network, feed the CT images containing a tumor region into the improved U-net model for training, thereby segmenting the specific shape of the rectal tumor region, as follows:
The structure of the improved U-net network is shown in Fig. 8. It differs from the original U-net structure mainly in three respects. First, batch normalization is applied to the convolutional-layer outputs, which makes the deep network model easier and more stable to train and, in our experiments, effectively reduces the number of failed training runs. Second, the residual structure is introduced, which allows a deeper network with greater expressive power and, combined with the 1 × 1 convolutional layers, strengthens the nonlinearity of the network so that the information of each pixel is expressed more accurately. Finally, the attention mechanism is added to the network, so that the U-net weights the features during training, suppressing the response of irrelevant regions and letting the network concentrate on the tumor region to be segmented.
In conclusion, for the specific task of rectal CT image tumor segmentation, the traditional U-net segmentation network is improved mainly by drawing on the attention mechanism and the residual learning idea, which reduces the false-positive misdiagnosis rate and yields higher accuracy when segmenting the rectal tumor region.

Claims (6)

1. A rectal CT image tumor segmentation method based on an improved U-net, characterized by comprising the following steps:
step 1, cropping the rectal region from a rectal CT image using an image morphology method, and preprocessing the cropped image data to obtain a data set;
step 2, expanding the data set with a data augmentation technique based on random elastic deformation;
step 3, training a YOLOv3 neural network with the expanded data set, so as to detect the rectal region and judge whether a tumor region exists in the CT image;
step 4, optimizing the original U-net segmentation model according to an attention mechanism and a residual learning structure, thereby obtaining an improved U-net model;
and step 5, according to the detection results of the YOLOv3 network, feeding the CT images containing a tumor region into the improved U-net model for training, thereby segmenting the shape of the rectal tumor region.
2. The rectal CT image tumor segmentation method based on an improved U-net according to claim 1, characterized in that the method for extracting the rectal region from the CT image in step 1 specifically comprises the following steps:
step 1.1: binarizing the sliced rectal CT image;
step 1.2: dilating the binarized slice image;
step 1.3: finding the maximum circumscribed rectangle of the contour, obtaining its center coordinates, and cropping the rectal region around that center;
step 1.4: applying Gaussian filtering to the cropped rectal region, and labeling the corresponding images with the labelme software to produce the data set.
3. The rectal CT image tumor segmentation method based on an improved U-net according to claim 1, characterized in that the expansion of the rectal CT image data set in step 2 with the data augmentation technique based on random elastic deformation specifically comprises:
step 2.1: dividing the input image into an n × n grid and randomly displacing the grid points that do not lie on the image edge, sampling the displacement vectors from a Gaussian distribution whose standard deviation is given in pixels;
step 2.2: computing the displacements of the remaining pixels by bicubic interpolation, which produces a smooth deformation of the grid and thereby expands the image data set.
4. The rectal CT image tumor segmentation method based on an improved U-net according to claim 1, characterized in that in step 3 the YOLOv3 neural network is trained with the obtained data set to detect the rectal region and judge whether a tumor region exists in the CT image, specifically:
step 3.1: feeding the image data set into a Darknet-53 network and extracting image features with its convolutional and residual layers;
step 3.2: constructing detectors at three different scales and predicting at each scale from the obtained multi-layer feature maps, thereby judging whether a tumor region exists in the CT image.
5. The rectal CT image tumor segmentation method based on an improved U-net according to claim 4, characterized in that the three detector scales are 13 × 13, 26 × 26 and 52 × 52, respectively.
6. The rectal CT image tumor segmentation method based on an improved U-net according to claim 1, characterized in that in step 4 the original U-net segmentation model is optimized according to the attention mechanism and the residual learning structure to obtain the improved U-net model, which then segments the specific shape of the rectal tumor region, as follows:
step 4.1: feeding the rectal CT image data set containing tumor regions into the U-net network, and extracting shallow features for segmentation and deep features for localization with the contracting network and the expanding network of the U-net, respectively;
step 4.2: screening each feature for importance according to the attention mechanism, i.e. assigning it a weight, thereby improving the original U-net structure; the resulting feature output is

att^l = ψ^T(σ_1(ψ^T x^l + ψ^T g))    (1)

where att^l is the feature output, ψ^T denotes a 1 × 1 convolution, σ_1 is the ReLU activation function, x^l is the input feature, g is the gating input, the superscript l refers to the l-th network layer, and the superscript T denotes matrix transposition; the weight of each feature is

α^l = σ_3(σ_2(att^l))    (2)

where α^l is the weight of each feature, σ_3 is a resampling function, σ_2 is the Sigmoid activation function, and att^l is the feature output of equation (1);
step 4.3: adding skip connections to the output of each layer of the original U-net network according to the residual learning structure, thereby improving the original U-net structure; the resulting residual output is

H(y) = F(y) + y    (3)

where H(y) is the residual output, F(y) is the convolutional output, and y is the input of a single neural network unit;
step 4.4: according to the obtained feature maps, performing a binary classification of all pixels in the image to determine the rectal tumor region and the non-tumor region.
CN202010350024.5A, filed 2020-04-28 (priority date 2020-04-28): Rectum CT image tumor segmentation method based on improved U-net; status: Withdrawn; published as CN111640121A.

Priority Applications (1)

Application Number: CN202010350024.5A; Priority Date / Filing Date: 2020-04-28; Title: Rectum CT image tumor segmentation method based on improved U-net

Applications Claiming Priority (1)

Application Number: CN202010350024.5A; Priority Date / Filing Date: 2020-04-28; Title: Rectum CT image tumor segmentation method based on improved U-net

Publications (1)

Publication Number: CN111640121A; Publication Date: 2020-09-08

Family

ID=72331887

Family Applications (1)

Application Number: CN202010350024.5A; Priority Date: 2020-04-28; Filing Date: 2020-04-28; Title: Rectum CT image tumor segmentation method based on improved U-net

Country Status (1)

Country: CN; Publication: CN111640121A


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085113A (en) * 2020-09-14 2020-12-15 四川大学华西医院 Severe tumor image recognition system and method
CN112085113B (en) * 2020-09-14 2021-05-04 四川大学华西医院 Severe tumor image recognition system and method
CN112420199A (en) * 2020-12-17 2021-02-26 中国医学科学院皮肤病医院(中国医学科学院皮肤病研究所) Curative effect evaluation method based on vitiligo chromaticity
CN112580570A (en) * 2020-12-25 2021-03-30 南通大学 Method for detecting key points of human body posture image
CN112785617A (en) * 2021-02-23 2021-05-11 青岛科技大学 Automatic segmentation method for residual UNet rectal cancer tumor magnetic resonance image
CN112785617B (en) * 2021-02-23 2022-04-15 青岛科技大学 Automatic segmentation method for residual UNet rectal cancer tumor magnetic resonance image
CN113034461A (en) * 2021-03-22 2021-06-25 中国科学院上海营养与健康研究所 Pancreas tumor region image segmentation method and device and computer readable storage medium
CN112950624A (en) * 2021-03-30 2021-06-11 太原理工大学 Rectal cancer T stage automatic diagnosis method and equipment based on deep convolutional neural network
CN113362350A (en) * 2021-07-26 2021-09-07 海南大学 Segmentation method and device for cancer medical record image, terminal device and storage medium
CN113362350B (en) * 2021-07-26 2024-04-02 海南大学 Method, device, terminal equipment and storage medium for segmenting cancer medical record image
CN114638814A (en) * 2022-03-29 2022-06-17 华南农业大学 Method, system, medium and device for automatically staging colorectal cancer based on CT (computed tomography) image
CN114638814B (en) * 2022-03-29 2024-04-16 华南农业大学 Colorectal cancer automatic staging method, system, medium and equipment based on CT image
CN116797794A (en) * 2023-07-10 2023-09-22 北京透彻未来科技有限公司 Intestinal cancer pathology parting system based on deep learning

Similar Documents

Publication Title
CN111640121A (en) Rectum CT image tumor segmentation method based on improved U-net
Xie et al. Automated pulmonary nodule detection in CT images using deep convolutional neural networks
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
Nie et al. Automatic detection of melanoma with yolo deep convolutional neural networks
WO2021203795A1 (en) Pancreas ct automatic segmentation method based on saliency dense connection expansion convolutional network
CN110942446A (en) Pulmonary nodule automatic detection method based on CT image
Jing et al. Fine building segmentation in high-resolution SAR images via selective pyramid dilated network
CN109363698A (en) A kind of method and device of breast image sign identification
CN115496771A (en) Brain tumor segmentation method based on brain three-dimensional MRI image design
CN112329871B (en) Pulmonary nodule detection method based on self-correction convolution and channel attention mechanism
CN114565761A (en) Deep learning-based method for segmenting tumor region of renal clear cell carcinoma pathological image
CN113192076B (en) MRI brain tumor image segmentation method combining classification prediction and multi-scale feature extraction
CN109363697A (en) A kind of method and device of breast image lesion identification
An et al. Transitive transfer learning-based anchor free rotatable detector for SAR target detection with few samples
CN113537357A (en) Thyroid cancer CT image classification system based on depth residual error network
Tang et al. An object fine-grained change detection method based on frequency decoupling interaction for high-resolution remote sensing images
CN113421240A (en) Mammary gland classification method and device based on ultrasonic automatic mammary gland full-volume imaging
CN116091490A (en) Lung nodule detection method based on YOLOv4-CA-CBAM-K-means++ -SIOU
CN112614093A (en) Breast pathology image classification method based on multi-scale space attention network
Zhang et al. Multi-scale aggregation networks with flexible receptive fields for melanoma segmentation
Wang et al. Accurate lung nodule segmentation with detailed representation transfer and soft mask supervision
CN118037791A (en) Construction method and application of multi-mode three-dimensional medical image segmentation registration model
Sivapriya et al. ViT-DexiNet: a vision transformer-based edge detection operator for small object detection in SAR images
CN113643308A (en) Lung image segmentation method and device, storage medium and computer equipment
Lin et al. MM-UNet: A novel cross-attention mechanism between modules and scales for brain tumor segmentation

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 2020-09-08)