CN112790782B - Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning - Google Patents
Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning
- Publication number: CN112790782B
- Application number: CN202110142618.1A
- Authority
- CN
- China
- Prior art keywords
- ctv
- partition
- drainage
- network
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A61B6/032—Transmission computed tomography [CT]
- A61B6/50—Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
- A61B6/5211—Devices using data or image processing involving processing of medical diagnostic data
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/11—Region-based segmentation
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/20081—Training; Learning
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30028—Colon; Small intestine
- G06T2207/30096—Tumor; Lesion
- G06T2207/30204—Marker
Abstract
The invention discloses a deep learning-based automatic pelvic tumor CTV delineation system applicable to pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV, comprising the following steps. Step S1: acquire CT image data and clinician-labeled drainage-area partitions, and preprocess the image data. Step S2: construct a deep learning segmentation model for the drainage-area partitions. Step S3: input the CT image data and clinician-labeled drainage-partition images from steps S1 and S2 into a network, train the network, and obtain partition contours. Step S4: automatically generate the cervical cancer CTV contour from the partition contours. Step S5: automatically generate the rectal cancer CTV contour from the partition contours. The automatic delineation system can assist physicians in delineating the cervical cancer CTV and rectal cancer CTV according to each patient's disease and staging, and the introduced dense network effectively improves recognition of the pelvic lymph drainage areas.
Description
Technical Field
The invention relates to the technical field of medical imaging and computing, and in particular to a deep learning-based system for automatically delineating pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV in CT images.
Background
In the field of radiotherapy, accurate delineation of the cervical cancer CTV (clinical target volume) and the rectal cancer CTV has important clinical significance. When delineating these CTVs, clinicians must refer to the pelvic lymph drainage areas while taking the patient's clinical stage and treatment regimen into account. Automatic delineation of the pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV therefore has very important clinical significance.
At present, the pelvic lymph drainage areas, the cervical cancer CTV, and the rectal cancer CTV are delineated entirely by hand by clinicians. This conventional manual approach has several disadvantages. First, manual delineation consumes a lot of time, often several hours per patient. Second, delineation errors occur easily, are hard to detect, and can lead to medical accidents. Third, the quality of the delineation depends on the clinician's experience. Fourth, different physicians interpret the same patient's clinical stage and treatment plan differently, producing inconsistent delineation styles. Automatic delineation of the pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV can effectively avoid these problems and assist physicians in delineating quickly, simply, accurately, and consistently.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
The invention aims to provide a deep learning-based automatic pelvic tumor CTV delineation system that assists physicians in delineating the cervical cancer CTV and rectal cancer CTV according to each patient's disease and staging; the introduced dense network effectively improves recognition of the pelvic lymph drainage areas.
To achieve this aim, the invention provides a deep learning-based automatic pelvic tumor CTV delineation system applicable to pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV, comprising the following steps. Step S1: acquire CT image data and clinician-labeled drainage-area partitions, and preprocess the image data. Step S2: construct a deep learning segmentation model for the drainage-area partitions. Step S3: input the CT image data and clinician-labeled drainage-partition images from steps S1 and S2 into a network, train the network, and obtain partition contours. Step S4: automatically generate the cervical cancer CTV contour from the partition contours. Step S5: automatically generate the rectal cancer CTV contour from the partition contours.
In a preferred embodiment, preprocessing the image data in step S1 comprises the following steps: step S11: acquiring a large number of multi-modal CT three-dimensional images and the corresponding clinician-labeled drainage-area partition contour maps; step S12: extracting the body contour from the image and cropping the CT image to the bounding box of the body contour; step S13: normalizing the two-dimensional CT image pixel values to an abdominal window; step S14: resampling the image to a fixed size and normalizing it; and step S15: applying data augmentation, including random flipping, random rotation, random warping, random noise, and random affine transformation.
In a preferred embodiment, step S2 comprises the following steps: step S21: constructing the basic block submodule of the lymph drainage area segmentation network model, where each encoding block consists of a residual module and a pooling layer and has three inputs and three outputs, the inputs being the down-sampled feature of the parent node in the layer above, the feature of the sibling node in the same layer, and the up-sampled feature of the child node in the layer below, and the outputs being an up-sampled feature, a same-layer node feature, and a down-sampled feature; step S22: constructing the pelvic drainage area recognition model framework, in which three paths composed of basic blocks form a down-sampling path, a middle-layer path, and an up-sampling path, where a basic block on the down-sampling path takes only the parent node's feature map as input and outputs only the up-sampled feature, and short connections between the down-sampling and up-sampling paths accelerate network convergence; and step S23: introducing dilated convolution into the first-layer basic block of the network model, which enlarges the network receptive field and strengthens the network's ability to recognize large drainage areas.
In a preferred embodiment, step S3 comprises the following steps: step S31: inputting a large amount of preprocessed multi-modal CT patient data, applying data augmentation to prevent overfitting, and dividing the labeled drainage-area partitions into two groups, the first group being clinical partitions and the second group being partitions used only to assist generation of the rectal cancer CTV and cervical cancer CTV; step S32: randomly grouping the augmented images into batches, inputting them into the network, and training the network until the evaluation metric no longer improves, then saving the model; step S33: inputting new data into the saved network model and outputting probability maps of the clinical lymph partitions and the auxiliary lymph partitions; and step S34: post-processing the partitions.
In a preferred embodiment, step S4 comprises the following steps: step S41: converting the clinical drainage areas and the auxiliary partitions of the partition contours into binary images; step S42: retaining or removing particular clinical drainage areas and auxiliary partitions according to the case stage and cervical cancer characteristics; step S43: fusing the bilaterally symmetric clinical drainage areas into two large regions using a traditional method; and step S44: fusing the cervical cancer auxiliary partition with the two large regions using a traditional method to generate the cervical cancer CTV.
In a preferred embodiment, step S5 comprises the following steps: step S51: converting the clinical drainage areas and the auxiliary partitions of the partition contours into binary images; step S52: retaining or removing particular clinical drainage areas and auxiliary partitions according to the case stage and rectal cancer characteristics; step S53: dividing the clinical partitions into directly fusible and indirectly fusible clinical partitions, and fusing the rectal cancer auxiliary partition with the directly fusible clinical partitions to generate one large region; and step S54: fusing the indirectly fusible clinical partitions with that large region into one region using a traditional method to generate the rectal cancer CTV.
Compared with the prior art, the deep learning-based automatic pelvic tumor CTV delineation system has the following beneficial effects. The dense residual network structure lets the network extract multi-scale information simultaneously, so small drainage areas are located and segmented better. Dilated convolution enlarges the receptive field, so large drainage areas are segmented well. Because the network generates clinical lymph partitions and auxiliary partitions at the same time, controllable automatic CTV generation becomes possible. The pelvic lymph drainage area segmentation model helps physicians accurately delineate target areas and lymph nodes, while the cervical cancer CTV and rectal cancer CTV are generated automatically according to the patient's stage. This greatly improves physicians' delineation efficiency and the overall delineation workflow.
Drawings
FIG. 1 is a schematic flow diagram of an automatic delineation system according to an embodiment of the invention;
fig. 2 is a schematic diagram of a deep learning network structure of an automatic delineation system according to an embodiment of the present invention.
Detailed Description
The following detailed description of the present invention is provided in conjunction with the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element or component but not the exclusion of any other element or component.
As shown in FIG. 1 and FIG. 2, an automatic delineation system for deep learning-based pelvic tumor CTV according to a preferred embodiment of the present invention is applicable to pelvic lymph drainage areas, cervical cancer CTV, and rectal cancer CTV, and comprises the following steps:
step S1: and acquiring CT image data and clinician labeling drainage area partitions, and preprocessing the image data. The preprocessing of the image data in step S1 includes the steps of: step S11: acquiring a large number of multi-modal CT three-dimensional images and corresponding clinician labeling drainage area sectional profile maps; step S12: acquiring an image body contour, and intercepting a CT image from the CT image according to the size generated by the image body contour; step S13: the two-dimensional CT image pixel values are normalized to the abdominal window.
The normalization is computed as follows:
lower = c - w/2;
higher = c + w/2;
x[x < lower] = lower;
x[x > higher] = higher;
x = (x - lower) / (higher - lower);
where x is the CT pixel matrix, c is the window level, and w is the window width; values below the window are clamped to lower so that the normalized result lies in [0, 1].
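A minimal NumPy sketch of this windowing step. The window level c and width w shown are hypothetical soft-tissue values, not fixed by the patent; note that values below the window are clamped to `lower` rather than set to 0, so the output stays in [0, 1]:

```python
import numpy as np

def window_normalize(x, c=40.0, w=400.0):
    """Clip a CT array to a window (level c, width w) and scale to [0, 1].

    c=40, w=400 approximate an abdominal soft-tissue window; these values
    are illustrative only (the patent does not specify them).
    """
    lower = c - w / 2.0
    higher = c + w / 2.0
    x = np.asarray(x, dtype=np.float32).copy()
    x[x < lower] = lower    # clamp below the window so the result stays >= 0
    x[x > higher] = higher  # clamp above the window so the result stays <= 1
    return (x - lower) / (higher - lower)
```

With c=40 and w=400, the window spans [-160, 240] HU: air (-500) maps to 0, soft tissue near the level maps to 0.5, and bone (1000) maps to 1.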
step S14: resampling the image to a fixed size, and normalizing; step S15: performing data enhancement includes: random flipping, random rotation, random warping, random noise, random affine transformation.
Step S2: and constructing a deep learning segmentation model of the drainage distinguishing area. The step S2 includes the following steps:
step S21: constructing a lymphatic drainage area cutting network model. Firstly, a basic block of a sub-module is constructed, a coding block is composed of a residual module and a pooling layer, and simultaneously, the coding block has three inputs and three outputs, wherein the inputs are from a parent node downsampling characteristic of the previous layer, a brother node characteristic of the same layer and a son node upsampling characteristic of the next layer, and the inputs are used for outputting an upsampling characteristic, a brother node characteristic and a downsampling characteristic. Step S22: constructing a pelvic cavity drainage area identification model framework, wherein three paths consisting of basic blocks comprise a down-sampling path, a middle layer path and an up-sampling path; the input of the basic block of the downsampling path only has the feature graph from the parent node, and the basic block of the downsampling path only outputs the upsampling feature. While a short connection between the down-sampling path and the up-sampling path also exists to speed up network convergence. Step S23: in the network model, expansion convolution is introduced into the first layer basic block, so that the network receptive field is enlarged, and the identification capability of the network to a large drainage area is enhanced.
Step S3: and (4) processing the above steps to obtain CT image data and clinician labeling drainage partition image input networks, training the networks, and obtaining partition outlines. Step S3 includes the following steps:
step S31: inputting a large amount of preprocessed multi-modal CT patient data, enhancing the data to prevent overfitting, and dividing the labeled image into two groups in a drainage way, wherein the first group is a clinical partition; the second group is the partitions used only to aid in the generation of rectal cancer CTV and cervical cancer CTV; step S32: randomly forming a group by the images after data enhancement, inputting the group into a network, and training the network; and calculating a training error between the prediction result and the doctor label, and guiding the network to learn the supervision information of the lymph drainage area. The evaluation error is calculated until the evaluation error is preserved, and the model is preserved.
The training error is computed between the prediction and the gold-standard label, for example as a Dice loss:
Loss = 1 - (2 Σᵢ pᵢgᵢ) / (Σᵢ pᵢ + Σᵢ gᵢ)
where N refers to the N data samples, pᵢ represents the ith pixel in the prediction image, and gᵢ represents the ith pixel in the gold-standard label image.
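The description names only the ingredients of the loss (prediction pixels and gold-standard pixels indexed by i), so the exact formula is an assumption here; a soft Dice loss is a common choice consistent with those variables. A minimal NumPy sketch:

```python
import numpy as np

def dice_loss(pred, gold, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary
    gold-standard label, summed over pixel positions i. Offered as one
    plausible form of the training error, not the patent's exact loss."""
    p = pred.ravel()
    g = gold.ravel()
    dice = (2.0 * np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    return 1.0 - dice  # 0 for a perfect match, approaching 1 for no overlap
```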
Step S33: inputting new data into the stored network model, and outputting a probability map of clinical lymph partition and auxiliary lymph partition; step S34: and (4) performing post-processing partition.
Step S4: the cervical cancer CTV contours are automatically generated by the zonal contours. The construction of the deep learning segmentation model in the step S4 includes the following steps: step S41: respectively converting the clinical drainage area and the auxiliary area of the area outline into binary images; step S42: depending on the case stage and cervical cancer characteristics, certain clinical drainage and accessory compartments are retained or removed; step S43: respectively fusing the clinical drainage areas with the bilaterally symmetrical structures into two large areas by using a traditional method; step S44: the cervical cancer CTV is generated by fusing the cervical cancer auxiliary partition and the two large domains generated by the above fusion using a conventional method.
Step S5: generating automatically a rectal cancer CTV contour by a partition contour. Step S5 includes the following steps: step S51: respectively converting the clinical drainage area and the auxiliary area of the area outline into binary images; step S52: depending on the case stage and rectal cancer characteristics, certain clinical drainage and auxiliary compartments are retained or removed; step S53: dividing the clinical zones into directly fusible clinical zones and indirectly fusible clinical zones; fusing a colorectal cancer auxiliary partition and a directly-fused clinical partition into a large area; step S54: the indirectly fusible clinical compartment and the generated large compartment are fused to a region, i.e. rectal cancer CTV, using conventional methods.
In conclusion, the deep learning-based automatic pelvic tumor CTV delineation system has the following advantages. The dense residual network structure lets the network extract multi-scale information simultaneously, so small drainage areas are located and segmented better. Dilated convolution enlarges the receptive field, so large drainage areas are segmented well. Because the network generates clinical lymph partitions and auxiliary partitions at the same time, controllable automatic CTV generation becomes possible. The pelvic lymph drainage area segmentation model helps physicians accurately delineate target areas and lymph nodes, while the cervical cancer CTV and rectal cancer CTV are generated automatically according to the patient's stage; this greatly improves physicians' delineation efficiency and the overall delineation workflow. The system can also assist physicians in delineating the cervical cancer CTV and rectal cancer CTV according to the patient's disease and staging, and the introduced dense network effectively improves recognition of the pelvic lymph drainage areas.
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.
Claims (3)
1. An automatic pelvic tumor CTV delineation system based on deep learning is suitable for pelvic lymph drainage areas, cervical cancer CTV and rectal cancer CTV in CT images, and is characterized by comprising:
a module for acquiring CT image data and clinician-labeled drainage-area partitions and preprocessing the image data;
a module for constructing a drainage-area-partition deep learning segmentation model;
a module for inputting the CT image data and the clinician-labeled drainage-area partition images, processed by the preceding two modules, into a network and training the network to obtain partition contours;
a module for automatically generating a cervical cancer CTV contour from the partition contours; and
a module for automatically generating a rectal cancer CTV contour from the partition contours;
wherein inputting the CT image data and the clinician-labeled drainage-area partition images into the network, training the network, and obtaining the partition contours comprises:
inputting a large amount of preprocessed multi-modal CT patient data, applying data augmentation to prevent overfitting, and dividing the labeled drainage-area partitions into two groups, the first group being clinical partitions and the second group being partitions used only to assist generation of the rectal cancer CTV and cervical cancer CTV;
randomly grouping the augmented images into batches, inputting them into the network, and training the network until the evaluation metric no longer improves, then saving the model;
inputting new data into the saved network model and outputting probability maps of the clinical lymph partitions and the auxiliary lymph partitions; and
post-processing the partitions;
wherein automatically generating a cervical cancer CTV contour from the partition contours comprises:
converting the clinical drainage areas and the auxiliary partitions of the partition contours into binary images;
retaining or removing particular clinical drainage areas and auxiliary partitions according to the case stage and cervical cancer characteristics;
fusing the bilaterally symmetric clinical drainage areas into two large regions using a traditional method; and
fusing the cervical cancer auxiliary partition with the two large regions using a traditional method to generate the cervical cancer CTV;
and automatically generating a rectal cancer CTV contour from the partition contours comprises:
converting the clinical drainage areas and the auxiliary partitions of the partition contours into binary images;
retaining or removing particular clinical drainage areas and auxiliary partitions according to the case stage and rectal cancer characteristics;
dividing the clinical partitions into directly fusible clinical partitions and indirectly fusible clinical partitions, and fusing the rectal cancer auxiliary partition with the directly fusible clinical partitions to generate one large region; and
fusing the indirectly fusible clinical partitions with the large region using a traditional method to generate the rectal cancer CTV.
2. The deep-learning-based automatic pelvic tumor CTV (clinical target volume) delineation system according to claim 1, wherein the module for collecting CT image data and clinician-labeled drainage regions and for preprocessing the image data comprises:
acquiring a large number of multi-modal three-dimensional CT images and the corresponding clinician-labeled drainage-region cross-sectional contour maps;
extracting the body contour from the image, and cropping the CT image according to the bounding size of the body contour;
normalizing the pixel values of the two-dimensional CT image to an abdomen window;
resampling the image to a fixed size and normalizing; and
performing data enhancement, comprising: random flipping, random rotation, random warping, random noise, and random affine transformation.
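The preprocessing steps of claim 2 can be sketched in NumPy. The window center/width values and the nearest-neighbour resampling below are assumptions for illustration (a common abdomen window is center 40 HU, width 350 HU; the patent does not state its values or interpolation method):

```python
import numpy as np

WINDOW_CENTER, WINDOW_WIDTH = 40.0, 350.0  # assumed abdomen window (HU)

def window_normalize(ct_hu):
    """Clip raw Hounsfield units to the abdomen window, then scale to [0, 1]."""
    lo = WINDOW_CENTER - WINDOW_WIDTH / 2
    hi = WINDOW_CENTER + WINDOW_WIDTH / 2
    clipped = np.clip(ct_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

def resample_nearest(img, size):
    """Nearest-neighbour resample of a 2D slice to a fixed output size."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[np.ix_(rows, cols)]

slice_hu = np.full((4, 4), 40.0)        # toy slice at the window centre
norm = window_normalize(slice_hu)       # every pixel maps to 0.5
fixed = resample_nearest(norm, (2, 2))  # resampled to the fixed size
```

A real pipeline would operate on full 3D volumes and typically use linear interpolation for resampling; this sketch only shows the windowing and fixed-size steps.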
3. The deep-learning-based automatic pelvic tumor CTV (clinical target volume) delineation system according to claim 1, wherein the module for constructing the drainage-region deep learning segmentation model comprises:
constructing a lymphatic drainage region partition network model: first, constructing the basic-block sub-module of the model, wherein an encoding block consists of a residual module and a pooling layer and has three inputs and three outputs; the inputs are the down-sampled feature from the parent node in the layer above, the feature of the sibling node in the same layer, and the up-sampled feature from the child node in the layer below, and the outputs are an up-sampled feature, a same-layer node feature, and a down-sampled feature;
constructing the pelvic drainage-region recognition model framework, wherein three paths composed of basic blocks comprise a down-sampling path, a middle-layer path, and an up-sampling path; the input of a basic block on the down-sampling path comes only from the feature map of its parent node, and such a basic block outputs only the up-sampled feature; meanwhile, short connections exist between the down-sampling path and the up-sampling path to accelerate network convergence; and
in the network model, dilated convolution is introduced into the first-layer basic blocks to enlarge the network's receptive field and enhance its ability to recognize large drainage regions.
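The claim's point about dilated convolution is that dilation enlarges the receptive field without adding weights: a kernel of size k with dilation d covers d·(k−1)+1 input positions. A plain-NumPy 1D sketch (an illustration, not the patent's actual network code) makes this concrete:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1D dilated cross-correlation in plain NumPy.
    Returns the output and the layer's receptive field size."""
    k = len(kernel)
    span = dilation * (k - 1)      # distance covered by the dilated kernel
    pad = span // 2                # 'same' padding for an odd-sized kernel
    xp = np.pad(x, pad)
    out = np.zeros(len(x), dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out, span + 1           # receptive field = d*(k-1) + 1

x = np.ones(8)
kernel = np.array([1.0, 1.0, 1.0])
out1, rf1 = dilated_conv1d(x, kernel, dilation=1)  # receptive field 3
out2, rf2 = dilated_conv1d(x, kernel, dilation=2)  # receptive field 5
```

The same three weights see a window of 5 inputs instead of 3 when dilation is 2, which is why the first-layer blocks use dilation to capture large drainage regions.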
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110142618.1A CN112790782B (en) | 2021-02-02 | 2021-02-02 | Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110142618.1A CN112790782B (en) | 2021-02-02 | 2021-02-02 | Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112790782A (en) | 2021-05-14 |
CN112790782B (en) | 2022-06-24 |
Family
ID=75813712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110142618.1A Active CN112790782B (en) | 2021-02-02 | 2021-02-02 | Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112790782B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113288193B (en) * | 2021-07-08 | 2022-04-01 | 广州柏视医疗科技有限公司 | Automatic delineation system of CT image breast cancer clinical target area based on deep learning |
CN113488146B (en) * | 2021-07-29 | 2022-04-01 | 广州柏视医疗科技有限公司 | Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma |
CN113689419A (en) * | 2021-09-03 | 2021-11-23 | 电子科技大学长三角研究院(衢州) | Image segmentation processing method based on artificial intelligence |
CN114494496B (en) * | 2022-01-27 | 2022-09-20 | 深圳市铱硙医疗科技有限公司 | Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image |
CN116570848B (en) * | 2023-07-13 | 2023-09-15 | 神州医疗科技股份有限公司 | Radiotherapy clinical auxiliary decision-making system based on automatic sketching |
CN117351489B (en) * | 2023-12-06 | 2024-03-08 | 四川省肿瘤医院 | Head and neck tumor target area delineating system for whole-body PET/CT scanning |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780491A (en) * | 2017-01-23 | 2017-05-31 | 天津大学 | The initial profile generation method used in GVF methods segmentation CT pelvis images |
CN108062754A (en) * | 2018-01-19 | 2018-05-22 | 深圳大学 | Segmentation, recognition methods and device based on dense network image |
CN109118490A (en) * | 2018-06-28 | 2019-01-01 | 厦门美图之家科技有限公司 | A kind of image segmentation network generation method and image partition method |
CN109754402A (en) * | 2018-03-15 | 2019-05-14 | 京东方科技集团股份有限公司 | Image processing method, image processing apparatus and storage medium |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
CN111127444A (en) * | 2019-12-26 | 2020-05-08 | 广州柏视医疗科技有限公司 | Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network |
CN111797779A (en) * | 2020-07-08 | 2020-10-20 | 兰州交通大学 | Remote sensing image semantic segmentation method based on regional attention multi-scale feature fusion |
CN111862021A (en) * | 2020-07-13 | 2020-10-30 | 中山大学 | Deep learning-based automatic head and neck lymph node and drainage area delineation method |
CN111968120A (en) * | 2020-07-15 | 2020-11-20 | 电子科技大学 | Tooth CT image segmentation method for 3D multi-feature fusion |
CN112057751A (en) * | 2020-08-14 | 2020-12-11 | 中南大学湘雅医院 | Automatic delineation method for organs endangered in pelvic cavity radiotherapy |
CN112150470A (en) * | 2020-09-22 | 2020-12-29 | 平安科技(深圳)有限公司 | Image segmentation method, image segmentation device, image segmentation medium, and electronic device |
- 2021-02-02 CN CN202110142618.1A patent/CN112790782B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780491A (en) * | 2017-01-23 | 2017-05-31 | 天津大学 | The initial profile generation method used in GVF methods segmentation CT pelvis images |
CN108062754A (en) * | 2018-01-19 | 2018-05-22 | 深圳大学 | Segmentation, recognition methods and device based on dense network image |
CN109754402A (en) * | 2018-03-15 | 2019-05-14 | 京东方科技集团股份有限公司 | Image processing method, image processing apparatus and storage medium |
CN109118490A (en) * | 2018-06-28 | 2019-01-01 | 厦门美图之家科技有限公司 | A kind of image segmentation network generation method and image partition method |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
CN111127444A (en) * | 2019-12-26 | 2020-05-08 | 广州柏视医疗科技有限公司 | Method for automatically identifying radiotherapy organs at risk in CT image based on depth semantic network |
CN111797779A (en) * | 2020-07-08 | 2020-10-20 | 兰州交通大学 | Remote sensing image semantic segmentation method based on regional attention multi-scale feature fusion |
CN111862021A (en) * | 2020-07-13 | 2020-10-30 | 中山大学 | Deep learning-based automatic head and neck lymph node and drainage area delineation method |
CN111968120A (en) * | 2020-07-15 | 2020-11-20 | 电子科技大学 | Tooth CT image segmentation method for 3D multi-feature fusion |
CN112057751A (en) * | 2020-08-14 | 2020-12-11 | 中南大学湘雅医院 | Automatic delineation method for organs endangered in pelvic cavity radiotherapy |
CN112150470A (en) * | 2020-09-22 | 2020-12-29 | 平安科技(深圳)有限公司 | Image segmentation method, image segmentation device, image segmentation medium, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN112790782A (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112790782B (en) | Automatic pelvic tumor CTV (clinical target volume) delineation system based on deep learning | |
CN112950651B (en) | Automatic delineation method of mediastinal lymph drainage area based on deep learning network | |
CN108765363B (en) | Coronary artery CTA automatic post-processing system based on artificial intelligence | |
CN110706246B (en) | Blood vessel image segmentation method and device, electronic equipment and storage medium | |
Commowick et al. | An efficient locally affine framework for the smooth registration of anatomical structures | |
Tian et al. | Multi-path convolutional neural network in fundus segmentation of blood vessels | |
CN113674253B (en) | Automatic segmentation method for rectal cancer CT image based on U-transducer | |
US8682074B2 (en) | Method for checking the segmentation of a structure in image data | |
CN106683104B (en) | Prostate Magnetic Resonance Image Segmentation method based on integrated depth convolutional neural networks | |
CN108324300B (en) | Method and apparatus for vessel segmentation | |
CN108053417A (en) | A kind of lung segmenting device of the 3DU-Net networks based on mixing coarse segmentation feature | |
CN109214397A (en) | The dividing method of Lung neoplasm in a kind of lung CT image | |
CN105389811A (en) | Multi-modality medical image processing method based on multilevel threshold segmentation | |
CN109509193B (en) | Liver CT atlas segmentation method and system based on high-precision registration | |
CN110008992B (en) | Deep learning method for prostate cancer auxiliary diagnosis | |
CN111784701B (en) | Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information | |
CN112991365B (en) | Coronary artery segmentation method, system and storage medium | |
WO2022247218A1 (en) | Image registration method based on automatic delineation | |
CN111275712A (en) | Residual semantic network training method oriented to large-scale image data | |
CN114170244A (en) | Brain glioma segmentation method based on cascade neural network structure | |
Jin et al. | Object recognition in medical images via anatomy-guided deep learning | |
CN109919216B (en) | Counterlearning method for computer-aided diagnosis of prostate cancer | |
CN113362360B (en) | Ultrasonic carotid plaque segmentation method based on fluid velocity field | |
CN116862930B (en) | Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes | |
CN112258536B (en) | Integrated positioning and segmentation method for calluses and cerebellum earthworm parts |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |