WO2022166800A1 - Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region - Google Patents
Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region
- Publication number
- WO2022166800A1 · PCT/CN2022/074510 · CN2022074510W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mediastinal
- lymphatic drainage
- network
- image
- deep learning
- Prior art date
Links
- 230000001926 lymphatic effect Effects 0.000 title claims abstract description 47
- 238000000034 method Methods 0.000 title claims abstract description 31
- 238000013135 deep learning Methods 0.000 title claims abstract description 26
- 230000011218 segmentation Effects 0.000 claims abstract description 35
- 238000012549 training Methods 0.000 claims abstract description 18
- 238000012360 testing method Methods 0.000 claims abstract description 11
- 238000007781 pre-processing Methods 0.000 claims abstract description 5
- 238000005192 partition Methods 0.000 claims description 31
- 210000004072 lung Anatomy 0.000 claims description 12
- 238000004364 calculation method Methods 0.000 claims description 10
- 238000012937 correction Methods 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 6
- 238000012795 verification Methods 0.000 claims description 6
- 238000011156 evaluation Methods 0.000 claims description 5
- 230000006870 function Effects 0.000 claims description 5
- 230000008569 process Effects 0.000 claims description 5
- 238000000605 extraction Methods 0.000 claims description 4
- 238000012952 Resampling Methods 0.000 claims description 3
- 230000004927 fusion Effects 0.000 claims description 3
- 230000007246 mechanism Effects 0.000 claims description 3
- 230000009466 transformation Effects 0.000 claims description 3
- 230000000877 morphologic effect Effects 0.000 claims 1
- 238000009966 trimming Methods 0.000 claims 1
- 238000010200 validation analysis Methods 0.000 abstract description 9
- 238000013434 data augmentation Methods 0.000 abstract 1
- 238000001959 radiotherapy Methods 0.000 description 4
- 210000001165 lymph node Anatomy 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000013138 pruning Methods 0.000 description 2
- 230000002685 pulmonary effect Effects 0.000 description 2
- 230000004083 survival effect Effects 0.000 description 2
- 206010058467 Lung neoplasm malignant Diseases 0.000 description 1
- 206010028980 Neoplasm Diseases 0.000 description 1
- 210000003484 anatomy Anatomy 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000021615 conjugation Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 201000005202 lung cancer Diseases 0.000 description 1
- 208000020816 lung neoplasm Diseases 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 210000005015 mediastinal lymph node Anatomy 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 231100000331 toxic Toxicity 0.000 description 1
- 230000002588 toxic effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present invention relates to the field of medical images, in particular to an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network.
- precise tumor radiotherapy technology can effectively improve treatment outcomes for patients and reduce toxic side effects, and precise radiotherapy relies on precise target contours.
- during target delineation, the target area must be drawn carefully with reference to the extent of the lymphatic drainage area.
- the mediastinal lymphatic drainage area also plays a very important role in the formulation of clinical staging and treatment principles for patients with lung cancer. Therefore, the automatic delineation of the drainage area has very important clinical significance. This method is helpful for clinicians to delineate the mediastinal drainage area quickly, accurately and with high consistency.
- the delineation speed is slow, which consumes a lot of precious time of the doctor;
- the accuracy of delineation depends on the clinical experience of the doctor and requires a lot of prior clinical knowledge; moreover, the results delineated by the same doctor at different times can differ considerably.
- the purpose of the present invention is to provide an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network.
- the network can better locate and segment small drainage areas, and at the same time can better capture long-range anatomical information, mitigating under-segmentation and over-segmentation problems.
- the present invention provides an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network, which is suitable for CT images.
- step S4 constructing a deep learning segmentation model;
- step S5: inputting the CT image data in the training set and the images of the mediastinal lymphatic drainage area manually marked by the doctor into the constructed deep learning segmentation model; after the training iterations converge, the segmentation model of the mediastinal lymphatic drainage area is saved, and the mediastinal lymphatic drainage area is then identified and predicted to obtain a probability map of each partition of the mediastinal lymphatic drainage area.
- the preprocessing of the CT image data and the doctor-annotated mediastinal lymphatic drainage area images in step S1 includes the following steps: Step S11: collect a large number of multi-modality, multi-distribution three-dimensional CT images and the corresponding contour maps drawn manually by clinicians; Step S12: resample the three-dimensional CT images and the doctor-annotated mediastinal lymphatic drainage area images to generate images with the same physical scale; Step S13: obtain the three-dimensional lung region and the mediastinal position, and crop the three-dimensional CT image to a fixed size according to them; and Step S14: normalize the pixel values of the two-dimensional CT images and generate multi-distribution CT images according to the lung window and the mediastinal window as input to the segmentation network.
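The resampling and cropping of steps S12–S13 can be sketched as follows. This is a minimal numpy illustration using nearest-neighbour interpolation; the function names, the isotropic target spacing, and the fixed crop corner are illustrative assumptions, not taken from the patent (a clinical pipeline would typically use linear or spline interpolation and derive the crop box from the detected lung region and mediastinal position).

```python
import numpy as np

def resample_to_spacing(volume, spacing, target_spacing):
    # Nearest-neighbour resample of a 3-D CT volume to a common physical
    # spacing (Step S12).
    spacing = np.asarray(spacing, dtype=float)
    target = np.asarray(target_spacing, dtype=float)
    new_shape = np.round(np.array(volume.shape) * spacing / target).astype(int)
    # For each output voxel, look up the nearest source voxel index.
    idx = [np.minimum((np.arange(n) * target[d] / spacing[d]).astype(int),
                      volume.shape[d] - 1)
           for d, n in enumerate(new_shape)]
    return volume[np.ix_(*idx)]

def crop_to_box(volume, corner, size):
    # Crop a fixed-size block starting at (z0, y0, x0) (Step S13); in the
    # method the corner would come from the lung region and mediastinum.
    z0, y0, x0 = corner
    return volume[z0:z0 + size[0], y0:y0 + size[1], x0:x0 + size[2]]

vol = np.arange(1000, dtype=float).reshape(10, 10, 10)   # toy CT volume
iso = resample_to_spacing(vol, spacing=(2.0, 1.0, 1.0),
                          target_spacing=(1.0, 1.0, 1.0))
patch = crop_to_box(iso, corner=(0, 0, 0), size=(8, 8, 8))
```

Resampling first puts every patient on the same physical grid, so that the fixed-size crop in step S13 covers the same anatomical extent across cases.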
- the data augmentation in step S3 includes: random flipping, random rotation, random distortion, random noise, random affine transformation, and random cropping.
- step S4 includes the following steps: Step S41: construct the feature-extraction sub-module of the segmentation model's network structure: first, two convolution operations and one downsampling extract the module's feature map; second, one upsampling and two convolution operations restore the original resolution, with skip connections fusing feature maps of different scales. Step S42: construct the attention module of the segmentation model's network structure: pyramid-downsample the keys and values in the attention module to greatly reduce computation and obtain multi-scale keys and values, then construct convolution operations to model the attention relationship between keys and queries, and finally retrieve the attended feature map under that relationship; the attention module can thereby capture long-range pixel dependencies and extract multi-scale pyramid features. Step S43: construct the overall segmentation network: reuse the feature-extraction sub-module of step S41 four times to obtain a large receptive field and sufficient network capacity; insert the attention module of step S42 into each feature-extraction sub-module so that the network captures long-range dependencies, enlarges its receptive field, and effectively extracts the attention module's multi-scale information at every layer; then reuse the resolution-recovery sub-module four times; and use short connections between modules for better backpropagation and feature fusion.
- step S5 includes the following steps: Step S51: after a large number of patients are processed by the preceding steps, the resulting augmented images are input into the deep learning network; during input, the lung region obtained in steps S1 to S3 limits the number of CT slices input per patient, reducing input of non-pulmonary regions. Step S52: input the augmented images into the network in random groups until the evaluation metric on the validation set no longer fluctuates significantly, and save the model that performs best on the validation set. Step S53: process the cases in the test set according to steps S1 to S3 and input them into the trained deep learning segmentation network to obtain N partitions; convert the resulting feature maps of the N partitions into segmentation semantic probability maps with the softmax function, then binarize the probability maps with a fixed threshold. Step S54: evaluate the mutual relationships of the N partitions to obtain a relationship table and correct each partition: if a partition does not meet the delineation standard defined by the doctor, it is processed by a correction procedure.
- the automatic delineation method of the mediastinal lymphatic drainage area based on the deep learning network of the present invention has the following beneficial effects: by introducing a multi-scale non-local attention mechanism, the network can better locate and segment small drainage areas and better capture long-range anatomical structure information, mitigating under-segmentation and over-segmentation; the segmentation model of the mediastinal lymphatic drainage area helps doctors delineate the target area and lymph nodes more accurately and provides a basis for confirming clinical staging and formulating treatment plans, which greatly reduces the burden on doctors and can improve patient survival.
- FIG. 1 is a schematic flowchart of an automatic delineation method according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a deep learning network structure of an automatic delineation method according to an embodiment of the present invention.
- an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network is suitable for CT images and includes the following steps. Step S1: collect CT image data and images of the mediastinal lymphatic drainage area manually annotated by doctors, and preprocess both. Step S2: group the preprocessed CT image data into a training set, a validation set and a test set. Step S3: perform data augmentation on the training, validation and test sets. Step S4: build a deep learning segmentation model.
- step S5: input the CT image data in the training set and the doctor-annotated mediastinal lymphatic drainage area images into the constructed deep learning segmentation model; after the training iterations converge, save the segmentation model of the mediastinal lymphatic drainage area, then perform identification and prediction of the mediastinal lymphatic drainage area, yielding a probability map for each zone of the mediastinal lymphatic drainage area.
- the preprocessing of the CT image data and the doctor-annotated mediastinal lymphatic drainage area images in step S1 includes the following steps. Step S11: collect a large number of multi-modality, multi-distribution three-dimensional CT images and the corresponding contour maps drawn manually by clinicians. Step S12: resample the three-dimensional CT images and the doctor-annotated mediastinal lymphatic drainage area images to generate images with the same physical scale. Step S13: acquire the three-dimensional lung region and the mediastinal position, and crop the three-dimensional CT image to a fixed size according to them. Step S14: normalize the pixel values of the two-dimensional CT images and generate multi-distribution CT images according to the lung window and the mediastinal window as input to the segmentation network.
- x is the CT pixel matrix
- c is the window level
- w is the window width
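The window-normalization formula of step S14 (rendered as an image in the original, with the variables x, c, and w defined above) is, in its standard form, a linear rescaling clipped to [0, 1]. A minimal numpy sketch follows; the specific lung window (level −600 HU, width 1500 HU) and mediastinal window (level 40 HU, width 400 HU) values are common clinical defaults, not figures taken from the patent.

```python
import numpy as np

def apply_window(x, c, w):
    # Map CT values to [0, 1] for window level c and window width w:
    # values below c - w/2 clip to 0, values above c + w/2 clip to 1.
    lo = c - w / 2.0
    return np.clip((x - lo) / w, 0.0, 1.0)

ct = np.array([-1200.0, -600.0, 40.0, 400.0])     # sample HU values
lung = apply_window(ct, c=-600, w=1500)           # lung window
mediastinum = apply_window(ct, c=40, w=400)       # mediastinal window
multi = np.stack([lung, mediastinum])             # multi-distribution input
```

Stacking the two windowed copies yields the "multi-distribution" input mentioned in step S14: one channel emphasizes lung parenchyma, the other soft tissue of the mediastinum.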
- the data augmentation in step S3 includes: random flipping, random rotation, random distortion, random noise, random affine transformation, and random cropping.
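A minimal 2-D sketch of paired augmentation: geometric transforms (flip, rotation) must be applied identically to image and label, while noise is added to the image only. Restricting rotation to 90° steps is a simplifying assumption here, not the patent's method, and the distortion/affine/crop transforms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, label):
    # Random flip: the same flip is applied to image and label.
    if rng.random() < 0.5:
        image, label = image[:, ::-1], label[:, ::-1]
    # Random rotation in 90-degree steps, again applied to both.
    k = int(rng.integers(0, 4))
    image, label = np.rot90(image, k), np.rot90(label, k)
    # Random noise perturbs the image only; the label stays binary.
    image = image + rng.normal(0.0, 0.01, image.shape)
    return image, label

img, lab = augment(np.zeros((4, 4)), np.eye(4))
```

Keeping the label untouched by intensity transforms is what preserves the gold-standard contours during augmentation.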
- step S4 includes the following steps. Step S41: construct the feature-extraction sub-module of the segmentation model's network structure: first, two convolution operations and one downsampling extract the module's feature map; second, one upsampling and two convolution operations restore the original resolution, with skip connections fusing feature maps of different scales.
- Step S42: construct the attention module of the segmentation model's network structure: pyramid-downsample the keys and values in the attention module to greatly reduce computation and obtain multi-scale keys and values, then construct convolution operations to model the attention relationship between keys and queries, and finally retrieve the attended feature map under that relationship.
- the attention module can capture long-range pixel dependencies and extract multi-scale pyramid features.
- Step S43: construct the overall segmentation network: reuse the feature-extraction sub-module of step S41 four times to obtain a large receptive field and sufficient network capacity; insert the attention module of step S42 into each feature-extraction sub-module so that the network captures long-range dependencies and enlarges its receptive field, with the attention module's multi-scale information effectively extracted at every layer.
- the resolution-recovery sub-module is then reused four times, and short connections are used between modules for better backpropagation and feature fusion.
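The core idea of step S42 — pyramid-downsampling the keys and values so that each pixel attends to a small set of pooled positions rather than to all H×W pixels — can be sketched in numpy. This is a single-head, convolution-free simplification under stated assumptions (the pooling scales 1/2/4 and the scaled-dot-product form are illustrative, not the patent's exact design):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pyramid_pool(feat, scales=(1, 2, 4)):
    # Average-pool a (C, H, W) map at several scales; flatten to (C, S),
    # where S is the total number of pooled positions across all scales.
    c, h, w = feat.shape
    pooled = []
    for s in scales:
        f = feat[:, :h - h % s, :w - w % s]
        f = f.reshape(c, h // s, s, w // s, s).mean(axis=(2, 4))
        pooled.append(f.reshape(c, -1))
    return np.concatenate(pooled, axis=1)

def multiscale_nonlocal(query, key, value):
    # Non-local attention against multi-scale pooled keys/values: the
    # attention matrix is (HW, S) instead of (HW, HW), so every pixel can
    # still attend to long-range context at much lower cost.
    c, h, w = query.shape
    q = query.reshape(c, -1).T              # (HW, C)
    k = pyramid_pool(key)                   # (C, S)
    v = pyramid_pool(value)                 # (C, S)
    attn = softmax(q @ k / np.sqrt(c))      # (HW, S)
    return (attn @ v.T).T.reshape(c, h, w)  # attended feature map

feat = np.random.default_rng(1).normal(size=(8, 8, 8))
ctx = multiscale_nonlocal(feat, feat, feat)
```

This is why the module both reduces computation relative to full non-local attention and injects multi-scale context: the pooled positions summarize the whole map at several resolutions.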
- step S5 includes the following steps. Step S51: after a large number of patients are processed by the preceding steps, the resulting augmented images are input into the deep learning network; during input, the lung region obtained in steps S1 to S3 limits the number of CT slices input per patient, reducing input of non-pulmonary regions.
- L_loss = L_IOU + a · L_AC, where a is a balance factor
- N refers to the total amount of data
- p_i denotes the i-th pixel of the prediction image
- q_i denotes the i-th pixel of the gold-standard image
- p_ij denotes the pixel in row i, column j of the prediction image
- n is the total number of pixels.
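A sketch of the combined loss above in numpy, using the symbols just defined. L_IOU is implemented as a soft IoU loss; the patent does not spell out L_AC, so a total-variation contour-length term (one common active-contour-style regularizer) stands in for it here — an assumption, not the patent's formulation.

```python
import numpy as np

def iou_loss(p, q, eps=1e-6):
    # Soft IoU loss between predicted probabilities p and gold standard q.
    inter = (p * q).sum()
    union = (p + q - p * q).sum()
    return 1.0 - (inter + eps) / (union + eps)

def contour_length(p):
    # Total variation of the probability map: a proxy for contour length,
    # penalizing ragged boundaries.
    return np.abs(np.diff(p, axis=0)).sum() + np.abs(np.diff(p, axis=1)).sum()

def total_loss(p, q, a=0.1):
    # L_loss = L_IOU + a * L_AC with balance factor a.
    return iou_loss(p, q) + a * contour_length(p)

gold = np.ones((4, 4))
loss_good = total_loss(np.ones((4, 4)), gold)    # perfect prediction
loss_bad = total_loss(np.zeros((4, 4)), gold)    # empty prediction
```

A perfect prediction drives both terms toward zero, while a missed region is penalized by the IoU term regardless of contour smoothness.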
- Step S52: input the augmented images into the network in random groups until the evaluation metric on the validation set no longer fluctuates significantly, and save the model that performs best on the validation set.
- N refers to the total amount of data
- p_i denotes the i-th pixel of the prediction image
- q_i denotes the i-th pixel of the gold-standard image.
- Step S53: process the cases in the test set according to steps S1 to S3 and input them into the trained deep learning segmentation network to obtain N partitions; convert the resulting feature maps of the N partitions into segmentation semantic probability maps with the softmax function, then binarize the probability maps with a fixed threshold.
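Step S53's conversion of the N partition feature maps into binary masks can be sketched as follows; the 0.5 threshold and the toy logits are illustrative assumptions.

```python
import numpy as np

def partition_masks(logits, threshold=0.5):
    # Channel-wise softmax over the N partition feature maps (N, H, W)
    # yields the segmentation semantic probability map; a fixed threshold
    # then binarizes each partition's probability map.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    prob = e / e.sum(axis=0, keepdims=True)
    return (prob > threshold).astype(np.uint8), prob

logits = np.array([[[5.0, 0.0], [0.0, 5.0]],
                   [[0.0, 5.0], [5.0, 0.0]],
                   [[0.0, 0.0], [0.0, 0.0]]])   # N = 3 partitions, 2x2 image
masks, prob = partition_masks(logits)
```

Because the softmax is taken across partitions, each pixel's probabilities sum to one, and the threshold assigns it to at most one drainage-area partition.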
- step S54 evaluate the mutual relationship of the N partitions, obtain a mutual relationship table, and correct each partition.
- if a partition does not meet the delineation standard defined by the doctor, it is processed by a correction procedure; if a partition lacks the relationships with other partitions recorded in the relationship table, it is likewise processed by the correction procedure; once all N partitions satisfy the clinician's delineation criteria, the final segmentation result of the mediastinal lymphatic drainage area is obtained.
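The step S54 correction loop can be outlined as below. The relation check (mask overlap), the station names, and the `meets_standard` predicate are all hypothetical placeholders — the patent's actual relationship table and correction procedure are clinically defined and not specified here.

```python
import numpy as np

def correct_partitions(masks, relation_table, meets_standard):
    # Flag every partition that either fails the delineation standard or
    # lacks an expected relation to another partition; flagged partitions
    # would then be handed to the correction procedure.
    def related(a, b):
        return bool((masks[a] & masks[b]).any())   # toy relation: overlap
    flagged = []
    for name in masks:
        ok = meets_standard(masks[name])
        ok = ok and all(related(name, other)
                        for other in relation_table.get(name, []))
        if not ok:
            flagged.append(name)
    return flagged

masks = {"2R": np.array([[1, 1], [0, 0]], dtype=np.uint8),
         "4R": np.array([[1, 0], [0, 0]], dtype=np.uint8),
         "7":  np.array([[0, 0], [0, 1]], dtype=np.uint8)}
needs_fix = correct_partitions(masks,
                               relation_table={"4R": ["2R"], "7": ["4R"]},
                               meets_standard=lambda m: bool(m.any()))
```

In this toy run, partition "7" is flagged because it has no overlap with "4R" as the relation table requires; in practice the loop would repeat until all N partitions satisfy the clinician's criteria.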
- the automatic delineation method of the mediastinal lymphatic drainage area based on the deep learning network of the present invention has the following advantages: by introducing a multi-scale non-local attention mechanism, the network can better locate and segment small drainage areas and better capture long-range anatomical structure information, mitigating under-segmentation and over-segmentation; the segmentation model of the mediastinal lymphatic drainage area helps doctors delineate the target area and lymph nodes more accurately and provides a basis for confirming clinical staging and formulating treatment plans, greatly reducing the burden on doctors while improving patient survival.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Disclosed in the present invention is a deep learning network-based automatic delineation method for a mediastinal lymphatic drainage region, suitable for CT images. The automatic delineation method comprises the following steps: S1: collecting CT image data and a mediastinal lymphatic drainage region image manually marked by a doctor, and preprocessing the CT image data and the mediastinal lymphatic drainage region image manually marked by the doctor; S2: grouping the preprocessed CT image data to obtain a training set, a validation set, and a test set; S3: performing data augmentation on the training set, the validation set, and the test set; S4: constructing a deep learning segmentation model; and S5: inputting the CT image data and the mediastinal lymphatic drainage region image manually marked by the doctor in the training set into the constructed deep learning segmentation model, after iterative convergence of training, saving the segmentation model of the mediastinal lymphatic drainage region, and then performing recognition and prediction on the mediastinal lymphatic drainage region to obtain a probability map of each subregion of the mediastinal lymphatic drainage region. The network can accurately locate and segment small drainage regions.
Description
The present invention relates to the field of medical imaging, and in particular to an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network.
In the field of radiotherapy, precise tumor radiotherapy technology can effectively improve treatment outcomes and reduce toxic side effects, and precise radiotherapy relies on precise target contours. During target delineation, the target area must be drawn carefully with reference to the extent of the lymphatic drainage area. In addition, the mediastinal lymphatic drainage area plays a very important role in the clinical staging and the formulation of treatment principles for patients with lung cancer. Automatic delineation of the drainage area therefore has very important clinical significance: it helps clinicians delineate the mediastinal drainage area quickly, accurately and with high consistency.
At present, the mediastinal drainage area is delineated entirely by hand by clinicians. This approach has the following disadvantages:
First, delineation is slow and consumes a great deal of the doctor's valuable time. Second, delineation accuracy depends on the doctor's clinical experience and requires extensive prior clinical knowledge. Third, the results delineated by the same doctor at different times can differ considerably. Fourth, human error is unavoidable. Therefore, on the basis of digital radiotherapy, helping doctors delineate the lymphatic drainage area quickly, accurately and consistently is extremely important.
The information disclosed in this Background section is only for enhancement of understanding of the general background of the invention, and should not be taken as an acknowledgement or any form of suggestion that this information forms prior art already known to a person of ordinary skill in the art.
SUMMARY OF THE INVENTION
The purpose of the present invention is to provide an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network. By introducing a multi-scale non-local attention module, the network can better locate and segment small drainage areas, and can better capture long-range anatomical structure information, mitigating under-segmentation and over-segmentation.
To achieve the above purpose, the present invention provides an automatic delineation method for the mediastinal lymphatic drainage area based on a deep learning network, suitable for CT images and comprising the following steps. Step S1: collect CT image data and images of the mediastinal lymphatic drainage area manually annotated by doctors, and preprocess both. Step S2: group the preprocessed CT image data into a training set, a validation set and a test set. Step S3: perform data augmentation on the training, validation and test sets. Step S4: construct a deep learning segmentation model. Step S5: input the CT image data in the training set together with the doctor-annotated mediastinal lymphatic drainage area images into the constructed deep learning segmentation model; after the training iterations converge, save the segmentation model of the mediastinal lymphatic drainage area, then identify and predict the mediastinal lymphatic drainage area to obtain a probability map of each partition of the mediastinal lymphatic drainage area.
In a preferred embodiment, the preprocessing of the CT image data and the doctor-annotated mediastinal lymphatic drainage area images in step S1 includes the following steps. Step S11: collect a large number of multi-modality, multi-distribution three-dimensional CT images and the corresponding contour maps drawn manually by clinicians. Step S12: resample the three-dimensional CT images and the doctor-annotated mediastinal lymphatic drainage area images to generate images with the same physical scale. Step S13: acquire the three-dimensional lung region and the mediastinal position, and crop the three-dimensional CT image to a fixed size according to them. Step S14: normalize the pixel values of the two-dimensional CT images and generate multi-distribution CT images according to the lung window and the mediastinal window as input to the segmentation network.
In a preferred embodiment, the data augmentation in step S3 includes: random flipping, random rotation, random distortion, random noise, random affine transformation, and random cropping.
In a preferred embodiment, step S4 includes the following steps. Step S41: construct the feature-extraction sub-module of the segmentation model's network structure: first, two convolution operations and one downsampling extract the module's feature map; second, one upsampling and two convolution operations restore the original resolution, with skip connections fusing feature maps of different scales. Step S42: construct the attention module of the segmentation model's network structure: pyramid-downsample the keys and values in the attention module to greatly reduce computation and obtain multi-scale keys and values, then construct convolution operations to model the attention relationship between keys and queries, and finally retrieve the attended feature map under that relationship; the attention module can thereby capture long-range pixel dependencies and extract multi-scale pyramid features. Step S43: construct the overall segmentation network: reuse the feature-extraction sub-module of step S41 four times to obtain a large receptive field and sufficient network capacity; insert the attention module of step S42 into each feature-extraction sub-module so that the network captures long-range dependencies, enlarges its receptive field, and effectively extracts the attention module's multi-scale information at every layer; then reuse the resolution-recovery sub-module four times; and use short connections between modules for better backpropagation and feature fusion.
In a preferred embodiment, step S5 includes the following steps. Step S51: after a large number of patients are processed by the preceding steps, the resulting augmented images are input into the deep learning network; during input, the lung region obtained in steps S1 to S3 limits the number of CT slices input per patient, reducing input of non-pulmonary regions. Step S52: input the augmented images into the network in random groups until the evaluation metric on the validation set no longer fluctuates significantly, and save the model that performs best on the validation set. Step S53: process the cases in the test set according to steps S1 to S3 and input them into the trained deep learning segmentation network to obtain N partitions; convert the resulting feature maps of the N partitions into segmentation semantic probability maps with the softmax function, then binarize the probability maps with a fixed threshold. Step S54: evaluate the mutual relationships of the N partitions to obtain a relationship table and correct each partition: if a partition does not meet the delineation standard defined by the doctor, it is processed by a correction procedure; if a partition lacks the relationships with other partitions recorded in the relationship table, it is likewise processed by the correction procedure; once all N partitions satisfy the clinician's delineation criteria, the final segmentation result of the mediastinal lymphatic drainage area is obtained.
Compared with the prior art, the deep-learning-based automatic delineation method for the mediastinal lymphatic drainage region of the present invention has the following beneficial effects. By introducing a multi-scale non-local attention mechanism, the network can better locate and segment small drainage sub-regions, and can better capture long-range anatomical structure information, alleviating under-segmentation and over-segmentation. The mediastinal lymphatic drainage region segmentation model helps physicians delineate target volumes and lymph nodes more accurately, and provides a basis for confirming clinical staging and formulating treatment plans, which greatly reduces the physicians' workload and can improve patient survival rates.
FIG. 1 is a schematic flowchart of the automatic delineation method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of the deep learning network structure used by the automatic delineation method according to an embodiment of the present invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings, but it should be understood that the scope of protection of the present invention is not limited by these specific embodiments.
Unless expressly stated otherwise, throughout the specification and claims the term "comprising" and its variants such as "comprises" or "including" will be understood to include the stated elements or components without excluding other elements or components.
As shown in FIG. 1, an automatic delineation method for the mediastinal lymphatic drainage region based on a deep learning network according to a preferred embodiment of the present invention is applicable to CT images and includes the following steps. Step S1: collect CT image data and images of the mediastinal lymphatic drainage region manually annotated by physicians, and preprocess both. Step S2: group the preprocessed CT image data into a training set, a validation set, and a test set. Step S3: perform data augmentation on the training set, the validation set, and the test set. Step S4: build the deep learning segmentation model. Step S5: feed the CT image data in the training set and the corresponding physician-annotated images of the mediastinal lymphatic drainage region into the constructed deep learning segmentation model; after the training iterations converge, save the segmentation model of the mediastinal lymphatic drainage region, then perform identification and prediction to obtain a probability map for each partition of the mediastinal lymphatic drainage region.
In some embodiments, preprocessing the CT image data and the physician-annotated images of the mediastinal lymphatic drainage region in step S1 includes the following steps. Step S11: collect a large number of multi-modal, multi-distribution three-dimensional CT images and the corresponding contours manually delineated by clinicians. Step S12: resample the three-dimensional CT images and the physician-annotated images of the mediastinal lymphatic drainage region to generate images with the same physical scale. Step S13: obtain the three-dimensional lung region and the mediastinal position, and crop the three-dimensional CT image to a fixed size according to them. Step S14: normalize the pixel values of the two-dimensional CT images and generate multi-distribution CT inputs for the segmentation network according to the lung window and the mediastinal window.
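The lung localisation and fixed-size crop of step S13 can be sketched roughly as follows in NumPy. This is an illustrative sketch, not the patent's implementation: the HU threshold of -320 and the output size are assumed values.

```python
import numpy as np

def crop_to_lung_bbox(volume, hu_threshold=-320, size=(64, 192, 192)):
    """Threshold air-filled voxels to locate the lungs, then crop the CT
    volume to a fixed size centred on the lung bounding box, padding with
    the volume minimum where the crop window leaves the volume."""
    lung = volume < hu_threshold                  # air-filled voxels
    idx = np.argwhere(lung)
    if idx.size:
        centre = (idx.min(axis=0) + idx.max(axis=0)) // 2
    else:                                         # no lung found: fall back to volume centre
        centre = np.array(volume.shape) // 2
    out = np.full(size, volume.min(), dtype=volume.dtype)
    src, dst = [], []
    for c, s, dim in zip(centre, size, volume.shape):
        start = int(c) - s // 2
        lo, hi = max(start, 0), min(start + s, dim)
        src.append(slice(lo, hi))
        dst.append(slice(lo - start, lo - start + (hi - lo)))
    out[tuple(dst)] = volume[tuple(src)]
    return out
```

The bounding-box centre keeps the mediastinum (between the two lungs) inside the fixed-size crop, which is what restricts the network input to the relevant slices.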
The normalization is computed as:

lower = c - w/2
higher = c + w/2
x[x < lower] = 0
x[x > higher] = higher
x = (x - lower) / (higher - lower)

where x is the CT pixel matrix, c is the window level, and w is the window width.
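The window normalization above maps directly to NumPy. The window level and width used in the check below (c = 40, w = 400, a common mediastinal window) are illustrative values, not figures taken from the patent.

```python
import numpy as np

def window_normalize(x, c, w):
    """Clip a CT pixel matrix x to the window [c - w/2, c + w/2] and
    rescale it, following the normalization formula in the text.
    x: CT pixel matrix (HU), c: window level, w: window width."""
    x = x.astype(np.float64)
    lower = c - w / 2
    higher = c + w / 2
    x[x < lower] = 0          # as written in the source (a common variant clips to `lower` instead)
    x[x > higher] = higher
    return (x - lower) / (higher - lower)
```

For c = 40, w = 400 the window is [-160, 240], so an input of [-1000, 0, 40, 240, 500] normalizes to [0.4, 0.4, 0.5, 1.0, 1.0].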
In some embodiments, the data augmentation in step S3 includes random flipping, random rotation, random distortion, random noise, random affine transformation, and random cropping.
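A minimal sketch of how a few of these augmentations might be applied jointly to an image slice and its label mask; distortion, affine transforms and cropping are omitted, and the probabilities and noise scale are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility of the sketch

def augment(image, label):
    """Apply the same random geometric transforms to an image slice and
    its label mask; intensity noise is applied to the image only."""
    if rng.random() < 0.5:                       # random horizontal flip
        image, label = image[:, ::-1], label[:, ::-1]
    k = int(rng.integers(0, 4))                  # random 90-degree rotation
    image, label = np.rot90(image, k), np.rot90(label, k)
    image = image + rng.normal(0.0, 0.01, image.shape)  # random Gaussian noise
    return image, label
```

The key design point is that geometric transforms must be applied identically to the image and its segmentation label, while noise must not corrupt the label.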
In some embodiments, step S4 includes the following steps. Step S41: build the network-structure sub-modules of the segmentation model: first, two convolution operations and one downsampling operation to extract the feature map of the module; second, one upsampling operation and two convolution operations to restore the original resolution, with skip connections fusing feature maps at different scales. Step S42: build the attention module of the segmentation model: pyramid-downsample the keys and values in the attention module to reduce computation and obtain multi-scale keys and values, then build convolution operations to model the attention relationship between keys and queries, and finally query the attended feature map under that attention relationship; the attention module captures long-range pixel dependencies and extracts multi-scale pyramid features. Step S43: build the overall network structure of the segmentation model: reuse the feature-extraction sub-module of step S41 four times to obtain a large receptive field and sufficient network capacity, and insert the attention module of step S42 into each feature-extraction sub-module so that the network extracts long-range dependencies and enlarges its receptive field, while the multi-scale information captured by the attention module is effectively exploited at every layer. The sub-module that recovers spatial resolution is then reused four times, and short connections are used between modules for better backpropagation and feature fusion.
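A toy NumPy sketch of the attention idea in step S42, with keys and values average-pooled at several scales and concatenated before attention. The patent's own similarity functions and convolutional projections are not reproduced in the source text, so standard scaled dot-product attention is used here as a stand-in; the scales (1, 2, 4) are an assumption.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def avg_pool(x, s):
    """Average-pool an (N, C) token sequence by factor s along N."""
    n = (x.shape[0] // s) * s
    return x[:n].reshape(-1, s, x.shape[1]).mean(axis=1)

def pyramid_nonlocal_attention(q, k, v, scales=(1, 2, 4)):
    """Non-local attention in which K and V are pooled at several scales
    and concatenated, so each query attends over a multi-scale pyramid of
    the feature map at reduced cost."""
    k_pyr = np.concatenate([avg_pool(k, s) for s in scales], axis=0)
    v_pyr = np.concatenate([avg_pool(v, s) for s in scales], axis=0)
    attn = softmax(q @ k_pyr.T / np.sqrt(q.shape[1]))   # (Nq, Nk')
    return attn @ v_pyr
```

Pooling K and V shrinks the attention matrix from Nq x N to Nq x N', which is why the text describes the pyramid downsampling as reducing a large amount of calculation while still letting every query see distant pixels.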
In some embodiments, step S5 includes the following steps. Step S51: after a large number of patient scans have been processed by the preceding steps, the resulting data-augmented images are fed into the deep learning network; during input, the lung region obtained in steps S1 to S3 restricts the number of CT slices per patient, reducing the amount of non-lung input.
The training error is calculated as:

L_loss = L_IOU + a * L_AC,

where a is the balance factor. In L_IOU, N is the total amount of data, p_i denotes the i-th pixel of the prediction image, and q_i denotes the i-th pixel of the gold-standard image. In L_AC, p_ij denotes the pixel in row i and column j of the prediction image, and n is the total number of pixels.
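The exact expressions for L_IOU and L_AC appear only as figures in the source, so the sketch below uses a common soft-IoU loss and a gradient-magnitude (active-contour-style) regulariser as hedged stand-ins, combined with the stated balance factor a.

```python
import numpy as np

def soft_iou_loss(p, q, eps=1e-7):
    """1 - soft IoU between predicted probabilities p and gold mask q
    (a common realisation of an IoU loss over pixels p_i, q_i; the
    patent's exact formula is not reproduced in the source text)."""
    inter = (p * q).sum()
    union = p.sum() + q.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

def length_term(p):
    """Mean gradient magnitude of the probability map over pixels p_ij --
    a typical active-contour-style boundary regulariser, assumed here."""
    dy = np.abs(np.diff(p, axis=0)).mean()
    dx = np.abs(np.diff(p, axis=1)).mean()
    return dx + dy

def total_loss(p, q, a=0.5):
    """L_loss = L_IOU + a * L_AC with a as the balance factor."""
    return soft_iou_loss(p, q) + a * length_term(p)
```

A perfect, constant prediction drives both terms to zero, while a completely wrong prediction pushes the IoU term toward 1, which matches the intended behaviour of an overlap loss.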
Step S52: the data-augmented images are fed into the network in random groups until the evaluation metric on the validation set no longer fluctuates significantly, and the model that performs best on the validation set is saved. In the evaluation metric, N is the total amount of data, p_i denotes the i-th pixel of the prediction image, and q_i denotes the i-th pixel of the gold-standard image.
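The validation metric's formula likewise appears only as a figure in the source; the Dice coefficient over p_i and q_i, the usual choice for validating segmentation models, is assumed here as a stand-in.

```python
import numpy as np

def dice_coefficient(p, q, eps=1e-7):
    """Dice similarity between a binary prediction p and gold mask q,
    computed over pixels p_i, q_i (assumed metric; the patent shows the
    formula only as an image)."""
    inter = 2.0 * (p * q).sum()
    return (inter + eps) / (p.sum() + q.sum() + eps)
```

The metric is 1 for identical masks and near 0 for disjoint ones; training stops once it plateaus on the validation set.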
Step S53: the cases in the test set are processed according to steps S1 to S3 and fed into the trained deep learning segmentation network to obtain N partitions; the softmax function converts the resulting feature maps of the N partitions into semantic segmentation probability maps, and a fixed threshold then binarizes each probability map. Step S54: the mutual relationships among the N partitions are evaluated to obtain a relationship table, and each partition is corrected: if a partition does not meet the delineation criteria defined by the physician, it is processed by the correction procedure; if a partition lacks the relationships with other partitions recorded in the relationship table, it is likewise processed by the correction procedure; this continues until all N partitions satisfy the clinician's delineation criteria, yielding the final segmentation of the mediastinal lymphatic drainage region.
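The softmax-plus-fixed-threshold conversion of step S53 can be sketched as follows; the threshold of 0.5 is an illustrative choice, as the patent does not state the value.

```python
import numpy as np

def partitions_to_masks(logits, threshold=0.5):
    """Turn per-partition feature maps of shape (N, H, W) into binary masks:
    softmax across the N partition channels yields semantic probability
    maps, and a fixed threshold then binarises each map."""
    z = logits - logits.max(axis=0, keepdims=True)   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return (probs > threshold).astype(np.uint8)
```

With a threshold above 0.5, at most one partition can claim any given pixel, which keeps the N binary maps mutually exclusive before the relationship-table correction of step S54.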
In summary, the deep-learning-based automatic delineation method for the mediastinal lymphatic drainage region of the present invention has the following advantages: by introducing a multi-scale non-local attention mechanism, the network can better locate and segment small drainage sub-regions and better capture long-range anatomical structure information, alleviating under-segmentation and over-segmentation; the segmentation model helps physicians delineate target volumes and lymph nodes more accurately and provides a basis for confirming clinical staging and formulating treatment plans, greatly reducing the physicians' workload while improving patient survival rates.
The foregoing descriptions of specific exemplary embodiments of the present invention are for purposes of illustration and description only. They are not intended to limit the invention to the precise forms disclosed, and obviously many changes and variations are possible in light of the above teachings. The exemplary embodiments were chosen and described to explain certain principles of the invention and their practical application, thereby enabling those skilled in the art to make and use various exemplary embodiments of the invention as well as various alternatives and modifications thereof. The scope of the invention is intended to be defined by the claims and their equivalents.
Claims (3)
- An automatic delineation method for a mediastinal lymphatic drainage region based on a deep learning network, applicable to CT images, characterized in that the automatic delineation method comprises the following steps:
Step S1: collecting CT image data and images of the mediastinal lymphatic drainage region manually annotated by physicians, and preprocessing the CT image data and the physician-annotated images of the mediastinal lymphatic drainage region;
Step S2: grouping the preprocessed CT image data to obtain a training set, a validation set, and a test set;
Step S3: performing data augmentation on the training set, the validation set, and the test set;
Step S4: building a deep learning segmentation model, wherein step S4 comprises the following steps:
Step S41: building the network-structure sub-modules of the segmentation model, including: first, two convolution operations and one downsampling operation, used to extract the feature map of the module; and second, one upsampling operation and two convolution operations, used to restore the original resolution, with skip connections fusing feature maps at different scales; wherein the down module of the network downsamples by trilinear interpolation and the up module upsamples by a dilated deconvolution module;
Step S42: building the attention module of the segmentation model, including: pyramid-downsampling the keys and values in the attention module to reduce computation and obtain multi-scale keys and values, then building convolution operations to model the attention relationship between keys and queries, and finally querying the attended feature map under that attention relationship, the attention module capturing long-range pixel dependencies and extracting multi-scale pyramid features; the matrices involved in the calculation are Q (Query), K (Key), and V (Value); to accelerate the image attention mechanism, the Q and V values undergo multi-scale downsampling operations; to accelerate convergence, the Q and K values undergo convolution operations before the similarity is calculated; the similarity calculation functions differ from the standard form and are defined as follows;
Step S43: building the overall network structure of the segmentation model: reusing the network-structure sub-module of step S41 four times so as to obtain a large receptive field and sufficient network capacity; inserting the attention module of step S42 into each feature-extraction sub-module so that the network extracts long-range dependencies and enlarges its receptive field, while the multi-scale information captured by the attention module is effectively exploited at every layer; then reusing the sub-module that recovers spatial resolution four times; and using short connections between modules for better backpropagation and feature fusion; and
Step S5: feeding the CT image data in the training set and the physician-annotated images of the mediastinal lymphatic drainage region into the constructed deep learning segmentation model; after the training iterations converge, saving the segmentation model of the mediastinal lymphatic drainage region, then performing identification and prediction of the mediastinal lymphatic drainage region to obtain a probability map for each partition thereof, wherein step S5 comprises the following steps:
Step S51: after a large number of patient scans have been processed by the preceding steps, feeding the resulting data-augmented images into the deep learning network, wherein during input the lung region obtained in steps S1 to S3 restricts the number of CT slices per patient, reducing the amount of non-lung input, the training error of the training set being calculated as
L_loss = L_IOU + a * L_AC, where a is the balance factor,
wherein, in L_IOU, N is the total amount of data, p_i denotes the i-th pixel of the prediction image and q_i denotes the i-th pixel of the gold-standard image, and, in L_AC, p_ij denotes the pixel in row i and column j of the prediction image and n is the total number of pixels;
Step S52: feeding the data-augmented images into the network in random groups until the evaluation metric on the validation set no longer fluctuates significantly, and saving the model that performs best on the validation set, wherein in the evaluation metric N is the total amount of data, p_i denotes the i-th pixel of the prediction image and q_i denotes the i-th pixel of the gold-standard image;
Step S53: processing the cases in the test set according to steps S1 to S3 and feeding them into the trained deep learning segmentation network to obtain N partitions, using the softmax function to convert the resulting feature maps of the N partitions into semantic segmentation probability maps, and then using a fixed threshold to binarize the probability maps; and
Step S54: evaluating the mutual relationships among the N partitions to obtain a relationship table and correcting each partition: if a partition does not meet the delineation criteria defined by the physician, it is processed by the correction procedure; if a partition lacks the relationships with other partitions recorded in the relationship table, it is likewise processed by the correction procedure; until all N partitions satisfy the clinician's delineation criteria and the final segmentation result of the mediastinal lymphatic drainage region is obtained.
- The automatic delineation method for a mediastinal lymphatic drainage region based on a deep learning network according to claim 1, characterized in that preprocessing the CT image data and the physician-annotated images of the mediastinal lymphatic drainage region in step S1 comprises the following steps:
Step S11: collecting a large number of multi-modal, multi-distribution three-dimensional CT images and the corresponding contours manually delineated by clinicians;
Step S12: resampling the three-dimensional CT images and the physician-annotated images of the mediastinal lymphatic drainage region to generate images with the same physical scale;
Step S13: after obtaining the three-dimensional lung region, segmenting the body region with a threshold method, then obtaining the mediastinal region by morphological operations based on the lung region and the body region, and cropping the three-dimensional CT image to a fixed size using the mediastinal region; and
Step S14: normalizing the pixel values of the two-dimensional CT images, and generating multi-distribution CT image inputs for the segmentation network according to the lung window and the mediastinal window.
- The automatic delineation method for a mediastinal lymphatic drainage region based on a deep learning network according to claim 1, characterized in that the data augmentation in step S3 comprises: random flipping, random rotation, random distortion, random noise, random affine transformation, and random cropping.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110143272.7 | 2021-02-02 | ||
CN202110143272.7A CN112950651B (en) | 2021-02-02 | 2021-02-02 | Automatic delineation method of mediastinal lymph drainage area based on deep learning network |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022166800A1 true WO2022166800A1 (en) | 2022-08-11 |
Family
ID=76241576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/074510 WO2022166800A1 (en) | 2021-02-02 | 2022-01-28 | Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112950651B (en) |
WO (1) | WO2022166800A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115239716A (en) * | 2022-09-22 | 2022-10-25 | 杭州影想未来科技有限公司 | Medical image segmentation method based on shape prior U-Net |
CN115810419A (en) * | 2023-02-08 | 2023-03-17 | 深圳市汇健智慧医疗有限公司 | Operation management method, device, equipment and storage medium for intelligent operating room |
CN116344001A (en) * | 2023-03-10 | 2023-06-27 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
CN116664590A (en) * | 2023-08-02 | 2023-08-29 | 中日友好医院(中日友好临床医学研究所) | Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image |
CN117011245A (en) * | 2023-07-11 | 2023-11-07 | 北京医智影科技有限公司 | Automatic sketching method and device for rectal cancer tumor area fusing MR information to guide CT |
CN117476219A (en) * | 2023-12-27 | 2024-01-30 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950651B (en) * | 2021-02-02 | 2022-02-01 | 广州柏视医疗科技有限公司 | Automatic delineation method of mediastinal lymph drainage area based on deep learning network |
CN113139627B (en) * | 2021-06-22 | 2021-11-05 | 北京小白世纪网络科技有限公司 | Mediastinal lump identification method, system and device |
CN113288193B (en) * | 2021-07-08 | 2022-04-01 | 广州柏视医疗科技有限公司 | Automatic delineation system of CT image breast cancer clinical target area based on deep learning |
CN113539402B (en) * | 2021-07-14 | 2022-04-01 | 广州柏视医疗科技有限公司 | Multi-mode image automatic sketching model migration method |
CN113488146B (en) * | 2021-07-29 | 2022-04-01 | 广州柏视医疗科技有限公司 | Automatic delineation method for drainage area and metastatic lymph node of head and neck nasopharyngeal carcinoma |
CN114549413B (en) * | 2022-01-19 | 2023-02-03 | 华东师范大学 | Multi-scale fusion full convolution network lymph node metastasis detection method based on CT image |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020124208A1 (en) * | 2018-12-21 | 2020-06-25 | Nova Scotia Health Authority | Systems and methods for generating cancer prediction maps from multiparametric magnetic resonance images using deep learning |
WO2020154562A1 (en) * | 2019-01-24 | 2020-07-30 | Caide Systems, Inc. | Method and system for automatic multiple lesion annotation of medical images |
CN111798464A (en) * | 2020-06-30 | 2020-10-20 | 天津深析智能科技有限公司 | Lymphoma pathological image intelligent identification method based on deep learning |
CN112950651A (en) * | 2021-02-02 | 2021-06-11 | 广州柏视医疗科技有限公司 | Automatic delineation method of mediastinal lymph drainage area based on deep learning network |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109949352A (en) * | 2019-03-22 | 2019-06-28 | 邃蓝智能科技(上海)有限公司 | A kind of radiotherapy image Target delineations method based on deep learning and delineate system |
CN110675406A (en) * | 2019-09-16 | 2020-01-10 | 南京信息工程大学 | CT image kidney segmentation algorithm based on residual double-attention depth network |
CN111445481A (en) * | 2020-03-23 | 2020-07-24 | 江南大学 | Abdominal CT multi-organ segmentation method based on scale fusion |
CN111784628B (en) * | 2020-05-11 | 2024-03-29 | 北京工业大学 | End-to-end colorectal polyp image segmentation method based on effective learning |
CN112075927B (en) * | 2020-10-15 | 2024-05-14 | 首都医科大学附属北京天坛医院 | Etiology classification method and device for cerebral apoplexy |
- 2021-02-02: CN application CN202110143272.7A filed; patent CN112950651B granted (active)
- 2022-01-28: WO application PCT/CN2022/074510 filed as WO2022166800A1 (application filing)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115239716A (en) * | 2022-09-22 | 2022-10-25 | 杭州影想未来科技有限公司 | Medical image segmentation method based on shape prior U-Net |
CN115810419A (en) * | 2023-02-08 | 2023-03-17 | 深圳市汇健智慧医疗有限公司 | Operation management method, device, equipment and storage medium for intelligent operating room |
CN115810419B (en) * | 2023-02-08 | 2023-04-18 | 深圳市汇健智慧医疗有限公司 | Operation management method, device, equipment and storage medium for intelligent operating room |
CN116344001A (en) * | 2023-03-10 | 2023-06-27 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
CN116344001B (en) * | 2023-03-10 | 2023-10-24 | 中南大学湘雅三医院 | Medical information visual management system and method based on artificial intelligence |
CN117011245A (en) * | 2023-07-11 | 2023-11-07 | 北京医智影科技有限公司 | Automatic sketching method and device for rectal cancer tumor area fusing MR information to guide CT |
CN117011245B (en) * | 2023-07-11 | 2024-03-26 | 北京医智影科技有限公司 | Automatic sketching method and device for rectal cancer tumor area fusing MR information to guide CT |
CN116664590A (en) * | 2023-08-02 | 2023-08-29 | 中日友好医院(中日友好临床医学研究所) | Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image |
CN116664590B (en) * | 2023-08-02 | 2023-10-13 | 中日友好医院(中日友好临床医学研究所) | Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image |
CN117476219A (en) * | 2023-12-27 | 2024-01-30 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis |
CN117476219B (en) * | 2023-12-27 | 2024-03-12 | 四川省肿瘤医院 | Auxiliary method and auxiliary system for positioning CT (computed tomography) tomographic image based on big data analysis |
Also Published As
Publication number | Publication date |
---|---|
CN112950651A (en) | 2021-06-11 |
CN112950651B (en) | 2022-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022166800A1 (en) | Deep learning network-based automatic delineation method for mediastinal lymphatic drainage region | |
CN111476292B (en) | Small sample element learning training method for medical image classification processing artificial intelligence | |
CN108776969B (en) | Breast ultrasound image tumor segmentation method based on full convolution network | |
CN109559320B (en) | Method and system for realizing visual SLAM semantic mapping function based on hole convolution deep neural network | |
Bi et al. | Automatic liver lesion detection using cascaded deep residual networks | |
CN113516659B (en) | Medical image automatic segmentation method based on deep learning | |
Tang et al. | A multi-stage framework with context information fusion structure for skin lesion segmentation | |
CN107871325B (en) | Image non-rigid registration method based on Log-Euclidean covariance matrix descriptor | |
RU2689029C2 (en) | System and method for auto-contouring in adaptive radiotherapy | |
WO2020015752A1 (en) | Object attribute identification method, apparatus and system, and computing device | |
CN111291825B (en) | Focus classification model training method, apparatus, computer device and storage medium | |
CN113728335A (en) | Method and system for classification and visualization of 3D images | |
CN112862824A (en) | Novel coronavirus pneumonia focus detection method, system, device and storage medium | |
WO2021051868A1 (en) | Target location method and apparatus, computer device, computer storage medium | |
CN110570394B (en) | Medical image segmentation method, device, equipment and storage medium | |
CN111754453A (en) | Pulmonary tuberculosis detection method and system based on chest radiography image and storage medium | |
Shu et al. | LVC-Net: Medical image segmentation with noisy label based on local visual cues | |
CN111080658A (en) | Cervical MRI image segmentation method based on deformable registration and DCNN | |
CN112102384A (en) | Non-rigid medical image registration method and system | |
CN114119635B (en) | Fatty liver CT image segmentation method based on cavity convolution | |
CN115018999A (en) | Multi-robot-cooperation dense point cloud map construction method and device | |
CN113705670A (en) | Brain image classification method and device based on magnetic resonance imaging and deep learning | |
CN113744209A (en) | Heart segmentation method based on multi-scale residual U-net network | |
CN117541652A (en) | Dynamic SLAM method based on depth LK optical flow method and D-PROSAC sampling strategy | |
CN107341189A (en) | A kind of indirect labor carries out the method and system of examination, classification and storage to image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22749079; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22749079; Country of ref document: EP; Kind code of ref document: A1 |