CN113706548B - Method for automatically segmenting anterior mediastinum focus of chest based on CT image - Google Patents

Method for automatically segmenting anterior mediastinum focus of chest based on CT image

Info

Publication number
CN113706548B
CN113706548B (application CN202010388777.5A)
Authority
CN
China
Prior art keywords
focus
image
segmentation
mediastinum
stage
Prior art date
Legal status (assumed, not a legal conclusion): Active
Application number
CN202010388777.5A
Other languages
Chinese (zh)
Other versions
CN113706548A (en
Inventor
马国林
李海梅
张冰
韩小伟
刘秀秀
Current Assignee
Beijing Kangxing Shunda Science And Trade Co ltd
Original Assignee
Beijing Kangxing Shunda Science And Trade Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kangxing Shunda Science And Trade Co ltd filed Critical Beijing Kangxing Shunda Science And Trade Co ltd
Priority to CN202010388777.5A priority Critical patent/CN113706548B/en
Publication of CN113706548A publication Critical patent/CN113706548A/en
Application granted granted Critical
Publication of CN113706548B publication Critical patent/CN113706548B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/11 Region-based segmentation
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T7/0012 Biomedical image inspection
    • G06T7/12 Edge-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation involving region growing, region merging or connected component labelling
    • G16H30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H50/20 ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30 ICT for calculating health indices; for individual health risk assessment
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/30061 Lung
    • G06T2207/30096 Tumor; Lesion
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The method for automatically segmenting anterior mediastinal lesions of the chest based on CT images produces a good segmentation result, supports subsequent quantitative feature analysis by deep learning, and provides a methodological foundation for research on automatic segmentation of anterior mediastinal lesions. The method comprises the following steps: (1) CT image acquisition: perform contrast-enhanced chest CT scans on all patients and obtain thin-slice images reconstructed at the mediastinal window; (2) exploiting the natural density difference between the lungs and surrounding tissue, design a dual-lung mask file on the original CT image by computing lung-tissue pixel values, removing the regions outside the mediastinum; (3) perform preliminary segmentation of the lesion with a V_Net network; (4) finely segment the lesion with the Morphological Snakes algorithm; and (5) evaluate the segmentation result.

Description

Method for automatically segmenting anterior mediastinum focus of chest based on CT image
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a method for automatically segmenting anterior mediastinal lesions of the chest based on CT images.
Background
Preoperative imaging of anterior mediastinal lesions of the chest aims to make a preliminary assessment of tumor malignancy, from which histopathological changes can be inferred and risk assessment performed, thereby supporting the choice of preoperative treatment and the judgment of clinical prognosis. CT assessment is based on the lesion itself and its relationship to adjacent tissue structures, and usually relies on empirical, observational rather than quantitative indicators. Radiomics is a computerized quantitative image-analysis method that can extract high-throughput features from medical images; however, traditional radiomics segments lesions by manual delineation, and elaborate delineation standards must be designed to keep the manual results as accurate as possible. At present, most tumors are still identified morphologically and segmented by manual recognition and delineation; this process is time-consuming, poorly automated, and easily affected by human factors. How to automatically identify and ideally segment a lesion is therefore one of the important research topics in medical image segmentation.
Automatic segmentation of anterior mediastinal lesions on CT images can raise the degree of automation of the radiomics workflow, improve the standardization and accuracy of lesion segmentation, and benefit both the accurate extraction of downstream image features and the classification performance of prediction models. In addition, segmentation of malignant lesions is an important front-end step routinely performed in radiotherapy; accurate automatic segmentation can speed up pre-radiotherapy lesion delineation and standardize the process, supporting accurate dose estimation in radiotherapy planning and reliable evaluation of treatment efficacy.
Existing research shows that, among automatic medical image segmentation techniques, deep-learning-based lesion segmentation has clear advantages over traditional methods, and deep convolutional neural network architectures markedly improve both the recognition of image regions of interest and the accuracy of lesion segmentation. CNNs and their derived network architectures have therefore become the algorithms of choice in medical lesion segmentation. The main CNN-derived networks are FCN, U-Net and V-Net. V_Net is a fully convolutional neural network for 3D medical image segmentation built on improvements to FCN and U-Net: it uses volumetric convolutions and a novel Dice-coefficient-based objective function for training and optimization, and it can augment data with nonlinear transformations and histogram matching, which suits the small data volumes and strong interpretability requirements of medical imaging. However, because the incidence of anterior mediastinal lesions is relatively low, studies on their automatic segmentation from CT images are scarce, and accurate segmentation of anterior mediastinal lesions remains a major challenge.
Disclosure of Invention
To overcome the shortcomings of the prior art, the technical problem solved by the invention is to provide a method for automatically segmenting anterior mediastinal lesions of the chest based on CT images that yields a good fine-segmentation result, supports subsequent quantitative feature analysis by deep learning, and provides a methodological foundation for research on automatic segmentation of anterior mediastinal lesions.
The technical scheme of the invention is as follows. The method for automatically segmenting anterior mediastinal lesions of the chest based on CT images comprises the following steps:
(1) CT image acquisition: perform contrast-enhanced CT scans on all patients and obtain thin-slice images reconstructed at the chest mediastinal window;
(2) exploiting the natural density difference between the lungs and surrounding tissue, design a dual-lung mask file on the original CT image by computing lung-tissue pixel values, remove the regions outside the mediastinum and retain the lesion;
(3) perform preliminary segmentation of the lesion with a V_Net network;
(4) finely segment the lesion with the Morphological Snakes algorithm;
(5) evaluate the segmentation result.
In this method, a dual-lung tissue mask file designed from lung-tissue pixel values computed on the original CT image first removes all regions outside the mediastinum, so that only the mediastinal region containing the lesion enters the V_Net network for preliminary segmentation; this greatly reduces the complexity of the input image and improves segmentation precision. After the preliminary segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical morphology operators and optimizes the segmentation edge, making the final boundary smoother and more accurate and the segmentation efficient and stable. The fine-segmentation result is therefore good, satisfies subsequent quantitative feature analysis by deep learning, and provides a methodological foundation for research on automatic segmentation of anterior mediastinal lesions.
Drawings
Fig. 1 shows a schematic diagram of a v_net network architecture.
Fig. 2 shows Dice coefficient curves for automatic lesion segmentation. Panel (A) shows the Dice coefficient curve for patients in the training set; panel (B) shows the curve for patients in the validation set.
Fig. 3 shows a flow chart of the method for automatic segmentation of anterior mediastinal lesions of the chest based on CT images according to the present invention.
Detailed Description
As shown in fig. 3, the method for automatically segmenting anterior mediastinal lesions of the chest based on CT images comprises the following steps:
(1) CT image acquisition: all patients undergo contrast-enhanced chest CT scanning, and thin-slice images reconstructed at the chest mediastinal window are acquired;
(2) exploiting the natural density difference between the lungs and surrounding tissue, a dual-lung mask file is designed on the original CT image by computing lung-tissue pixel values, removing the regions outside the mediastinum and retaining the lesion;
(3) the lesion is preliminarily segmented with a V_Net network (Initial Segmentation);
(4) the lesion is finely segmented with the Morphological Snakes algorithm (Accurate Segmentation);
(5) the segmentation result is evaluated.
In this method, a dual-lung tissue mask file designed from lung-tissue pixel values computed on the original CT image first removes all regions outside the mediastinum, so that only the mediastinal region containing the lesion enters the V_Net network for preliminary segmentation; this greatly reduces the complexity of the input image and improves segmentation precision. After the preliminary segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical morphology operators and optimizes the segmentation edge, making the final boundary smoother and more accurate and the segmentation efficient and stable. The fine-segmentation result is therefore good, satisfies subsequent quantitative feature analysis by deep learning, and provides a methodological foundation for research on automatic segmentation of anterior mediastinal lesions.
Preferably, in step (1), CT scans of the first task group were acquired on 16-row MDCT (Canon Aquilion RXL, Tokyo, Japan), 320-row MDCT (Canon Aquilion One, Tokyo, Japan) and 256-row MDCT (GE Revolution, Massachusetts, USA), and CT scans of the second task group on dual-source CT (Siemens Somatom Definition, Forchheim, Germany), 128-row MDCT (Siemens Somatom Perspective, Forchheim, Germany) and second-generation dual-source CT (Siemens Somatom Flash, Forchheim, Germany). All patients were scanned with automatic tube-voltage and tube-current modulation; the contrast agent (iopromide injection 370 mgI/ml or ioversol injection 320 mgI/ml, 65-80 ml) was injected through an antecubital vein at 3 ml/s, and the enhanced scan was performed 40 s after injection. All patients lay supine with both hands holding the head, and images were acquired after the patient was instructed to inhale and hold breath. Images were reconstructed at a mediastinal window (width 400-450 HU, level 20-50 HU) and a lung window (width 1000-1500 HU, level -650 to -450 HU); 0.5-1.25 mm thin-slice images reconstructed at the mediastinal window were acquired.
Preferably, in step (2), a mask file holding a two-dimensional matrix of the dual-lung tissue is first designed by computing an individualized lung-tissue threshold and searching for the largest connected regions; the mask is point-multiplied with the original image to be processed, the same algorithm is applied slice by slice to generate a mediastinal 3D image, the thoracic peripheral tissue structures outside the mediastinum are removed from the image, and only the mediastinal region containing the lesion is retained.
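The masking step above can be sketched per slice as follows. This is an illustrative reconstruction, not the patented implementation: the threshold value, the use of the two largest connected components as the lungs, and the between-lungs definition of the mediastinal region are all assumptions.

```python
# Illustrative sketch: build a dual-lung mask by thresholding lung-density
# voxels and keeping the largest connected components, then blank everything
# outside the region between the lungs (an approximation of the mediastinum).
import numpy as np
from scipy import ndimage

def mediastinum_slice(ct_slice, lung_hu=-320):
    """ct_slice: 2D array of HU values. Returns the slice with non-mediastinal
    regions suppressed, assuming lung voxels fall below `lung_hu` (assumed)."""
    lung = ct_slice < lung_hu                       # candidate lung/air voxels
    lung = ndimage.binary_fill_holes(lung)
    labels, n = ndimage.label(lung)
    if n == 0:
        return ct_slice.copy()
    # keep the two largest components (nominally the left and right lung)
    sizes = ndimage.sum(lung, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1
    lungs = np.isin(labels, keep)
    # mediastinum ~ region spanned between the lungs on each row
    mask = np.zeros_like(lungs)
    for r in np.where(np.any(lungs, axis=1))[0]:
        cols = np.where(lungs[r])[0]
        mask[r, cols.min():cols.max() + 1] = True
    mask &= ~lungs                                   # drop lung tissue itself
    return np.where(mask, ct_slice, ct_slice.min())
```

Looping this function over all slices yields the mediastinal 3D image described in the text.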
Preferably, in step (3), the images are preprocessed before entering the network, including normalization of the gray mean and variance; the upsampling of the network output uses bicubic interpolation to compensate the dimensional changes introduced by pooling; after the segmentation result output by V_Net is obtained, the selected lesion region is smoothed.
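The two preprocessing operations above can be sketched as below; the epsilon guard and the use of an order-3 spline as the bicubic stand-in are assumptions, not details from the patent.

```python
# Sketch of the preprocessing: z-score normalization of gray values, and
# order-3 spline (bicubic-style) upsampling to restore pooled dimensions.
import numpy as np
from scipy.ndimage import zoom

def normalize(img):
    """Zero-mean, unit-variance normalization of gray values."""
    return (img - img.mean()) / (img.std() + 1e-8)   # epsilon guard (assumed)

def upsample(prob_map, factor=2):
    """Cubic-spline interpolation as a stand-in for bicubic upsampling."""
    return zoom(prob_map, factor, order=3)
```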
Preferably, in step (3), the network is trained end to end. The input image size is 256×256×32 with 1 channel. The encoder is divided into four stages, each containing 1 to 3 convolutional layers; every convolution uses 3×3×3 kernels with stride 1, and each stage is downsampled by a 2×2×2 convolution kernel with stride 2. The network sums the input and output of each stage to learn a residual function. After the first stage the feature map dimension becomes 16×256×256×32 (channels × image height × image width × image depth); after the second stage, 32×128×128×16; after the third stage, 64×64×64×8; and after the fourth stage, 64×32×32×4. The decoding stages mirror the encoding stages, the upsampling path merges the features extracted by the corresponding encoder layers, and the last convolutional layer uses a 1×1×1 kernel so that the output image keeps the size of the original input; finally, softmax generates segmentation probability maps for foreground and background (fig. 1).
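The stage-by-stage dimensions above follow from halving the spatial size after each stage. A short calculation reproduces them (channel counts are taken from the text; this is a dimension check, not a network implementation):

```python
# Reproduce the encoder feature-map shapes: each stage's output is recorded,
# then H, W and D are halved by the stride-2 2x2x2 downsampling convolution.
def encoder_shapes(h=256, w=256, d=32, channels=(16, 32, 64, 64)):
    shapes = []
    for c in channels:
        shapes.append((c, h, w, d))        # feature map after this stage
        h, w, d = h // 2, w // 2, d // 2   # stride-2 downsampling
    return shapes
```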
Preferably, in step (3), V_Net is trained with volumetric convolutions and a novel Dice-coefficient-based objective function (the Dice loss), which is optimized during training. The Dice coefficient D is formula (1):

D = \frac{2\sum_{i=1}^{N} p_i g_i}{\sum_{i=1}^{N} p_i^2 + \sum_{i=1}^{N} g_i^2}    (1)

where the sums run over the N voxels, p_i \in P is the predicted binary segmentation and g_i \in G the gold-standard binary volume. The gradient of the Dice coefficient with respect to the j-th predicted voxel is formula (2):

\frac{\partial D}{\partial p_j} = 2\left[\frac{g_j\left(\sum_i p_i^2 + \sum_i g_i^2\right) - 2 p_j \sum_i p_i g_i}{\left(\sum_i p_i^2 + \sum_i g_i^2\right)^2}\right]    (2)
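The Dice objective above can be sketched as follows; the function names and the smoothing epsilon are illustrative, not from the patent.

```python
# Minimal soft-Dice coefficient and loss over flattened voxel arrays.
import numpy as np

def dice_coefficient(p, g, eps=1e-8):
    """p: predicted probabilities in [0, 1]; g: binary ground truth."""
    p, g = p.ravel(), g.ravel()
    return 2.0 * np.sum(p * g) / (np.sum(p**2) + np.sum(g**2) + eps)

def dice_loss(p, g):
    """Loss minimized during training: 1 - Dice."""
    return 1.0 - dice_coefficient(p, g)
```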
Preferably, in step (4), fine segmentation is performed on top of the preliminary segmentation with the Morphological Snakes algorithm, further improving accuracy. Let the image to be segmented be I; the pixel-wise level set u solved on I is represented by formula (3):

u(x) = \begin{cases} 1, & x \text{ inside the contour } C \\ 0, & \text{otherwise} \end{cases}    (3)

In the level-set framework, the conventional partial differential equation for curve evolution is formula (4):

\frac{\partial u}{\partial t} = F\,|\nabla u|    (4)

When F = \pm 1 and the evolution is discretized with mathematical morphology, the above equation is expressed as formula (5):

u_{n+1} = \begin{cases} D_d\,u_n, & F = 1 \\ E_d\,u_n, & F = -1 \end{cases}    (5)

where u denotes the level set, \nabla the gradient operator, div the divergence operator, \nu a constant, and g(\cdot) the boundary stopping function. The result after n evolutions of the contour boundary is formula (6):

u_n = (D_d)^n\,u_0    (6)

and the result after n contour evolutions is formula (7):

u_n = (E_d)^n\,u_0    (7)

The number of successive applications of the smoothing operator controls the strength of the smoothing step; this number is denoted by the parameter \mu. The final evolution after n iterations is expressed as formulas (8)-(10):

u_{n+1/3}(x) = \begin{cases} (D_d\,u_n)(x), & g(I)(x)\,\nu > 0 \\ (E_d\,u_n)(x), & g(I)(x)\,\nu < 0 \\ u_n(x), & \text{otherwise} \end{cases}    (8)

u_{n+2/3}(x) = \begin{cases} 1, & \nabla u_{n+1/3}(x) \cdot \nabla g(I)(x) > 0 \\ 0, & \nabla u_{n+1/3}(x) \cdot \nabla g(I)(x) < 0 \\ u_{n+1/3}(x), & \text{otherwise} \end{cases}    (9)

u_{n+1} = (SI \circ IS)^{\mu}\, u_{n+2/3}    (10)

where D_d is the dilation operator, E_d the erosion operator, and SI \circ IS the smoothing operator.
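The refinement step can be sketched with scikit-image's published implementation of the Morphological Snakes family; the iteration count, balloon force, and smoothing strength mu used below are illustrative, not values from the patent.

```python
# Sketch of lesion refinement via morphological geodesic active contours.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def refine_lesion(image, init_mask, iterations=50, mu=1, balloon=1):
    """Evolve the preliminary segmentation `init_mask` on `image`."""
    gimage = inverse_gaussian_gradient(image)   # boundary stopping function g
    return morphological_geodesic_active_contour(
        gimage, iterations, init_level_set=init_mask,
        smoothing=mu, balloon=balloon, threshold='auto')
```

Here `init_mask` would be the binary output of the V_Net stage; the returned array is the refined binary lesion mask.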
Preferably, in the step (5), three indexes of a Dice coefficient, an accuracy rate P, and a recall rate R are used to evaluate the segmentation result, where the Dice rate is defined as formula (11):
wherein A represents a segmentation result, B represents a manual sketching standard corresponding to the segmentation result, A and B are parts belonging to a focus in a segmentation diagram, and are correctly predicted parts, and the closer the Dice is to 1, the more accurate the segmentation result is;
the precision is defined as formula (12):
in the formula, TP represents the correctly segmented area of the focus, FP represents the segmentation of the non-focus area into focus areas, and P represents the percentage of the correctly segmented area in the segmentation result.
Recall is defined as equation (13):
where TP represents the correctly segmented area of the lesion, FN represents the incorrectly segmented area of the lesion, and R value represents the percentage of the correctly segmented area to the original lesion area.
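Formulas (11)-(13) can be computed from binary masks as follows; the function name is illustrative.

```python
# Compute Dice, precision and recall from an automatic mask A and a manual
# reference mask B, per formulas (11)-(13).
import numpy as np

def evaluate(a, b):
    a, b = a.astype(bool), b.astype(bool)
    tp = np.sum(a & b)       # correctly segmented lesion voxels
    fp = np.sum(a & ~b)      # non-lesion voxels segmented as lesion
    fn = np.sum(~a & b)      # lesion voxels missed by the segmentation
    dice = 2 * tp / (a.sum() + b.sum())
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```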
Dice coefficient curves after automatic segmentation of all lesions were drawn (fig. 2). In the training set, the patients' Dice coefficients ranged from a minimum of 0.623 to a maximum of 0.999, and 125 samples had Dice values above 0.95; in the validation set, they ranged from 0.624 to 0.998, and the Dice values of 78 samples (67.24%) lay in the range 0.85-0.95. The automatic segmentation results showed a Dice coefficient of 0.942±0.066, a precision of 0.915±0.083 and a recall of 0.907±0.091 in the training set, and a Dice coefficient of 0.911±0.051, a precision of 0.926±0.042 and a recall of 0.89±0.059 in the validation set.
Earlier segmentation methods based on inherent image features use only information that can be read directly from the image, such as gray level (pixel intensity), gradient or texture, and geometric features; typical implementations include gray-threshold segmentation, feature-cluster segmentation and region growing. These methods usually require complementary manual segmentation, tend to be time-consuming, and are of limited effectiveness on low-contrast images or images lacking distinct gradients and geometric features. By comparison, deep-learning convolutional neural networks, especially FCN and its derivatives, enable end-to-end model training for image segmentation, offer good segmentation quality, strong fault tolerance, efficient self-learning and adaptivity, and are well suited to medical image segmentation.
The FCN architecture currently popular for medical image segmentation is U-Net. Ronneberger et al. built an output path from corresponding encoding (downsampling) and decoding (upsampling) paths and combined this with skip connections, so that merging the higher-resolution features of the encoding path with the upsampled features of the decoding path better localizes and learns the feature representation of the input image. U-Net follows the idea of classifying the input picture as a whole rather than pixel by pixel, which reduces computation and progressively restores image resolution. V_Net, an improvement on FCN and U-Net, is a fully convolutional neural network for 3D medical image segmentation; it is trained with volumetric convolutions and a new Dice-coefficient-based objective function optimized during training, which better meets the needs of small image data sets and improved segmentation.
On a conventional chest CT image the scanning range runs from the lung apex to the top of the liver, involves multiple organs, and spans a wide density range. Because the dual-lung tissue contrasts naturally with the mediastinum and peripheral structures, before the preliminary segmentation a dual-lung tissue mask file is designed on the original CT image by computing lung-tissue pixel values to remove all regions outside the mediastinum, so that only the mediastinal region containing the lesion enters the V_Net network; this greatly reduces the complexity of the input image and helps improve segmentation precision. Second, after the preliminary segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical morphology operators and optimizes the segmentation edge, making the final boundary smoother and more accurate and the segmentation efficient and stable. The Dice coefficient curves after automatic segmentation show that in the training set 52.51% of samples had Dice values greater than 0.95; analysis of the results with Dice above 0.8 shows that the automatic segmentation agrees closely with the manually delineated lesion region, while for results with Dice below 0.8 the automatic segmentation of most lesions still substantially covers the manually delineated area, with small regions at the edges of the automatically segmented image exceeding the manual delineation in particular directions.
In the validation set, 67.24% of the patients' Dice coefficients fell in the range 0.85-0.95. These results indicate that the segmentation model works well and can support subsequent quantitative feature analysis with deep learning.
The present invention is not limited to the preferred embodiment described above; any modification, equivalent variation or change made according to the technical principles of the present invention is included in the protection scope of the invention.

Claims (5)

1. A method for automatically segmenting anterior mediastinal lesions of the chest based on CT images, comprising the following steps:
(1) CT image acquisition: perform contrast-enhanced CT scans on all patients and obtain thin-slice images reconstructed at the chest mediastinal window;
(2) exploiting the natural density difference between the lungs and surrounding tissue, design a dual-lung mask file on the original CT image by computing lung-tissue pixel values, remove the regions outside the mediastinum and retain the lesion;
(3) perform preliminary segmentation of the lesion with a V_Net network;
(4) finely segment the lesion with the Morphological Snakes algorithm;
(5) evaluate the segmentation result;
in step (3), the network is trained end to end; the input image size is 256×256×32 with 1 channel; the encoder is divided into four stages, each containing 1 to 3 convolutional layers; every convolution uses 3×3×3 kernels with stride 1; each stage is downsampled by a 2×2×2 convolution kernel with stride 2 and uses a PReLU nonlinear activation function; the network sums the input and output of each stage to learn a residual function; after the first stage the feature map dimension becomes 16×256×256×32; after the second stage, 32×128×128×16; after the third stage, 64×64×64×8; after the fourth stage, 64×32×32×4; the decoding stages mirror the encoding stages, the upsampling path merges the features extracted by the corresponding encoder layers, and the last convolutional layer uses a 1×1×1 kernel so that the output image keeps the size of the original input; finally, softmax generates segmentation probability maps for foreground and background;
in the step (3), V_Net is trained with volumetric convolutions and optimized with a novel objective function based on the Dice coefficient, the Dice loss; the Dice coefficient D is formula (1):

$$D = \frac{2\sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2} \qquad (1)$$
wherein the sums run over the N voxels of the volume, with predicted binary segmentation $p_i \in P$ and gold-standard binary segmentation $g_i \in G$; differentiated with respect to the j-th voxel of the prediction, the Dice coefficient yields the gradient of formula (2):

$$\frac{\partial D}{\partial p_j} = 2\,\frac{g_j\left(\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2\right) - 2 p_j \sum_{i}^{N} p_i g_i}{\left(\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2\right)^2} \qquad (2)$$
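Formulas (1) and (2) can be sketched directly in NumPy; this is an illustration of the Dice objective and its gradient, not the patent's training code:

```python
import numpy as np

def dice_coefficient(p, g, eps=1e-8):
    """Soft Dice of formula (1): D = 2*sum(p*g) / (sum(p^2) + sum(g^2))."""
    p = np.asarray(p, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    return 2.0 * np.sum(p * g) / (np.sum(p**2) + np.sum(g**2) + eps)

def dice_gradient(p, g, eps=1e-8):
    """Gradient of D w.r.t. each prediction p_j, formula (2)."""
    p = np.asarray(p, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    denom = np.sum(p**2) + np.sum(g**2) + eps
    return 2.0 * (g * denom - 2.0 * p * np.sum(p * g)) / denom**2
```

A quick finite-difference check confirms that the analytic gradient matches formula (2).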
in the step (4), on the basis of the preliminary segmentation of the focus, fine segmentation is performed by adopting the Morphological Snakes algorithm, so that the accuracy of the segmentation is further improved; the contour to be evolved on the image to be segmented is represented implicitly by a binary pixel level set u, formula (3):

$$u(x) = \begin{cases} 1, & x \ \text{inside the contour} \\ 0, & x \ \text{outside the contour} \end{cases} \qquad (3)$$
in the level set framework, the conventional partial differential equation represents the curve evolution as formula (4):

$$\frac{\partial u}{\partial t} = g(I)\,|\nabla u|\left(\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) + v\right) + \nabla g(I) \cdot \nabla u \qquad (4)$$
when $F = \pm 1$ and the evolution reduces to $\partial u/\partial t = F\,|\nabla u|$, the above equation is expressed using mathematical morphology as formula (5):

$$u^{n+1} = \begin{cases} D_d\, u^n, & F = 1 \\ E_d\, u^n, & F = -1 \end{cases} \qquad (5)$$
wherein u represents the level set, $\nabla$ represents the gradient operator, div represents the divergence operator, v represents a constant (the balloon force), and $g(\cdot)$ represents a boundary stopping function; the result after n evolutions of the contour boundary under the balloon term is formula (6):

$$u^{n+1}(x) = \begin{cases} (D_d\, u^n)(x), & v > 0 \\ (E_d\, u^n)(x), & v < 0 \end{cases}, \quad \text{applied where } g(I)(x) > \theta \qquad (6)$$
the result after n contour evolutions under the image attraction term is formula (7):

$$u^{n+1}(x) = \begin{cases} 1, & \nabla u^n(x) \cdot \nabla g(I)(x) > 0 \\ 0, & \nabla u^n(x) \cdot \nabla g(I)(x) < 0 \\ u^n(x), & \text{otherwise} \end{cases} \qquad (7)$$
the number of successive applications of the smoothing operator controls the strength of the smoothing step, and this number is represented by the parameter $\mu$;
the complete evolution after n iterations is expressed as formulas (8)-(10):

$$u^{n+1/3} = \begin{cases} D_d\, u^n, & v > 0 \\ E_d\, u^n, & v < 0 \end{cases} \qquad (8)$$

$$u^{n+2/3}(x) = \begin{cases} 1, & \nabla u^{n+1/3}(x) \cdot \nabla g(I)(x) > 0 \\ 0, & \nabla u^{n+1/3}(x) \cdot \nabla g(I)(x) < 0 \\ u^{n+1/3}(x), & \text{otherwise} \end{cases} \qquad (9)$$

$$u^{n+1} = (SI \circ IS)^{\mu}\, u^{n+2/3} \qquad (10)$$
wherein $D_d$ is the dilation operator, $E_d$ is the erosion operator, and $SI \circ IS$ is the smoothing operator.
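The core idea of formula (5), replacing the PDE evolution by discrete binary dilation or erosion steps, can be sketched with SciPy's morphology routines; the opening-after-closing smoothing below is a simple stand-in for the $SI \circ IS$ operator, and the whole function is an illustration rather than the patented fine-segmentation procedure:

```python
import numpy as np
from scipy import ndimage

def evolve(u, n, balloon=+1, smoothing=1):
    """Morphological solution of du/dt = F|grad u| for F = +/-1:
    one binary dilation (F=+1) or erosion (F=-1) per iteration,
    followed by `smoothing` applications of a closing/opening pair
    standing in for the SI∘IS curvature-smoothing operator."""
    u = np.asarray(u, dtype=bool)
    for _ in range(n):
        if balloon > 0:
            u = ndimage.binary_dilation(u)   # D_d: expands the contour
        else:
            u = ndimage.binary_erosion(u)    # E_d: shrinks the contour
        for _ in range(smoothing):
            u = ndimage.binary_opening(ndimage.binary_closing(u))
    return u
```

In practice, scikit-image's `morphological_geodesic_active_contour` provides a full implementation of this family of methods, including the boundary stopping function and balloon force.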
2. The method for automatically segmenting an anterior mediastinal focus of the chest based on CT images according to claim 1, wherein: in the step (1), the CT scans of the patients in the first task group use 16-row MDCT, 320-row MDCT and 256-row MDCT, and the CT scans of the patients in the second task group use dual-source CT, 128-row MDCT and second-generation dual-source CT; all patients are scanned with the automatic tube-voltage and tube-current modulation technique, and the contrast agent injected intravenously at the elbow is either iopromide injection 370 mgI/ml or ioversol injection 320 mgI/ml, at an injection rate of 3 ml/s and a contrast-agent dose of 65-80 ml, with the enhanced scan performed 40 s after the contrast-agent injection; all patients lie supine, holding the head with both hands, and images are acquired after the patient is instructed to inhale and hold the breath; images are reconstructed at a mediastinal window with window width 400-450 HU and window level 20-50 HU, and at a lung window with window width 1000-1500 HU and window level -650 to -450 HU; thin-slice images of 0.5-1.25 mm reconstructed at the mediastinal window are acquired.
3. The method for automatically segmenting the anterior mediastinal focus of the chest based on CT images according to claim 2, wherein: in the step (2), a mask file of the two-dimensional double-lung tissue matrix is first designed based on a method of calculating an individualized lung-tissue threshold and finding the largest connected regions; the mask is point-multiplied with the original image to be processed; after the same algorithm is applied layer by layer in a loop, a mediastinal 3D image is generated, the thoracic peripheral tissue structures outside the mediastinal region are removed, and only the mediastinal region including the focus is retained.
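A minimal 2-D sketch of this idea follows: threshold the slice, keep the two largest connected components as the lungs, and retain only the strip they flank. The function name and the fixed -400 HU threshold are illustrative assumptions; the claim uses an individualized threshold and loops the procedure layer by layer:

```python
import numpy as np
from scipy import ndimage

def mediastinum_mask(ct_slice, lung_threshold=-400):
    """Keep only the mediastinal strip between the two lungs
    (a 2-D sketch of the claim-3 mask idea, illustrative threshold)."""
    lungs = ct_slice < lung_threshold            # air-filled lung voxels
    labels, n = ndimage.label(lungs)
    if n == 0:
        return np.zeros_like(ct_slice, dtype=bool)
    sizes = ndimage.sum(lungs, labels, range(1, n + 1))
    keep = 1 + np.argsort(sizes)[-2:]            # two largest components
    lungs = np.isin(labels, keep)
    cols = np.where(lungs.any(axis=0))[0]        # columns spanned by lungs
    mask = np.zeros_like(lungs)
    mask[:, cols.min():cols.max() + 1] = True    # strip covering both lungs
    return mask & ~lungs                         # mediastinum = strip minus lungs
```

Point-multiplying this mask with the original slice, layer by layer, yields the mediastinal 3D image described in the claim.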
4. The method for automatically segmenting an anterior mediastinal focus of the chest based on CT images according to claim 3, wherein: in the step (3), the image is preprocessed before being input into the network, the preprocessing comprising normalization of the grey-level mean and variance; the up-sampling of the network output adopts bicubic interpolation to compensate for the change of image dimension produced by the pooling process; after the segmentation result output by V_Net is obtained, the selected focus region is smoothed.
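The two preprocessing operations of claim 4 can be sketched briefly; SciPy's cubic-spline `zoom` is used here as a stand-in for the bicubic interpolation named in the claim, and both helpers are illustrative:

```python
import numpy as np
from scipy import ndimage

def preprocess(volume):
    """Zero-mean, unit-variance grey-level normalization (claim 4)."""
    v = np.asarray(volume, dtype=float)
    return (v - v.mean()) / (v.std() + 1e-8)

def upsample(volume, factor=2):
    """Cubic-spline interpolation (order=3) as a stand-in for the
    bicubic up-sampling that compensates the pooling dimension change."""
    return ndimage.zoom(volume, factor, order=3)
```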
5. The method for automatically segmenting the anterior mediastinal focus of the chest based on CT images according to claim 4, wherein: in the step (5), three indexes, namely the Dice coefficient, the precision P and the recall R, are adopted to evaluate the segmentation result, and the Dice coincidence rate is defined as formula (11):

$$\mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|} \qquad (11)$$
wherein A represents the segmentation result, B represents the corresponding manually delineated gold standard, and $A \cap B$ is the correctly predicted part of the segmentation map belonging to the focus; the closer Dice is to 1, the more accurate the segmentation result;
the precision is defined as formula (12):
in the formula, TP represents the correctly segmented focus area, FP represents the non-focus area segmented as focus, and P is the proportion of the correctly segmented area in the whole segmentation result;
recall is defined as equation (13):
wherein TP represents the correctly segmented focus area, FN represents the focus area that was missed by the segmentation, and the R value is the proportion of the correctly segmented area in the original focus area.
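Formulas (11)-(13) for binary masks can be sketched in a few lines; the function name is illustrative:

```python
import numpy as np

def evaluate(pred, truth):
    """Dice (11), precision (12) and recall (13) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)          # correctly segmented focus voxels
    fp = np.sum(pred & ~truth)         # non-focus labelled as focus
    fn = np.sum(~pred & truth)         # focus voxels that were missed
    dice = 2 * tp / (pred.sum() + truth.sum() + 1e-8)
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    return dice, precision, recall
```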
CN202010388777.5A 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image Active CN113706548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010388777.5A CN113706548B (en) 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010388777.5A CN113706548B (en) 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image

Publications (2)

Publication Number Publication Date
CN113706548A CN113706548A (en) 2021-11-26
CN113706548B true CN113706548B (en) 2023-08-22

Family

ID=78645296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010388777.5A Active CN113706548B (en) 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image

Country Status (1)

Country Link
CN (1) CN113706548B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1912927A (en) * 2006-08-25 2007-02-14 西安理工大学 Semi-automatic partition method of lung CT image focus
CN104992445A (en) * 2015-07-20 2015-10-21 河北大学 Automatic division method for pulmonary parenchyma of CT image
CN106940816A (en) * 2017-03-22 2017-07-11 杭州健培科技有限公司 Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D
CN108027970A (en) * 2015-09-10 2018-05-11 爱克发医疗保健公司 Methods, devices and systems for the medical image for analyzing blood vessel
RU2656761C1 (en) * 2017-02-09 2018-06-06 Общество С Ограниченной Ответственностью "Сибнейро" Method and system of segmentation of lung foci images
CN109271992A (en) * 2018-09-26 2019-01-25 上海联影智能医疗科技有限公司 A kind of medical image processing method, system, device and computer readable storage medium
CN109410166A (en) * 2018-08-30 2019-03-01 中国科学院苏州生物医学工程技术研究所 Full-automatic partition method for pulmonary parenchyma CT image
CN109978886A (en) * 2019-04-01 2019-07-05 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Survey of Lung Tissue Segmentation Methods Based on CT Images; Geng Huan; Qin Wenjun; Yang Jinzhu; Cao Peng; Zhao Dazhe; Application Research of Computers (Issue 07); full text *

Also Published As

Publication number Publication date
CN113706548A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
Almajalid et al. Development of a deep-learning-based method for breast ultrasound image segmentation
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112150428B (en) Medical image segmentation method based on deep learning
CN110120048B (en) Three-dimensional brain tumor image segmentation method combining improved U-Net and CMF
Vesal et al. A 2D dilated residual U-Net for multi-organ segmentation in thoracic CT
CN111640120A (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
WO2022246677A1 (en) Method for reconstructing enhanced ct image
CN112767407A (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
CN110945564A (en) Medical image segmentation based on mixed context CNN model
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
CN111091575A (en) Medical image segmentation method based on reinforcement learning method
Motamed et al. A transfer learning approach for automated segmentation of prostate whole gland and transition zone in diffusion weighted MRI
CN113706548B (en) Method for automatically segmenting anterior mediastinum focus of chest based on CT image
CN112348826A (en) Interactive liver segmentation method based on geodesic distance and V-net
CN112258508B (en) Image processing analysis segmentation method, system and storage medium for four-dimensional flow data
Erdt et al. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images
Tao et al. Automatic segmentation of the prostate on MR images based on anatomy and deep learning
CN114387257A (en) Segmentation method, system, device and medium for lung lobe region in lung image
CN114049357A (en) Breast ultrasonic segmentation method based on feature set association degree
CN112396579A (en) Human tissue background estimation method and device based on deep neural network
Roy et al. MDL-IWS: multi-view deep learning with iterative watershed for pulmonary fissure segmentation
CN113052840A (en) Processing method based on low signal-to-noise ratio PET image
Colmeiro et al. Whole body positron emission tomography attenuation correction map synthesizing using 3D deep generative adversarial networks
Ghofrani et al. Liver Segmentation in CT Images Using Deep Neural Networks
CN113222852B (en) Reconstruction method for enhanced CT image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant