CN113706548A - Method for automatically segmenting breast anterior mediastinum focus based on CT image - Google Patents


Publication number
CN113706548A
Authority
CN
China
Prior art keywords
image
segmentation
focus
lesion
mediastinum
Prior art date
Legal status
Granted
Application number
CN202010388777.5A
Other languages
Chinese (zh)
Other versions
CN113706548B (en)
Inventor
马国林
李海梅
张冰
韩小伟
刘秀秀
Current Assignee
Beijing Kangxing Shunda Science And Trade Co ltd
Original Assignee
Beijing Kangxing Shunda Science And Trade Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kangxing Shunda Science And Trade Co ltd
Priority to CN202010388777.5A
Publication of CN113706548A
Application granted
Publication of CN113706548B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/11 Region-based segmentation
    • G06N3/045 Neural networks; Combinations of networks
    • G06N3/08 Neural networks; Learning methods
    • G06T7/0012 Biomedical image inspection
    • G06T7/12 Edge-based segmentation
    • G06T7/136 Segmentation involving thresholding
    • G06T7/187 Segmentation involving region growing, region merging or connected component labelling
    • G16H30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H50/20 ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H50/30 ICT for calculating health indices; for individual health risk assessment
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20081 Training; Learning
    • G06T2207/30061 Lung
    • G06T2207/30096 Tumor; Lesion
    • Y02T10/40 Engine management systems


Abstract

The method for automatically segmenting anterior mediastinal lesions of the chest based on CT images produces good segmentation results, supports subsequent quantitative deep-learning feature analysis, and provides a methodological basis for research on automatic segmentation of anterior mediastinal lesions. The method comprises the following steps: (1) CT image acquisition: perform contrast-enhanced chest CT scanning on all patients to obtain thin-slice images reconstructed at the mediastinal window; (2) on the original CT image, exploit the natural density difference between the lungs and the surrounding tissues to design a double-lung mask file by computing lung-tissue pixel values, and remove the regions outside the mediastinum; (3) perform initial segmentation of the lesion with a V_Net network; (4) refine the lesion segmentation with the Morphological Snakes algorithm; (5) evaluate the segmentation results.

Description

Method for automatically segmenting anterior mediastinal lesions of the chest based on CT images
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method for automatically segmenting anterior mediastinal lesions of the chest based on CT images.
Background
The purpose of preoperative imaging examination of anterior mediastinal lesions of the chest is to make a preliminary assessment of tumor malignancy, infer histopathological changes, and perform risk assessment, thereby assisting the selection of preoperative treatment plans and the judgment of clinical prognosis. The assessment of signs in CT imaging is based on the lesion itself and its relationship to the adjacent surrounding tissue structures, and usually relies on empirical, observational indicators rather than quantitative ones. Radiomics is a computerized quantitative image-analysis method that can extract high-throughput features from medical images, but conventional radiomics depends on manual delineation for lesion segmentation, and complex delineation standards often have to be designed to keep the manual segmentation results as accurate as possible. At present, most tumors are still segmented by manual identification and manual delineation based on morphological recognition; because the degree of automation is low, manual lesion delineation is time-consuming and easily affected by human factors. How to automatically identify and ideally segment lesions is therefore one of the important research topics in the field of medical image segmentation.
Automatic CT-based segmentation of anterior mediastinal lesions can raise the degree of automation of the radiomics workflow and improve the standardization and accuracy of lesion segmentation, which helps extract subsequent image feature data accurately and improves the classification performance of prediction models. In addition, segmentation of malignant tumor lesions is an important front-end step routinely performed in radiotherapy; precise automatic tumor segmentation can speed up lesion segmentation before radiotherapy and standardize the process, facilitating accurate dose estimation in radiotherapy planning and ensuring standardized evaluation of curative effect.
Existing research shows that, among automatic medical-image segmentation techniques, deep-learning-based automatic lesion segmentation has clear advantages over traditional methods; deep convolutional neural network architectures can significantly improve the classification and recognition performance on image regions of interest and the accuracy of lesion segmentation. CNNs and their derivative network architectures have therefore become the preferred algorithms in the field of medical-image lesion segmentation. The networks derived from the CNN mainly include the FCN, U-Net and V-Net. V_Net is a fully convolutional neural network for 3D medical image segmentation, proposed as an improvement on the FCN and U-Net; it uses volumetric convolutions, is trained with a new Dice-coefficient-based objective function that is optimized during training, and can additionally perform data augmentation via nonlinear transformations and histogram matching, which suits the small data volumes and strong interpretability requirements of medical images. However, because the incidence of anterior mediastinal lesions is relatively low, there is currently little research on their automatic segmentation in CT images, and accurately segmenting anterior mediastinal lesions still faces great challenges.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for automatically segmenting anterior mediastinal lesions of the chest based on CT images; the method finely segments the lesion with good results, can meet the needs of subsequent quantitative deep-learning feature analysis, and provides a methodological basis for research on automatic segmentation of anterior mediastinal lesions.
The technical scheme of the invention is as follows: the method for automatically segmenting anterior mediastinal lesions of the chest based on CT images comprises the following steps:
(1) CT image acquisition: perform contrast-enhanced CT scanning on all patients to obtain thin-slice images reconstructed at the chest mediastinal window;
(2) on the original CT image, exploit the natural density difference between the lungs and the surrounding tissues to design a double-lung mask file by computing lung-tissue pixel values, remove the regions outside the mediastinum, and retain the lesion;
(3) perform initial segmentation of the lesion with a V_Net network;
(4) finely segment the lesion with the Morphological Snakes algorithm;
(5) evaluate the segmentation results.
First, based on the original CT image, a double-lung-tissue mask file is designed by computing lung-tissue pixel values to remove the regions outside the mediastinum; only the mediastinal region containing the lesion is retained for the preliminary segmentation of the lesion in the V_Net network, which greatly reduces the complexity of the input network image and helps improve segmentation precision. After the preliminary segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical-morphology operators; it optimizes the segmentation edge of the lesion so that the boundary of the final result is smoother and more accurate, and the segmentation is efficient and stable. The fine segmentation result of the lesion is therefore good, can meet the needs of subsequent quantitative deep-learning feature analysis, and provides a methodological basis for research on automatic segmentation of anterior mediastinal lesions.
Drawings
Fig. 1 shows a schematic diagram of a V _ Net network structure.
Figure 2 shows Dice coefficient curves for automatic lesion segmentation. (A) Dice coefficient curve of automatic lesion segmentation for patients in the training-set samples; (B) Dice coefficient curve of automatic lesion segmentation for patients in the validation-set samples.
Fig. 3 shows a flow chart of a method for automatic segmentation of anterior chest mediastinal lesions based on CT images according to the present invention.
Detailed Description
As shown in fig. 3, the method for automatically segmenting anterior mediastinal lesions of the chest based on CT images comprises the following steps:
(1) CT image acquisition: perform contrast-enhanced chest CT scanning on all patients to obtain thin-slice images reconstructed at the chest mediastinal window;
(2) on the original CT image, exploit the natural density difference between the lungs and the surrounding tissues to design a double-lung mask file by computing lung-tissue pixel values, remove the regions outside the mediastinum, and retain the lesion;
(3) perform initial segmentation (Initial Segmentation) of the lesion with a V_Net network;
(4) finely segment (Accurate Segmentation) the lesion with the Morphological Snakes algorithm;
(5) evaluate the segmentation results.
First, based on the original CT image, a double-lung-tissue mask file is designed by computing lung-tissue pixel values to remove the regions outside the mediastinum; only the mediastinal region containing the lesion is retained for the preliminary segmentation of the lesion in the V_Net network, which greatly reduces the complexity of the input network image and helps improve segmentation precision. After the preliminary segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical-morphology operators; it optimizes the segmentation edge of the lesion so that the boundary of the final result is smoother and more accurate, and the segmentation is efficient and stable. The fine segmentation result of the lesion is therefore good, can meet the needs of subsequent quantitative deep-learning feature analysis, and provides a methodological basis for research on automatic segmentation of anterior mediastinal lesions.
Preferably, in the step (1), CT scanning of the patients in the first task group uses 16-row MDCT (Canon Aquilion RXL, Tokyo, Japan), 320-row MDCT (Canon Aquilion ONE, Tokyo, Japan) and 256-row MDCT (GE Revolution, Massachusetts, USA), and CT scanning of the patients in the second task group uses dual-source CT (Siemens SOMATOM Definition, Forchheim, Germany), 128-row MDCT (Siemens SOMATOM Perspective, Forchheim, Germany) and second-generation dual-source CT (Siemens SOMATOM Definition Flash, Forchheim, Germany); all scans use automatic tube-voltage and automatic tube-current modulation; the contrast agent (iopromide injection 370 mgI/ml or ioversol injection 320 mgI/ml) is injected through the elbow vein at 3 ml/s with a dose of 65-80 ml, and the enhanced scan is performed 40 s after injection; all patients lie supine with both hands holding the head and are instructed to hold their breath after inspiration during image acquisition; images are reconstructed in the mediastinal window (window width 400-450 HU, window level 20-50 HU) and the lung window (window width 1000-1500 HU, window level −650 to −450 HU), respectively; 0.5-1.25 mm thin-slice images reconstructed at the mediastinal window are acquired.
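The mediastinal-window reconstruction described above amounts to clipping and rescaling HU values to a display range; the following is a minimal sketch (the function name and the example width 400 HU / level 40 HU are illustrative assumptions within the ranges quoted above, not part of the claimed method):

```python
import numpy as np

def apply_window(hu_image, width, level):
    """Map HU values to [0, 1] for a given window width/level
    (e.g. a mediastinal window: width 400 HU, level 40 HU)."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    out = (np.asarray(hu_image, dtype=np.float64) - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)

# Soft tissue (~40 HU) maps to mid-gray, air (-1000 HU) clips to 0,
# dense contrast-enhanced structures (500 HU) clip to 1.
slice_hu = np.array([[-1000.0, 40.0, 500.0]])
windowed = apply_window(slice_hu, width=400, level=40)
```

With a narrow mediastinal window, the lungs render almost uniformly black, which is what makes the density-based lung mask of step (2) effective.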
Preferably, in the step (2), first, based on calculating an individualized lung-tissue threshold and searching for the maximum connected region, a mask file of the two-dimensional double-lung-tissue matrix is designed; the mask is point-multiplied with the original image to be processed; after layer-by-layer cyclic processing with the same algorithm, a 3D mediastinal image is generated in which the chest tissue structures outside the mediastinal region are removed and only the mediastinal region containing the lesion is retained.
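As an illustrative sketch of the threshold / maximum-connected-region / point-multiplication idea (the function names, the −400 HU threshold, and the 4-connectivity choice are assumptions for illustration; the patent computes an individualized threshold):

```python
import numpy as np

def largest_connected_region(binary):
    """Label 4-connected components of a 2D boolean array with a
    stack-based flood fill and return a mask of the largest one."""
    binary = np.asarray(binary, dtype=bool)
    visited = np.zeros_like(binary)
    best, best_size = np.zeros_like(binary), 0
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                stack, comp = [(i, j)], []
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(binary)
                    for y, x in comp:
                        best[y, x] = True
    return best

def mediastinum_mask(ct_slice, lung_threshold=-400):
    """Threshold the low-density lung tissue, keep its largest connected
    region, and point-multiply its complement with the slice so that the
    lung region is zeroed and the mediastinal tissue is retained."""
    lungs = largest_connected_region(np.asarray(ct_slice) < lung_threshold)
    return np.where(lungs, 0.0, np.asarray(ct_slice, dtype=float))
```

Applying this slice by slice and stacking the results corresponds to the layer-by-layer cyclic processing that yields the 3D mediastinal image.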
Preferably, in the step (3), the images are preprocessed before being input into the network, including normalization of the gray-level mean and variance; in the upsampling of the network output, bicubic interpolation is adopted to compensate for the image-dimension changes produced by pooling; after the segmentation result output by V_Net is obtained, the selected lesion region is smoothed.
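The mean/variance normalization mentioned above is a standard z-score step; a minimal sketch (the function name and the small epsilon guard are assumptions):

```python
import numpy as np

def normalize(volume):
    """Zero-mean, unit-variance normalization of gray values, applied
    to each volume before it is fed into the network."""
    v = np.asarray(volume, dtype=np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)  # epsilon avoids division by zero
```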
Preferably, in the step (3), the network is trained end to end; the input image size is 256 × 256 × 32 with 1 channel. The encoder is divided into four stages, each containing 1 to 3 convolutional layers; every convolution uses a 3 × 3 × 3 kernel with stride 1, and the PReLU nonlinear activation function is used throughout the network. Each encoding stage is downsampled by a convolution with kernel size 2 × 2 × 2 and stride 2, and the input and output of each stage are added so that the stage learns a residual function. After the first stage, the feature-map dimensions are 16 × 256 × 256 × 32 (channels × image height × image width × image depth); after the second stage, 32 × 128 × 128 × 16; after the third stage, 64 × 64 × 64 × 8; after the fourth stage, 64 × 32 × 32 × 4. The decoding stages mirror the encoding stages, and the upsampling path combines the features extracted by the corresponding encoder layers; the last convolutional layer uses a 1 × 1 × 1 kernel so that the output image keeps the same size as the original input; finally, softmax generates segmentation probability maps for foreground and background (fig. 1).
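The stage-by-stage dimensions quoted above can be checked with a small bookkeeping sketch (pure Python; the function name and the channel tuple are illustrative assumptions about the architecture as described):

```python
def encoder_shapes(in_hwd=(256, 256, 32), channels=(16, 32, 64, 64)):
    """Track feature-map dimensions through the four V_Net encoder
    stages: each stage's convolutions preserve spatial size (3x3x3,
    stride 1), and a stride-2 2x2x2 convolution halves height, width
    and depth on the way into the next stage."""
    h, w, d = in_hwd
    shapes = []
    for c in channels:
        shapes.append((c, h, w, d))       # dimensions after this stage
        h, w, d = h // 2, w // 2, d // 2  # stride-2 downsampling
    return shapes
```

Running `encoder_shapes()` reproduces the four tuples listed in the paragraph, confirming the dimensions are internally consistent.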
Preferably, in the step (3), V_Net uses volumetric convolutions and is trained with a new objective function based on the Dice coefficient (the Dice loss), optimized during training; the Dice coefficient D is formula (1):

D = 2 Σᵢᴺ pᵢgᵢ / (Σᵢᴺ pᵢ² + Σᵢᴺ gᵢ²)     (1)

where the sums run over the N voxels, the predicted binary segmentation volume pᵢ ∈ P and the gold-standard binary volume gᵢ ∈ G; the gradient of the Dice coefficient with respect to the j-th voxel of the prediction is formula (2):

∂D/∂pⱼ = 2 [ gⱼ (Σᵢᴺ pᵢ² + Σᵢᴺ gᵢ²) − 2pⱼ (Σᵢᴺ pᵢgᵢ) ] / (Σᵢᴺ pᵢ² + Σᵢᴺ gᵢ²)²     (2)
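A minimal NumPy sketch of formulas (1) and (2) (the function names and the epsilon guard are assumptions for illustration; this shows the objective, not the patented training code), with the analytic gradient checkable against finite differences:

```python
import numpy as np

def dice_coefficient(p, g, eps=1e-8):
    """Formula (1): D = 2*sum(p*g) / (sum(p^2) + sum(g^2)) over N voxels;
    1 - D is the Dice loss minimized during training."""
    p = np.asarray(p, dtype=np.float64).ravel()
    g = np.asarray(g, dtype=np.float64).ravel()
    return 2.0 * np.sum(p * g) / (np.sum(p * p) + np.sum(g * g) + eps)

def dice_gradient(p, g, eps=1e-8):
    """Formula (2): dD/dp_j = 2*(g_j*S - 2*p_j*sum(p*g)) / S^2,
    with S = sum(p^2) + sum(g^2)."""
    p = np.asarray(p, dtype=np.float64).ravel()
    g = np.asarray(g, dtype=np.float64).ravel()
    denom = np.sum(p * p) + np.sum(g * g) + eps
    return 2.0 * (g * denom - 2.0 * p * np.sum(p * g)) / denom ** 2
```

A quick numerical check (central differences) confirms the analytic gradient matches formula (1) term by term.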
preferably, in the step (4), on the basis of performing preliminary segmentation on the lesion, a Morphological snake algorithm is adopted to perform fine segmentation, so as to further improve the accuracy of segmentation, and the pixel level set to be solved in the image to be segmented is set as a formula (3):
Figure BDA0002484840640000063
in the level set framework, the conventional partial differential equation represents a curve evolution equation as formula (4):
Figure BDA0002484840640000064
when F is ═ 1 and
Figure BDA0002484840640000065
the above equation is expressed as formula (5) using mathematical morphology:
Figure BDA0002484840640000066
wherein u represents a level set, represents a gradient operator, div represents a divergence operator, v represents a constant, and g () represents a boundary stopping function, which results in formula (6) after n times of contour boundary evolution:
Figure BDA0002484840640000067
the result after n times of profile evolution is formula (7):
Figure BDA0002484840640000071
the number of consecutive applications of the smoothing operator controls the strength of the smoothing step, this number being represented by the parameter mu as N,
the final variation after n iterations is expressed as equations (8) - (10):
Figure BDA0002484840640000072
Figure BDA0002484840640000073
Figure BDA0002484840640000074
wherein D isdTo expand the operator, EdIn order to erode the operator,
Figure BDA0002484840640000075
is a smoothing operator.
Preferably, in the step (5), the segmentation results are evaluated with three indexes: the Dice coefficient, the precision P, and the recall R. The Dice coefficient is defined as formula (11):

Dice = 2 |A ∩ B| / (|A| + |B|)     (11)

where A denotes the segmentation result and B the corresponding manually delineated standard; A ∩ B is the part of the segmentation map that belongs to the lesion, i.e. the correctly predicted part; the closer Dice is to 1, the more accurate the segmentation result.

The precision is defined as formula (12):

P = TP / (TP + FP)     (12)

where TP denotes the region in which the lesion is correctly segmented, FP the region in which non-lesion tissue is segmented as lesion, and P the percentage of correctly segmented regions in the segmentation result.

The recall is defined as formula (13):

R = TP / (TP + FN)     (13)

where TP denotes the region in which the lesion is correctly segmented, FN the lesion region that is not correctly segmented, and R the percentage of the correctly segmented region relative to the original lesion region.
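Formulas (11)-(13) can be computed directly from binary masks; a small sketch (the function name is an assumption):

```python
import numpy as np

def evaluate_segmentation(pred, truth):
    """Dice, precision and recall per formulas (11)-(13):
    TP = lesion voxels correctly segmented, FP = non-lesion voxels
    marked as lesion, FN = lesion voxels that were missed."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    # Dice = 2|A∩B| / (|A|+|B|) = 2*TP / (2*TP + FP + FN)
    dice = 2.0 * tp / (2.0 * tp + fp + fn) if tp + fp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return dice, precision, recall
```

Note that the set form of Dice in (11) and the TP/FP/FN form coincide, since |A| = TP + FP and |B| = TP + FN.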
A Dice coefficient curve was drawn after automatic segmentation of all lesions (fig. 2). In the training-set samples, the minimum Dice coefficient of a patient was 0.623 and the maximum 0.999, with 125 samples having Dice values greater than 0.95; in the validation-set samples, the minimum was 0.624 and the maximum 0.998, with 78 samples (67.24%) in the range 0.85-0.95. For automatic lesion segmentation overall, the training set achieved a Dice coefficient of 0.942 ± 0.066, a precision of 0.915 ± 0.083 and a recall of 0.907 ± 0.091; the validation set achieved a Dice coefficient of 0.911 ± 0.051, a precision of 0.926 ± 0.042 and a recall of 0.89 ± 0.059.
Earlier methods that segment purely from intrinsic image features rely on information directly obtainable from the image, such as gray level (pixel intensity), gradient or texture, and geometric features; typical implementations include gray-threshold segmentation, feature-cluster segmentation, and region growing. These methods usually need manual correction to complete the segmentation, and for images with low contrast, or lacking significant gradients and geometric features, they tend to be time-consuming and of limited effectiveness. Compared with such methods, deep-learning convolutional neural networks, especially the FCN and its derivative networks, enable end-to-end model training for image segmentation, with good segmentation results, strong fault tolerance, and efficient self-learning and adaptive characteristics, and are well suited to application in medical image segmentation.
The FCN architecture currently most popular for medical image segmentation is U-Net. Ronneberger et al. built the output path from a downsampling (encoding) path and a corresponding upsampling (decoding) path and combined this with skip connections, so that fusing the higher-resolution features of the encoding path with the upsampled features of the decoding path better localizes and learns the feature representation of the input image. U-Net segments by treating the input picture as a whole in its classification scheme, overcoming the pixel-by-pixel classification approach; its computation is small and it can progressively restore image resolution. V_Net is an improvement based on FCN and U-Net: a fully convolutional neural network applicable to 3D medical image segmentation. V_Net uses volumetric convolutions and is trained with a new Dice-coefficient-based objective function that is optimized during training, which better accommodates the small data volumes of medical imaging while improving the segmentation effect.
On a conventional chest CT image, the scanning range extends from the lung apex to the top of the liver, involving multiple tissues and organs with a large range of density variation; the densities of the two lungs, the mediastinum and the peripheral tissue structures contrast naturally and strongly. Therefore, before the initial segmentation, a double-lung-tissue mask file is designed by computing lung-tissue pixel values to remove the regions outside the mediastinum, and only the mediastinal region containing the lesion is retained for the preliminary segmentation of the lesion in the V_Net network; this greatly reduces the complexity of the input network image and helps improve segmentation precision. Second, after the initial segmentation, the lesion is finely segmented with the Morphological Snakes algorithm, which evolves the contour boundary with mathematical-morphology operators; it optimizes the segmentation edge of the lesion so that the boundary of the final result is smoother and more accurate, and the segmentation is efficient and stable. In this study, the Dice coefficient curve of the automatically segmented lesions showed that, in the training-set samples, 52.51% of the samples had Dice values greater than 0.95; analysis of the results with Dice above 0.8 showed that the automatically segmented lesions were highly consistent with the manually delineated lesion regions, while for results with Dice below 0.8 the automatic segmentation mostly covered the manually delineated region, with only a small part of the automatically segmented edge exceeding the manual edge in some specific direction. In the validation-set samples, 67.24% of the patients' Dice coefficients lay in the range 0.85-0.95.
These results show that the segmentation model works well and can meet the needs of subsequent quantitative deep-learning feature analysis.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (8)

1. A method for automatically segmenting anterior mediastinal lesions of the chest based on CT images, comprising the following steps:
(1) CT image acquisition: perform contrast-enhanced CT scanning on all patients to obtain thin-slice images reconstructed at the chest mediastinal window;
(2) on the original CT image, exploit the natural density difference between the lungs and the surrounding tissues to design a double-lung mask file by computing lung-tissue pixel values, remove the regions outside the mediastinum, and retain the lesion;
(3) perform initial segmentation of the lesion with a V_Net network;
(4) finely segment the lesion with the Morphological Snakes algorithm;
(5) evaluate the segmentation results.
2. The method for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images as claimed in claim 1, wherein: in the step (1), CT scans of patients in the first task group use 16-row, 320-row, and 256-row MDCT, and CT scans of patients in the second task group use dual-source CT, 128-row MDCT, and second-generation dual-source CT; all scans use automatic tube-voltage and automatic tube-current modulation; the contrast agent, iopromide injection 370 mgI/ml or ioversol injection 320 mgI/ml, is injected through the elbow vein at 3 ml/s with a dose of 65-80 ml, and the enhanced scan is performed 40 s after injection; all patients lie supine, holding the head with both hands, and are instructed to hold their breath after inhaling during image acquisition; images are reconstructed in a mediastinal window with a window width of 400-450 HU and a window level of 20-50 HU, and in a lung window with a window width of 1000-1500 HU and a window level of -650 to -450 HU; thin-slice images of 0.5-1.25 mm reconstructed at the mediastinal window are acquired.
3. The method of claim 2 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (2), a mask file of the two-dimensional lung-tissue matrix is first designed based on calculating an individualized lung-tissue threshold and finding the maximum connected region; the mask is point-multiplied with the original image to be processed, and the same algorithm, applied cyclically layer by layer, generates a mediastinal 3D image in which the peripheral chest structures outside the mediastinal region are removed and only the mediastinal region including the lesion is retained.
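As an illustration only (not the patented implementation), the per-slice mask step of claim 3 can be sketched with `scipy.ndimage`; the fixed air threshold, the two-largest-components heuristic, and the row-wise band extraction are assumptions standing in for the individualized threshold and point-multiplication described above:

```python
import numpy as np
from scipy import ndimage

def lung_mask_2d(slice_hu, air_thresh=-320):
    """Per-slice lung mask: threshold at an (assumed) HU value, remove air
    connected to the image border, then keep the largest air components."""
    binary = slice_hu < air_thresh
    labels, _ = ndimage.label(binary)
    border_labels = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    for bl in border_labels:
        binary[labels == bl] = False
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1  # up to the two largest components
    return np.isin(labels, keep)

def mediastinal_region(slice_hu, lung_mask):
    """Keep only the band between/around the lungs row by row, then zero
    the lungs themselves, leaving the mediastinal region."""
    out = np.zeros_like(slice_hu)
    rows = np.where(lung_mask.any(axis=1))[0]
    for r in rows:
        cols = np.where(lung_mask[r])[0]
        out[r, cols.min():cols.max() + 1] = slice_hu[r, cols.min():cols.max() + 1]
    out[lung_mask] = 0  # remove lung tissue, retain the mediastinal band
    return out
```

On a real series the two functions would be applied slice by slice and the results stacked into the mediastinal 3D volume described in the claim.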
4. The method of claim 3 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (3), the image is preprocessed before being input into the network, the preprocessing comprising normalization of the gray-level mean and variance; in the up-sampling process at the network output, bicubic interpolation is used to compensate for the change in image dimensions produced by pooling; and after the segmentation result output by V_Net is obtained, the selected lesion region is smoothed.
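The gray-level normalization in claim 4 is standard z-score normalization; a minimal sketch (the small epsilon guard against division by zero is an assumption):

```python
import numpy as np

def normalize(volume):
    """Normalize gray values to zero mean and unit variance, as in the
    preprocessing step: subtract the mean, divide by the standard deviation."""
    v = volume.astype(np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)
```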
5. The method of claim 4 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (3), the network structure adopts an end-to-end learning mode; the input image size is 256 × 256 × 32 with 1 channel; the encoder is divided into four stages, each stage comprising 1 to 3 convolutional layers; the convolution kernels used in each stage are of size 3 × 3 × 3 with stride 1; the PReLU nonlinear activation function is used throughout the network; each encoder stage is down-sampled by a convolution kernel of size 2 × 2 × 2 with stride 2; the network adds the input and output of each stage so that a residual function is learned; after the first stage, the feature-map dimensions become 16 × 256 × 256 × 32; after the second stage, 32 × 128 × 128 × 16; after the third stage, 64 × 64 × 64 × 8; and after the fourth stage, 64 × 32 × 32 × 4; the decoding stages correspond to the encoding stages, the up-sampling process combines the features extracted by the corresponding encoder layers, and the last convolutional layer uses kernels of size 1 × 1 × 1 so that the output image size remains consistent with the original input size; finally, softmax generates segmentation probability maps of foreground and background.
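The dimension bookkeeping of claim 5 can be reproduced in a few lines; the cap of 64 channels at the fourth stage is inferred from the dimensions stated in the claim (the original V-Net keeps doubling):

```python
def encoder_dims(in_shape=(256, 256, 32)):
    """Feature-map dimensions through the four encoder stages: channels
    double each stage (capped at 64 per the claim), and the 2x2x2 stride-2
    convolution between stages halves every spatial dimension."""
    dims = []
    channels = 16
    shape = in_shape
    for _ in range(4):
        dims.append((channels,) + shape)
        shape = tuple(s // 2 for s in shape)  # stride-2 downsampling
        channels = min(channels * 2, 64)
    return dims
```

Calling `encoder_dims()` returns exactly the four dimension tuples listed in the claim.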
6. The method of claim 5 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (3), V_Net is trained using volumetric convolutions and a Dice loss function, a novel objective function based on the Dice coefficient, which is optimized during training; the Dice coefficient D is formula (1):

$$D = \frac{2\sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2} \tag{1}$$

wherein the sums run over the N voxels of the predicted binary segmentation volume, p_i ∈ P, and of the gold-standard binary volume, g_i ∈ G; the gradient of the Dice coefficient with respect to the j-th voxel of the prediction is formula (2):

$$\frac{\partial D}{\partial p_j} = 2\,\frac{g_j\left(\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2\right) - 2 p_j \sum_{i}^{N} p_i g_i}{\left(\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2\right)^2} \tag{2}$$
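Formulas (1) and (2) can be checked numerically; a minimal NumPy sketch (variable names are illustrative), with the analytic gradient verified against a finite difference:

```python
import numpy as np

def dice(p, g):
    """Dice coefficient D = 2*sum(p*g) / (sum(p^2) + sum(g^2)) over voxels."""
    return 2.0 * np.sum(p * g) / (np.sum(p * p) + np.sum(g * g))

def dice_grad(p, g):
    """Analytic gradient of D with respect to each predicted voxel p_j."""
    denom = np.sum(p * p) + np.sum(g * g)
    return 2.0 * (g * denom - 2.0 * p * np.sum(p * g)) / denom ** 2
```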
7. The method of claim 6 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (4), on the basis of the preliminary lesion segmentation, the Morphological Snakes algorithm is adopted for fine segmentation to further improve the segmentation accuracy; the level set to be solved over the pixels of the image to be segmented is defined as formula (3):

$$u(x) = \begin{cases} 1, & x \text{ inside the contour} \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

In the level-set framework, the conventional partial differential equation for curve evolution is formula (4):

$$\frac{\partial u}{\partial t} = F\,|\nabla u| \tag{4}$$

When F = 1 and the structuring element is a ball of radius d, the above equation is expressed with the mathematical-morphology dilation operator as formula (5):

$$u^{n+1} = D_d\, u^{n} \tag{5}$$

wherein u represents the level set, ∇ represents the gradient operator, div represents the divergence operator (whose curvature term is realized morphologically by the smoothing operator below), v represents a constant, and g(·) represents the boundary stopping function; after n evolutions of the contour boundary this yields formula (6):

$$u^{n} = (D_d)^{n}\, u^{0} \tag{6}$$

and, for F = -1, the result after n evolutions of the contour is formula (7):

$$u^{n} = (E_d)^{n}\, u^{0} \tag{7}$$

The number of consecutive applications of the smoothing operator controls the strength of the smoothing step; this number is represented by the parameter μ ∈ ℕ. The final evolution after n iterations is expressed as formulas (8)-(10):

$$u^{n+1/3}(x) = \begin{cases} (D_d\, u^{n})(x), & v > 0 \\ (E_d\, u^{n})(x), & v < 0 \\ u^{n}(x), & \text{otherwise} \end{cases} \tag{8}$$

$$u^{n+2/3}(x) = \begin{cases} 1, & \nabla u^{n+1/3}(x) \cdot \nabla g(I)(x) > 0 \\ 0, & \nabla u^{n+1/3}(x) \cdot \nabla g(I)(x) < 0 \\ u^{n+1/3}(x), & \text{otherwise} \end{cases} \tag{9}$$

$$u^{n+1} = \left( (SI \circ IS)^{\mu}\, u^{n+2/3} \right) \tag{10}$$

wherein D_d is the dilation operator, E_d is the erosion operator, and SI ∘ IS is the smoothing operator.
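A minimal sketch of the morphological building blocks in formulas (6) and (8), using `scipy.ndimage` binary morphology (this is an illustration, not the claimed implementation; a complete version of the algorithm ships in scikit-image as `skimage.segmentation.morphological_geodesic_active_contour`):

```python
import numpy as np
from scipy import ndimage

def evolve_dilation(u0, n):
    """Formula (6): solve du/dt = |grad u| for a binary level set by n
    successive dilations; each step grows the contour by one pixel."""
    u = u0.copy()
    for _ in range(n):
        u = ndimage.binary_dilation(u)
    return u

def balloon_step(u, v):
    """Formula (8), the balloon force: dilate for v > 0, erode for v < 0,
    leave the level set unchanged for v == 0."""
    if v > 0:
        return ndimage.binary_dilation(u)
    if v < 0:
        return ndimage.binary_erosion(u)
    return u
```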
8. The method of claim 7 for automatic segmentation of an anterior mediastinal lesion of the chest based on CT images, wherein: in the step (5), the segmentation result is evaluated with three indices: the Dice coefficient, the precision P, and the recall R; the Dice overlap ratio is defined as formula (11):

$$\mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|} \tag{11}$$

wherein A represents the segmentation result, B represents the corresponding manually drawn standard, and A ∩ B is the part of the segmentation map belonging to the lesion, i.e. the correctly predicted part; the closer Dice is to 1, the more accurate the segmentation result;

the precision is defined as formula (12):

$$P = \frac{TP}{TP + FP} \tag{12}$$

wherein TP represents the region in which the lesion is correctly segmented, FP represents the region in which non-lesion tissue is segmented as lesion, and P represents the percentage of the correctly segmented region within the segmentation result;

the recall is defined as formula (13):

$$R = \frac{TP}{TP + FN} \tag{13}$$

wherein TP represents the region in which the lesion is correctly segmented, FN represents the lesion region that is not correctly segmented, and R represents the percentage of the correctly segmented region within the original lesion region.
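The three evaluation indices of formulas (11)-(13) reduce to a few lines of NumPy on binary masks (an illustrative sketch; function and variable names are assumptions):

```python
import numpy as np

def evaluate(pred, gt):
    """Dice, precision, and recall for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # correctly segmented lesion
    fp = np.logical_and(pred, ~gt).sum()  # non-lesion segmented as lesion
    fn = np.logical_and(~pred, gt).sum()  # lesion missed
    dice = 2.0 * tp / (pred.sum() + gt.sum())
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall
```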
CN202010388777.5A 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image Active CN113706548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010388777.5A CN113706548B (en) 2020-05-09 2020-05-09 Method for automatically segmenting anterior mediastinum focus of chest based on CT image

Publications (2)

Publication Number Publication Date
CN113706548A (en) 2021-11-26
CN113706548B (en) 2023-08-22

Family

ID=78645296


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1912927A (en) * 2006-08-25 2007-02-14 西安理工大学 Semi-automatic partition method of lung CT image focus
CN104992445A (en) * 2015-07-20 2015-10-21 河北大学 Automatic division method for pulmonary parenchyma of CT image
CN106940816A (en) * 2017-03-22 2017-07-11 杭州健培科技有限公司 Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D
CN108027970A (en) * 2015-09-10 2018-05-11 爱克发医疗保健公司 Methods, devices and systems for the medical image for analyzing blood vessel
RU2656761C1 (en) * 2017-02-09 2018-06-06 Общество С Ограниченной Ответственностью "Сибнейро" Method and system of segmentation of lung foci images
CN109271992A (en) * 2018-09-26 2019-01-25 上海联影智能医疗科技有限公司 A kind of medical image processing method, system, device and computer readable storage medium
CN109410166A (en) * 2018-08-30 2019-03-01 中国科学院苏州生物医学工程技术研究所 Full-automatic partition method for pulmonary parenchyma CT image
CN109978886A (en) * 2019-04-01 2019-07-05 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

Title
GENG Huan; QIN Wenjun; YANG Jinzhu; CAO Peng; ZHAO Dazhe: "A Review of Lung Tissue Segmentation Methods Based on CT Images", Application Research of Computers, no. 07 *
CHEN Hui; XU Yan; MA Binrong: "Lung Parenchyma CT Image Segmentation for Pulmonary Nodule Detection", Chinese Journal of Medical Physics, vol. 25, no. 06 *


Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
Almajalid et al. Development of a deep-learning-based method for breast ultrasound image segmentation
Skourt et al. Lung CT image segmentation using deep neural networks
Gul et al. Deep learning techniques for liver and liver tumor segmentation: A review
CN110120048B (en) Three-dimensional brain tumor image segmentation method combining improved U-Net and CMF
Dubey et al. Evaluation of three methods for MRI brain tumor segmentation
CN112150428A (en) Medical image segmentation method based on deep learning
Pulagam et al. Automated lung segmentation from HRCT scans with diffuse parenchymal lung diseases
CN111640120A (en) Pancreas CT automatic segmentation method based on significance dense connection expansion convolution network
CN110619635B (en) Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
Tan et al. An approach for pulmonary vascular extraction from chest CT images
CN112767407A (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
CN113706486A (en) Pancreas tumor image segmentation method based on dense connection network migration learning
Motamed et al. A transfer learning approach for automated segmentation of prostate whole gland and transition zone in diffusion weighted MRI
Kriti et al. A review of Segmentation Algorithms Applied to B-Mode breast ultrasound images: a characterization Approach
CN110570430A (en) orbital bone tissue segmentation method based on body registration
CN116797612B (en) Ultrasonic image segmentation method and device based on weak supervision depth activity contour model
CN110533667B (en) Lung tumor CT image 3D segmentation method based on image pyramid fusion
CN112348826A (en) Interactive liver segmentation method based on geodesic distance and V-net
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study
CN113706548B (en) Method for automatically segmenting anterior mediastinum focus of chest based on CT image
Kim et al. A new hyper parameter of hounsfield unit range in liver segmentation
Erdt et al. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images
Ghofrani et al. Liver Segmentation in CT Images Using Deep Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant