CN111127504A - Heart medical image segmentation method and system for atrial septal occlusion patient - Google Patents
- Publication number: CN111127504A (application CN201911389822.2A)
- Authority: CN (China)
- Legal status: Granted
Classifications
- G06T7/174 — Segmentation; Edge detection involving the use of two or more images
- G06T7/11 — Region-based segmentation
- G06T2207/10088 — Magnetic resonance imaging [MRI]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30048 — Heart; Cardiac
- Y02T10/40 — Engine management systems
Abstract
The invention relates to a heart medical image segmentation method for patients with atrial septal occlusion, comprising the following steps: acquiring a cardiac MRI dataset of atrial septal occlusion patients and processing the dataset with a spectral analysis method; performing data enhancement on the processed MRI dataset, and performing binary classification segmentation on the enhanced dataset to obtain a correctly divided MRI dataset; fine-tuning a convolutional neural network model on the correctly divided MRI training dataset by transfer learning, so as to extract features useful for the subsequent medical image segmentation; and designing a U-Net architecture with the extracted features and using it to complete end-to-end, pixel-to-pixel medical image segmentation. The invention also relates to a heart medical image segmentation system for atrial septal occlusion patients. The invention can improve diagnostic efficiency in cardiovascular MRI examination, accurately segment cardiac medical images, and efficiently realize end-to-end object and background classification of cardiac MRI images.
Description
Technical Field
The invention relates to a heart medical image segmentation method and system for a patient with atrial septal occlusion.
Background
In Atrial Septal Defect (ASD) medical image analysis, the right atrium appears severely dilated because blood flows in through the septal defect from the left atrium, leading to an imbalance of blood volume between the two atria. Magnetic Resonance Imaging (MRI) is commonly used to diagnose this heart disease; however, a metal atrial septal occluder introduces ghosting artifacts in the region where it is placed, causing active contour segmentation to fail.
Kucera et al. implemented a reliable 3D active contour model over short- and long-axis views of the heart, proposing a region-based external force to segment the left ventricle. Sarti et al. proposed a region-based segmentation model that incorporates prior knowledge of the statistical distribution of gray levels, using a level set method to drive curve evolution toward the maximum-likelihood segmentation of the target with respect to the statistical distribution of image pixels. Boukerroui et al. proposed another region-based method built on an adaptive segmentation algorithm in which a weighting function accounts for both local and global statistics. Mishra et al. proposed an active contour model for segmenting the left ventricle in short-axis views by solving the optimization problem with a Genetic Algorithm (GA). Subsequently, Mignotte and Meunier proposed a multi-scale approach to contour optimization. Mitchell et al. performed three-dimensional Active Appearance Model (AAM) segmentation in transient ultrasound images. Bosch et al. proposed an Active Appearance Motion Model (AAMM) based on its predecessor, the AAM, and developed methods for segmenting the left ventricle over the complete cardiac cycle. Other well-established segmentation methods involve artificial neural networks, fuzzy multi-scale edge detectors, and Kalman filter-based tracking.
The classical snake model was originally proposed by Kass, Witkin, and Terzopoulos to synthesize the noisy filter responses produced by an edge detector into a coherent depiction of a perceived edge in an image. In this way, a boundary separating two image regions with different gray-level characteristics can be established. A semi-automatic method based on the Kass snake algorithm, which incorporates a region-based approach, has been used to segment the heart chambers in MR images. Such methods can overcome the poorly defined object boundaries common in ultrasound imaging, but they cannot complete the segmentation of MRI images whose object boundaries are poorly defined. Moreover, the traditional method (the active contour model) cannot accurately segment certain special MRI images, such as those of patients who have undergone cardiac surgery and carry metal stents or metal meshes; the metal objects appear as shadows in these patients' cardiac MRI images. The traditional method also cannot adequately cope with the relatively small scale of the training data available in this research.
Disclosure of Invention
In view of the above, it is desirable to provide a method and a system for segmenting a medical image of a heart of a patient with atrial septal occlusion.
The invention provides a heart medical image segmentation method for patients with atrial septal occlusion, comprising the following steps: a. acquiring a cardiac MRI dataset of atrial septal occlusion patients and processing the dataset with a spectral analysis method; b. performing data enhancement on the MRI dataset processed by the spectral analysis method, and performing binary classification segmentation on the enhanced dataset to obtain a correctly divided MRI dataset; c. fine-tuning a convolutional neural network model on the correctly divided MRI training dataset by transfer learning, so as to extract features useful for the subsequent medical image segmentation; d. designing a U-Net architecture with the extracted features and using it to complete end-to-end, pixel-to-pixel medical image segmentation.
Wherein, the step b specifically comprises the following steps:
applying data enhancement to the MRI dataset: horizontal and vertical shifts, random cropping, and the addition of color jitter and Gaussian noise;
segmenting the data-enhanced MRI dataset, treating the segmentation as a binary classification (0 and 1), with 1 representing a correct segmentation and 0 an incorrect one.
The step c specifically comprises the following steps:
selecting a pre-trained deep learning convolutional neural network, VGG16, as the encoder of the U-Net network by means of transfer learning;
initializing the weights with the ImageNet-trained values of the VGG16 pre-trained model;
modifying the output categories of the last layer of the VGG16 pre-trained model by fine-tuning, accelerating the learning rate of the last layer's parameters, and adjusting the solver configuration parameters.
The step d specifically comprises the following steps:
the left half of the U-Net architecture is the encoder, a contracting path that captures context and performs feature extraction;
the right half of the U-Net architecture is the decoder, a symmetric expanding path that enables precise localization;
and (4) segmenting the medical image by utilizing an encoder part and a decoder part of the U-Net framework and obtaining a segmentation result.
The segmentation result comprises:
true positives: the number of instances correctly classified as positive;
false positives: the number of instances wrongly classified as positive;
false negatives: the number of instances wrongly classified as negative;
true negatives: the number of instances correctly classified as negative.
The invention also provides a heart medical image segmentation system for patients with atrial septal occlusion, comprising an acquisition module, a dataset division module, a fine-tuning module, and an image segmentation module, wherein: the acquisition module acquires a cardiac MRI dataset of atrial septal occlusion patients and processes it with a spectral analysis method; the dataset division module performs data enhancement on the processed MRI dataset and binary classification segmentation on the enhanced dataset to obtain a correctly divided MRI dataset; the fine-tuning module fine-tunes a convolutional neural network model on the correctly divided MRI training dataset by transfer learning, so as to extract features useful for the subsequent medical image segmentation; and the image segmentation module designs a U-Net architecture with the extracted features and uses it to complete end-to-end, pixel-to-pixel medical image segmentation.
The data set partitioning module is specifically configured to:
applying data enhancement to the MRI dataset: horizontal and vertical shifts, random cropping, and the addition of color jitter and Gaussian noise;
segmenting the data-enhanced MRI dataset, treating the segmentation as a binary classification (0 and 1), with 1 representing a correct segmentation and 0 an incorrect one.
The fine tuning module is specifically configured to:
selecting a pre-trained deep learning convolutional neural network, VGG16, as the encoder of the U-Net network by means of transfer learning;
initializing the weights with the ImageNet-trained values of the VGG16 pre-trained model;
modifying the output categories of the last layer of the VGG16 pre-trained model by fine-tuning, accelerating the learning rate of the last layer's parameters, and adjusting the solver configuration parameters.
The image segmentation module is specifically configured to:
the left half of the U-Net architecture is the encoder, a contracting path that captures context and performs feature extraction;
the right half of the U-Net architecture is the decoder, a symmetric expanding path that enables precise localization;
and (4) segmenting the medical image by utilizing an encoder part and a decoder part of the U-Net framework and obtaining a segmentation result.
The segmentation result comprises:
true positives: the number of instances correctly classified as positive;
false positives: the number of instances wrongly classified as positive;
false negatives: the number of instances wrongly classified as negative;
true negatives: the number of instances correctly classified as negative.
The invention can improve diagnostic efficiency in cardiovascular MRI examination. It treats medical image segmentation as a binary classification problem and, with training data of relatively small scale, uses transfer learning to overcome the overfitting caused by insufficient training data for a convolutional neural network in the training stage. Traditional methods such as the active contour model cannot accurately segment special MRI images, for example cardiac MRI images of patients who have undergone heart surgery and carry a metal stent or metal mesh; by constructing a fully convolutional network based on the U-Net architecture, the invention segments such special MRI images accurately and realizes end-to-end object and background classification of cardiac MRI images more efficiently.
Drawings
FIG. 1 is a flow chart of a method for segmenting a medical image of a heart of a patient with atrial septal occlusion according to the present invention;
FIG. 2 is a schematic illustration of a spectral analysis method for processing an atrial septal occlusion patient cardiac MRI dataset, in accordance with an embodiment of the present invention;
fig. 3 is a schematic diagram of a deep learning convolutional neural network VGG16 used in the transfer learning according to the embodiment of the present invention;
FIG. 4 is a schematic diagram of a pre-training result of a deep learning convolutional neural network VGG16 according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a convolutional neural network based on a U-Net framework according to an embodiment of the present invention;
FIG. 6 is a diagram of the hardware architecture of the medical image segmentation system for atrial septal occlusion patient heart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, a flow chart of the operation of the method for segmenting a medical image of the heart of a patient with atrial septal occlusion according to the preferred embodiment of the present invention is shown.
Step S1, a cardiac MRI dataset of the atrial septal occlusion patient is acquired, processed using spectral analysis (please also refer to fig. 2), and divided into a training set and a test set. Specifically, the method comprises the following steps:
this example recruited 200 atrial septal occlusion patients during the experiment;
pre-operative and post-operative MRI datasets of the atrial septal occlusion patients were acquired using a Siemens MAGNETOM Avanto 1.5 T MRI scanner with Numaris-4 software;
each acquisition produced a single slice image using retrospective gating with 25 time-frame indices (nt = 1 to 25); the acquisition parameters were: matrix 256 × 256 pixels, TR = 47.1 ms, TE = 1.6 ms, FOV = 298 × 340 mm²;
of the 550 collected cardiac MRI datasets of atrial septal occlusion patients, 80% of the data were used as the training set and the remaining 20% as the test set.
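Under the stated 80/20 split of the 550 datasets, the division into training and test sets can be sketched as follows (the file names, the seed, and the shuffling step are hypothetical; the patent does not specify the splitting procedure):

```python
import random

def split_dataset(items, train_frac=0.8, seed=42):
    """Shuffle and split a list of MRI slice identifiers into train/test subsets."""
    rng = random.Random(seed)
    shuffled = items[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 550 datasets -> 440 for training, 110 for testing
datasets = [f"mri_{i:03d}" for i in range(550)]
train, test = split_dataset(datasets)
```

Shuffling before the split keeps pre- and post-operative scans from clustering at one end of the list.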
Step S2 is to perform data enhancement on the MRI data set processed by the spectral analysis method, and perform binary classification segmentation on the MRI data set after data enhancement, thereby obtaining an MRI data set that is correctly divided and an MRI data set that is incorrectly divided. Specifically, the method comprises the following steps:
and step S21, performing data enhancement, horizontal and vertical sliding, random cutting and color dithering and Gaussian noise addition on the MRI data set by adopting a data enhancement method. The method comprises the following steps:
step S211, sliding data of the MRI dataset horizontally and vertically;
the data of the MRI dataset are flipped horizontally or vertically using a toolkit flip command, and the slice image is randomly rotated by an arbitrary angle (0–360 degrees).
Step S212, randomly cropping the data of the MRI dataset;
In this embodiment, the slice image is randomly cropped to 2/3 of its original width and height using TensorFlow's random cropping function tf.random_crop.
Step S213, adding color dither and gaussian noise to the data of the MRI dataset;
Color jitter is applied to the slice image by adjusting its saturation, brightness, contrast, and sharpness, and Gaussian noise processing is applied to the image at the same time.
In step S22, the data-enhanced MRI dataset is segmented, and the segmentation is considered as binary classification, i.e. 0 and 1, with 1 representing a correct segmentation and 0 representing an incorrect segmentation.
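A minimal NumPy sketch of the step-S21 style augmentations (random flips, random cropping to 2/3 of the original size, additive Gaussian noise). Arbitrary-angle rotation and color jitter are omitted for brevity, and the noise level is an assumed illustrative value, not taken from the patent:

```python
import numpy as np

def augment(img, rng):
    """Apply flip, crop, and noise augmentations to one 2-D MRI slice."""
    if rng.random() < 0.5:                       # random horizontal flip
        img = np.fliplr(img)
    if rng.random() < 0.5:                       # random vertical flip
        img = np.flipud(img)
    # random crop to 2/3 of the original width and height
    h, w = img.shape
    ch, cw = (2 * h) // 3, (2 * w) // 3
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    img = img[top:top + ch, left:left + cw]
    # additive Gaussian noise (sigma chosen for illustration)
    return img + rng.normal(0.0, 0.05, img.shape)

rng = np.random.default_rng(0)
slice_img = rng.random((256, 256))   # stand-in for a 256 x 256 MRI slice
out = augment(slice_img, rng)
```

On a 256 × 256 slice the crop yields a 170 × 170 patch, matching the 2/3 ratio stated above.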
And step S3, fine tuning the convolution neural network model by adopting a transfer learning method for the correctly divided MRI training data set so as to extract the useful characteristics for the subsequent medical image segmentation.
Specifically, the method comprises the following steps:
step S31, selecting a pre-training model of a deep learning convolutional neural network VGG16 (please refer to fig. 3) as an encoder of the U-Net network by using a transfer learning method;
step S32, initializing the weights with the ImageNet-trained values of the VGG16 pre-trained model;
Step S33, in the training process, the output categories of the last layer of the VGG16 pre-trained model are modified by fine-tuning and the learning rate of the last layer's parameters is accelerated; the solver configuration parameters are adjusted. The pre-training result of the deep learning convolutional neural network VGG16 in this embodiment is shown in fig. 4:
Fine-tuning is a deep learning method in which the network's parameters are adjusted continuously until the convolutional network performs at its best. A prerequisite for fine-tuning is that the pre-trained model's weights already hold meaningful values. If the learning rate is too large, the weights are updated too quickly and what the pre-trained network has learned is destroyed; in this embodiment the learning rate is therefore set to 1 × 10⁻⁴.
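The accelerated last-layer learning rate of step S33 can be illustrated with a plain SGD update that applies a per-layer learning-rate multiplier. The layer names and the 10× multiplier are hypothetical; the patent only states the base rate of 1 × 10⁻⁴:

```python
import numpy as np

def sgd_step(params, grads, base_lr=1e-4, lr_mult=None):
    """One SGD update where selected layers get an accelerated learning rate."""
    lr_mult = lr_mult or {}
    return {name: w - base_lr * lr_mult.get(name, 1.0) * grads[name]
            for name, w in params.items()}

params = {"conv1": np.ones(3), "fc_last": np.ones(3)}
grads = {"conv1": np.ones(3), "fc_last": np.ones(3)}
# the newly initialized last layer learns 10x faster than the frozen-ish backbone
new = sgd_step(params, grads, base_lr=1e-4, lr_mult={"fc_last": 10.0})
```

The backbone barely moves while the replaced output layer adapts quickly, which is the usual rationale for this kind of fine-tuning schedule.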
In step S4, a U-Net structure is designed by using the extracted features useful for medical image segmentation (please refer to fig. 5), and end-to-end pixel-to-pixel medical image segmentation is completed by using the U-Net structure. Specifically, the method comprises the following steps:
Step S41, the left half of the U-Net architecture is the encoder, a contracting path that captures context and performs feature extraction, specifically comprising:
step S411, the encoder network uses the deep learning convolutional neural network VGG16 for feature extraction;
step S412, the fully connected layers of the encoder are removed and replaced with a single 512-channel convolutional layer;
step S42, the right half of the U-Net architecture is the decoder, a symmetric expanding path that enables precise localization, specifically comprising:
step S421, the decoder is built from transposed convolution layers, each of which doubles the size of the feature map and halves the number of channels;
step S422, the output of the transposed convolution is concatenated with the corresponding encoder output;
step S423, the up-sampling process is repeated 5 times to match the 5 max-pooling operations, restoring the output feature map to the input size; the custom loss function is:
where y_i is the ground-truth label of the i-th sample in a batch, y_i′ is the prediction produced by the neural network, x is the actual value, y is the predicted value, and a and b are constants.
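The decoder shape arithmetic described above (each transposed-convolution stage doubles the spatial size and halves the channel count, five times to mirror the five max pools) can be traced with a small sketch; the 8 × 8 × 512 bottleneck shape is an assumed example, not taken from the patent:

```python
def decoder_shapes(h, w, c, stages=5):
    """Trace (height, width, channels) through the transposed-conv decoder:
    each stage doubles the spatial size and halves the channel count."""
    shapes = [(h, w, c)]
    for _ in range(stages):
        h, w, c = h * 2, w * 2, c // 2
        shapes.append((h, w, c))
    return shapes

# hypothetical 8x8x512 bottleneck -> back to 256x256 after five up-sampling
# stages, mirroring five 2x2 max pools on a 256x256 input
shapes = decoder_shapes(8, 8, 512)
```

With five stages the spatial size grows by 2⁵ = 32 and the channels shrink by the same factor, so the output plane matches the input slice.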
Step S43, segmenting the medical image by using the encoder portion and the decoder portion of the U-Net architecture, and the obtained segmentation result specifically includes:
true positives (TP): the number of instances correctly classified as positive, i.e., instances that are actually positive and are classified as positive by the classifier;
false positives (FP): the number of instances wrongly classified as positive, i.e., instances that are actually negative but are classified as positive by the classifier;
false negatives (FN): the number of instances wrongly classified as negative, i.e., instances that are actually positive but are classified as negative by the classifier;
true negatives (TN): the number of instances correctly classified as negative, i.e., instances that are actually negative and are classified as negative by the classifier;
the similarity of the segmented images is evaluated by the following commonly used metrics in medical image segmentation:
The metrics include the Dice index, accuracy, and the Jaccard similarity coefficient. The Jaccard similarity coefficient compares the similarity and difference between finite sample sets; the larger the Jaccard coefficient, the higher the sample similarity.
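The TP/FP/FN/TN counts and the Dice, Jaccard, and accuracy metrics listed above can be computed from a pair of binary masks as in this sketch (the tiny 2 × 2 masks are illustrative only):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise confusion counts plus Dice, Jaccard, and accuracy for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # predicted object, actually object
    fp = np.sum(pred & ~truth)       # predicted object, actually background
    fn = np.sum(~pred & truth)       # predicted background, actually object
    tn = np.sum(~pred & ~truth)      # predicted background, actually background
    return dict(tp=int(tp), fp=int(fp), fn=int(fn), tn=int(tn),
                dice=2 * tp / (2 * tp + fp + fn),
                jaccard=tp / (tp + fp + fn),
                accuracy=(tp + tn) / (tp + fp + fn + tn))

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
m = segmentation_metrics(pred, truth)
```

Dice and Jaccard ignore true negatives, which is why they are preferred over plain accuracy when the object occupies a small fraction of the image, as a heart chamber does in an MRI slice.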
Referring to FIG. 6, a hardware architecture diagram of a medical image segmentation system 10 for atrial septal occlusion patient hearts according to the present invention is shown. The system comprises: the system comprises an acquisition module 101, a data set dividing module 102, a fine tuning module 103 and an image segmentation module 104.
The acquisition module 101 is configured to acquire an MRI dataset of a heart of a patient with atrial septal occlusion, process the MRI dataset using a spectral analysis method (please refer to fig. 2), and divide the MRI dataset into a training set and a test set. Specifically, the method comprises the following steps:
this example recruited 200 atrial septal occlusion patients during the experiment;
pre-operative and post-operative MRI datasets of the atrial septal occlusion patients were acquired using a Siemens MAGNETOM Avanto 1.5 T MRI scanner with Numaris-4 software;
each acquisition produced a single slice image using retrospective gating with 25 time-frame indices (nt = 1 to 25); the acquisition parameters were: matrix 256 × 256 pixels, TR = 47.1 ms, TE = 1.6 ms, FOV = 298 × 340 mm²;
of the 550 collected cardiac MRI datasets of atrial septal occlusion patients, 80% of the data were used as the training set and the remaining 20% as the test set.
The data set partitioning module 102 is configured to perform data enhancement on the MRI data set processed by the spectral analysis method, and perform binary classification partitioning on the data-enhanced MRI data set to obtain an MRI data set that is correctly partitioned and an MRI data set that is incorrectly partitioned.
Specifically, the method comprises the following steps:
the data set partitioning module 102 performs data enhancement, horizontal and vertical sliding, random cropping, and color dithering and gaussian noise increase on the MRI data set by using a data enhancement method. The method comprises the following steps:
The data of the MRI dataset are slid horizontally and vertically:
the data are flipped horizontally or vertically using a toolkit flip command, and the slice image is randomly rotated by an arbitrary angle (0–360 degrees);
the data of the MRI dataset are randomly cropped:
in this embodiment, the slice image is randomly cropped to 2/3 of its original width and height using TensorFlow's random cropping function tf.random_crop;
color jitter and Gaussian noise are added to the data of the MRI dataset:
color jitter is applied to the slice image by adjusting its saturation, brightness, contrast, and sharpness, and Gaussian noise processing is applied to the image at the same time.
The data set partitioning module 102 partitions the data-enhanced MRI data set by considering the partitioning as binary classification, i.e., 0 and 1, with 1 representing being correctly partitioned and 0 representing being incorrectly partitioned.
The fine tuning module 103 is configured to fine tune the convolutional neural network model by using a transfer learning method for the correctly divided MRI training data set, so as to extract features useful for subsequent medical image segmentation. Specifically, the method comprises the following steps:
the fine tuning module 103 selects a pre-training model of a deep learning convolutional neural network VGG16 (please refer to fig. 3) as an encoder of the U-Net network by using a transfer learning method;
The fine-tuning module 103 initializes the weights with the ImageNet-trained values of the VGG16 pre-trained model;
in the training process, the fine-tuning module 103 modifies the output categories of the last layer of the VGG16 pre-trained model by fine-tuning and accelerates the learning rate of the last layer's parameters; the solver configuration parameters are adjusted. The pre-training result of the deep learning convolutional neural network VGG16 in this embodiment is shown in fig. 4:
Fine-tuning is a deep learning method in which the network's parameters are adjusted continuously until the convolutional network performs at its best. A prerequisite for fine-tuning is that the pre-trained model's weights already hold meaningful values. If the learning rate is too large, the weights are updated too quickly and what the pre-trained network has learned is destroyed; in this embodiment the learning rate is therefore set to 1 × 10⁻⁴.
The image segmentation module 104 is configured to design a U-Net structure by using the extracted features useful for medical image segmentation (please refer to fig. 5), and complete end-to-end pixel-to-pixel medical image segmentation by using the U-Net structure. Specifically, the method comprises the following steps:
the image segmentation module 104 performs feature extraction, that is, the left half of the U-Net architecture is an encoder part, and the encoder captures a contraction path of a context to perform feature extraction, specifically including:
the network of the encoder adopts a deep learning convolutional neural network VGG16 to extract features;
the encoder part removes the fully connected layers and replaces them with a single convolutional layer of 512 channels;
the image segmentation module 104 constructs a decoder part of a U-Net architecture, a right half of the U-Net architecture is the decoder part, and the decoder performs accurate positioning on symmetric extension paths, specifically including:
the image segmentation module 104 constructs the decoder part with transposed convolutional layers, so that the size of the feature map is doubled and the number of channels is halved;
the image segmentation module 104 concatenates the output of each transposed convolution with the output of the corresponding encoder stage and feeds the result to the next decoder block;
the image segmentation module 104 repeats the upsampling process 5 times, pairing with the 5 max-pooling layers of the encoder, so that the output feature map is restored to the size of the input image; the custom loss function is:
wherein yi is the ground-truth label of the i-th sample in a batch, yi′ is the value predicted by the neural network, x is the actual value, y is the predicted value, and a and b are constants.
The image segmentation module 104 segments the medical image by using an encoder part and a decoder part of a U-Net architecture, and the obtained segmentation result specifically includes:
True Positives (TP): the number of instances correctly classified as positive, i.e., instances that are actually positive and are classified as positive by the classifier;
False Positives (FP): the number of instances wrongly classified as positive, i.e., instances that are actually negative but are classified as positive by the classifier;
False Negatives (FN): the number of instances wrongly classified as negative, i.e., instances that are actually positive but are classified as negative by the classifier;
True Negatives (TN): the number of instances correctly classified as negative, i.e., instances that are actually negative and are classified as negative by the classifier.
the similarity of the segmented images is evaluated by the following commonly used metrics in medical image segmentation:
The metrics include the Dice index, accuracy, and the Jaccard similarity coefficient. The Jaccard similarity coefficient is used to compare the similarity and difference between finite sample sets; the larger the Jaccard coefficient, the higher the similarity of the samples.
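The counts and metrics above can be computed directly from a predicted and a ground-truth binary mask. A minimal NumPy sketch (illustrative only, not the patent's code; it assumes the masks are not both empty so no denominator is zero):

```python
import numpy as np


def segmentation_metrics(pred, truth):
    """Dice index, pixel accuracy and Jaccard coefficient for binary masks (0/1 arrays)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # true positives: predicted 1, actually 1
    fp = np.sum(pred & ~truth)   # false positives: predicted 1, actually 0
    fn = np.sum(~pred & truth)   # false negatives: predicted 0, actually 1
    tn = np.sum(~pred & ~truth)  # true negatives: predicted 0, actually 0
    dice = 2 * tp / (2 * tp + fp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    jaccard = tp / (tp + fp + fn)
    return dice, accuracy, jaccard
```

For example, pred = [[1, 1], [0, 0]] against truth = [[1, 0], [0, 0]] gives TP=1, FP=1, FN=0, TN=2, hence Dice 2/3, accuracy 0.75 and Jaccard 0.5.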
Although the present invention has been described with reference to the presently preferred embodiments, it will be understood by those skilled in the art that the foregoing description is illustrative only and is not intended to limit the scope of the invention, as claimed.
Claims (10)
1. A method for segmenting a medical image of a heart of a patient with atrial septal occlusion, the method comprising the steps of:
a. acquiring an atrial septal occlusion patient cardiac MRI dataset, and processing the MRI dataset by using a spectral analysis method;
b. carrying out data enhancement on the MRI data set processed by the spectral analysis method, and carrying out binary classification segmentation on the data-enhanced MRI data set to obtain a correctly divided MRI data set;
c. finely adjusting the convolutional neural network model by adopting a transfer learning method for the correctly divided MRI training data set so as to extract the characteristics useful for the subsequent medical image segmentation;
d. and designing a U-Net framework by using the extracted features useful for medical image segmentation, and completing end-to-end pixel-to-pixel medical image segmentation by using the U-Net framework.
2. The method according to claim 1, wherein said step b specifically comprises:
performing data enhancement on the MRI data set, including horizontal and vertical shifting and random cropping, and adding color jitter and Gaussian noise;
the data enhanced MRI dataset is segmented and the segmentation is considered as a binary classification, i.e. 0 and 1, with 1 representing correctly segmented and 0 representing incorrectly segmented.
3. The method according to claim 2, wherein said step c specifically comprises:
selecting a pre-training model of a deep learning convolutional neural network VGG16 as an encoder of the U-Net network by using a transfer learning method;
initializing ImageNet weight by utilizing a pre-training model of a deep learning convolutional neural network VGG 16;
modifying the output category of the last layer of the deep learning convolutional neural network VGG16 pre-training model by adopting a fine tuning method, and accelerating the parameter learning rate of the last layer; and adjusting the configuration parameters of the Solver.
4. The method according to claim 3, wherein said step d comprises the steps of:
the left half part of the U-Net framework is an encoder part, and the encoder captures a contraction path of a context and performs feature extraction;
the right half of the U-Net architecture is a decoder part, and the decoder carries out accurate positioning of symmetrical extension paths;
and (4) segmenting the medical image by utilizing an encoder part and a decoder part of the U-Net framework and obtaining a segmentation result.
5. The method of claim 4, wherein the segmentation result comprises:
true positives: the number of instances correctly divided into positive cases;
false positive: the number of instances wrongly divided into positive instances;
false negatives: the number of instances wrongly divided into negative cases;
true negatives: is correctly divided into the number of negative examples.
6. A cardiac medical image segmentation system for a patient with atrial septal occlusion, characterized in that the system comprises an acquisition module, a data set partitioning module, a fine tuning module and an image segmentation module, wherein:
the acquisition module is used for acquiring an atrial septal occlusion patient heart MRI dataset and processing the MRI dataset by using a spectral analysis method;
the data set dividing module is used for performing data enhancement on the MRI data set processed by the spectral analysis method and performing binary classification segmentation on the MRI data set subjected to data enhancement to obtain a correctly divided MRI data set;
the fine tuning module is used for fine tuning the convolutional neural network model by adopting a transfer learning method according to the correctly divided MRI training data set so as to extract the useful characteristics for the subsequent medical image segmentation;
the image segmentation module is used for designing a U-Net framework by utilizing the extracted features which are useful for medical image segmentation, and completing end-to-end pixel-to-pixel medical image segmentation by utilizing the U-Net framework.
7. The system of claim 6, wherein the dataset partitioning module is specifically configured to:
performing data enhancement on the MRI data set, including horizontal and vertical shifting and random cropping, and adding color jitter and Gaussian noise;
the data enhanced MRI dataset is segmented and the segmentation is considered as a binary classification, i.e. 0 and 1, with 1 representing correctly segmented and 0 representing incorrectly segmented.
8. The system of claim 7, wherein the fine-tuning module is specifically configured to:
selecting a pre-training model of a deep learning convolutional neural network VGG16 as an encoder of the U-Net network by using a transfer learning method;
initializing ImageNet weight by utilizing a pre-training model of a deep learning convolutional neural network VGG 16;
modifying the output category of the last layer of the deep learning convolutional neural network VGG16 pre-training model by adopting a fine tuning method, and accelerating the parameter learning rate of the last layer; and adjusting the configuration parameters of the Solver.
9. The system of claim 8, wherein the image segmentation module is specifically configured to:
the left half part of the U-Net framework is an encoder part, and the encoder captures a contraction path of a context and performs feature extraction;
the right half of the U-Net architecture is a decoder part, and the decoder carries out accurate positioning of symmetrical extension paths;
and (4) segmenting the medical image by utilizing an encoder part and a decoder part of the U-Net framework and obtaining a segmentation result.
10. The system of claim 9, wherein the segmentation result comprises:
true positives: the number of instances correctly divided into positive cases;
false positive: the number of instances wrongly divided into positive instances;
false negatives: the number of instances wrongly divided into negative cases;
true negatives: is correctly divided into the number of negative examples.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911389822.2A CN111127504B (en) | 2019-12-28 | 2019-12-28 | Method and system for segmenting heart medical image of patient with atrial septal occlusion |
PCT/CN2020/129400 WO2021129234A1 (en) | 2019-12-28 | 2020-11-17 | Cardiac medicine image segmentation method and system for atrial septal occlusion patient |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911389822.2A CN111127504B (en) | 2019-12-28 | 2019-12-28 | Method and system for segmenting heart medical image of patient with atrial septal occlusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111127504A true CN111127504A (en) | 2020-05-08 |
CN111127504B CN111127504B (en) | 2024-02-09 |
Family
ID=70504584
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911389822.2A Active CN111127504B (en) | 2019-12-28 | 2019-12-28 | Method and system for segmenting heart medical image of patient with atrial septal occlusion |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111127504B (en) |
WO (1) | WO2021129234A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118334356B (en) * | 2024-06-14 | 2024-08-20 | 华中科技大学同济医学院附属同济医院 | Automatic fat and muscle region segmentation method in MRI (magnetic resonance imaging) based on migration learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170109881A1 (en) * | 2015-10-14 | 2017-04-20 | The Regents Of The University Of California | Automated segmentation of organ chambers using deep learning methods from medical imaging |
CN108492286A (en) * | 2018-03-13 | 2018-09-04 | 成都大学 | A kind of medical image cutting method based on the U-shaped convolutional neural networks of binary channel |
US20190205606A1 (en) * | 2016-07-21 | 2019-07-04 | Siemens Healthcare Gmbh | Method and system for artificial intelligence based medical image segmentation |
CN110570432A (en) * | 2019-08-23 | 2019-12-13 | 北京工业大学 | CT image liver tumor segmentation method based on deep learning |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109215035B (en) * | 2018-07-16 | 2021-12-03 | 江南大学 | Brain MRI hippocampus three-dimensional segmentation method based on deep learning |
CN110619641A (en) * | 2019-09-02 | 2019-12-27 | 南京信息工程大学 | Automatic segmentation method of three-dimensional breast cancer nuclear magnetic resonance image tumor region based on deep learning |
CN111127504B (en) * | 2019-12-28 | 2024-02-09 | 中国科学院深圳先进技术研究院 | Method and system for segmenting heart medical image of patient with atrial septal occlusion |
- 2019-12-28 CN CN201911389822.2A patent/CN111127504B/en active Active
- 2020-11-17 WO PCT/CN2020/129400 patent/WO2021129234A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
Ye Chen; Zhao Zuopeng; Ma Xiaoping; Hu Yanjun; Liu Yi; Zhao Haihan: "Thyroid nodule detection method based on CNN transfer learning", Computer Engineering and Applications, no. 22 *
Ma Jinlin; Wei Meng; Ma Ziping: "Pulmonary nodule segmentation method based on deep transfer learning", Journal of Computer Applications, no. 07 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021129234A1 (en) * | 2019-12-28 | 2021-07-01 | 中国科学院深圳先进技术研究院 | Cardiac medicine image segmentation method and system for atrial septal occlusion patient |
CN111584093A (en) * | 2020-05-12 | 2020-08-25 | 鲁东大学 | Method and device for constructing left ventricle geometric model for evaluating curative effect of injectable hydrogel |
CN111739000A (en) * | 2020-06-16 | 2020-10-02 | 山东大学 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
CN111739000B (en) * | 2020-06-16 | 2022-09-13 | 山东大学 | System and device for improving left ventricle segmentation accuracy of multiple cardiac views |
CN111915557A (en) * | 2020-06-23 | 2020-11-10 | 杭州深睿博联科技有限公司 | Deep learning atrial septal defect detection method and device |
CN113205528A (en) * | 2021-04-02 | 2021-08-03 | 上海慧虎信息科技有限公司 | Medical image segmentation model training method, segmentation method and device |
CN113205528B (en) * | 2021-04-02 | 2023-07-07 | 上海慧虎信息科技有限公司 | Medical image segmentation model training method, segmentation method and device |
CN113379682A (en) * | 2021-05-21 | 2021-09-10 | 郑州大学 | Heart MRI image coupling level set segmentation method and system |
CN113379682B (en) * | 2021-05-21 | 2022-10-04 | 郑州大学 | Heart MRI image coupling level set segmentation method and system |
CN113555089A (en) * | 2021-07-14 | 2021-10-26 | 江苏宏创信息科技有限公司 | Artificial intelligence medical image quality control method applied to clinical image |
Also Published As
Publication number | Publication date |
---|---|
WO2021129234A1 (en) | 2021-07-01 |
CN111127504B (en) | 2024-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111127504B (en) | Method and system for segmenting heart medical image of patient with atrial septal occlusion | |
Armanious et al. | Unsupervised medical image translation using cycle-MedGAN | |
Kumar et al. | U-segnet: fully convolutional neural network based automated brain tissue segmentation tool | |
CN109584254B (en) | Heart left ventricle segmentation method based on deep full convolution neural network | |
CN110889853B (en) | Tumor segmentation method based on residual error-attention deep neural network | |
CN110889852B (en) | Liver segmentation method based on residual error-attention deep neural network | |
CN110930416B (en) | MRI image prostate segmentation method based on U-shaped network | |
EP0990222B1 (en) | Image processing method and system involving contour detection steps | |
CN107563434B (en) | Brain MRI image classification method and device based on three-dimensional convolutional neural network | |
CN111401480A (en) | Novel breast MRI (magnetic resonance imaging) automatic auxiliary diagnosis method based on fusion attention mechanism | |
CN110705555A (en) | Abdomen multi-organ nuclear magnetic resonance image segmentation method, system and medium based on FCN | |
US9875570B2 (en) | Method for processing image data representing a three-dimensional volume | |
CN112052877B (en) | Picture fine granularity classification method based on cascade enhancement network | |
CN113888520A (en) | System and method for generating a bullseye chart | |
KR102466061B1 (en) | Apparatus for denoising using hierarchical generative adversarial network and method thereof | |
CN111340699B (en) | Magnetic resonance image denoising method and device based on non-local prior and sparse representation | |
CN116894783A (en) | Metal artifact removal method for countermeasure generation network model based on time-varying constraint | |
Xing et al. | The Beauty or the Beast: Which Aspect of Synthetic Medical Images Deserves Our Focus? | |
Arora et al. | Noise adaptive FCM algorithm for segmentation of MRI brain images using local and non-local spatial information | |
CN115601535A (en) | Chest radiograph abnormal recognition domain self-adaption method and system combining Wasserstein distance and difference measurement | |
Shao et al. | Semantic segmentation method of 3D liver image based on contextual attention model | |
Carmo et al. | Extended 2d volumetric consensus hippocampus segmentation | |
Ribeiro et al. | Evaluating the pre-processing impact on the generalization of deep learning networks for left ventricle segmentation | |
Gautam et al. | Implementation of NLM and PNLM for de-noising of MRI images | |
CN113538451B (en) | Method and device for segmenting magnetic resonance image of deep vein thrombosis, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||