CN115409857A - Three-dimensional hydrocephalus CT image segmentation method based on deep learning - Google Patents


Info

Publication number
CN115409857A
CN115409857A
Authority
CN
China
Prior art keywords
data
model
hydrocephalus
deep learning
residual
Prior art date
Legal status
Withdrawn
Application number
CN202211215390.5A
Other languages
Chinese (zh)
Inventor
颜成钢
张帅杰
刘思危
孙垚棋
陈楚翘
王鸿奎
胡冀
高宇涵
朱尊杰
何敏
殷海兵
张继勇
李宗鹏
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202211215390.5A priority Critical patent/CN115409857A/en
Publication of CN115409857A publication Critical patent/CN115409857A/en

Classifications

    • G06T 7/11 — Region-based segmentation (under G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection)
    • G06N 3/08 — Learning methods (under G06N 3/00 Computing arrangements based on biological models; G06N 3/02 Neural networks)
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/10081 — Computed X-ray tomography [CT] (under G06T 2207/10072 Tomographic images)
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30016 — Brain (under G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for segmenting three-dimensional hydrocephalus CT images, comprising the following steps: step (1), acquiring a data set; step (2), preprocessing the data; step (3), constructing a residual U-Net convolutional network model; step (4), training the constructed residual U-Net convolutional network model on the preprocessed data; and step (5), inputting test data into the trained residual U-Net convolutional network model and testing the model. The method introduces residual convolution as the basic convolution unit to enhance the robustness of the segmentation model, and constructs a deep-learning-based three-dimensional segmentation model for the ventricular region in CT images. It fully exploits the advantages of the data in three-dimensional space, explores what higher-dimensional information can contribute to the model, and improves the accuracy of the final result.

Description

Three-dimensional hydrocephalus CT image segmentation method based on deep learning
Technical Field
The invention relates to the fields of computer vision and deep-learning-based medical image region segmentation; professional physicians were also consulted to evaluate postoperative recovery indicators.
Background
Hydrocephalus is a brain disease that usually manifests as abnormal expansion of the cerebral ventricles, which in turn compresses other brain tissue and carries great risk. Quantitative evaluation of the ventricles is essential for early detection and therapy monitoring, and the related work must be based on segmentation of regions of interest (ROIs) in medical images; however, relatively little research has addressed this so far.
The ventricles often exhibit irregular boundaries and blurred regions in medical images, which makes the related work difficult. Traditional methods can only target objects with fixed characteristics and struggle to extract targets with polymorphic variation, so they are ill-suited to this task. Deep Learning (DL) is an algorithmic model inspired by human neurons; its most prominent characteristic is that it can complete a task by self-learning, without hand-designed data features, and its strong fitting capacity gives it great advantages over traditional methods. Researchers have therefore begun to study the segmentation of ventricular regions in medical images with deep-learning methods.
Klimont et al. applied a U-Net-based convolutional neural network and showed that a deep-learning-based method can achieve near-human performance in ventricular segmentation of CT images, demonstrating the practical potential of deep-learning models, encouraging further research in the field, and promoting deployment in clinical environments. The study used CT images as data, and its methods included the 1cycle learning-rate strategy, transfer learning, a generalized Dice loss function, mixed floating-point precision, self-attention, and data augmentation. However, the study used a small amount of data, lacked statistical validation, and used only two-dimensional images without exploring the advantages of medical images in three-dimensional space.
Ono et al. proposed a U-Net method based on 2.5-dimensional space and a transfer-learning-based deep-learning method for segmenting the ventricles of infants with hydrocephalus in medical images. The study applied a network architecture combining low-level and high-level features to improve learning efficiency and preserve correlation along the slice direction, and processed the data of hydrocephalus infants by transfer learning from a large adult data set. The study demonstrated the benefit of higher-dimensional information for model performance and the importance of data for obtaining robust models, but it did not work with fully three-dimensional data.
There is currently no research on quantitative estimation of hydrocephalus based on three-dimensional CT medical images. In this study, head CT images of hydrocephalus patients are acquired before and after treatment, and a deep-learning-based method combined with the characteristic information of the medical images is used to segment the ventricular region in the CT images. This quantifies the ventricular region effectively, supports evaluation of the quality of treatment after hydrocephalus surgery, and increases the clinical diagnostic value of the medical images.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a three-dimensional hydrocephalus CT image segmentation method based on deep learning. The method can more conveniently and quickly judge the postoperative recovery condition of the patient and help the doctor to make a judgment more quickly and accurately.
The invention is based on a U-Net model, introduces residual processing unit blocks, and performs segmentation, labeling, and quantitative evaluation on adult head CT images. To segment the ROI in a medical image accurately, a deep-learning model with a U-Net structure is adopted. U-Net is a deep-learning framework designed specifically for biomedical image segmentation, whose overall architecture combines an encoding structure and a decoding structure. This model architecture has enjoyed great success over traditional models. The hierarchical structure of the U-Net model lets it generate multi-level features of an image, effectively extracting feature information at multiple scales, and its top-down information flow blends large-receptive-field features into small-receptive-field features so that global information is used effectively. Another characteristic of the U-Net framework is its long skip connections, which counter the loss of spatial information caused by downsampling in traditional models and give the model higher segmentation precision on image details. The U-Net framework is widely applied in biomedical image tasks and shows excellent performance.
A deep-learning model is composed of a large number of operators, and when the model becomes deep enough, feature values drift as computation proceeds. This drift is harmful to back-propagation and can lead to vanishing or exploding gradients, so the feature values must be controlled. Batch Normalization (BN) is an effective normalization method, but its performance depends on the overall distribution of the data, requiring a sufficiently large batch for each training step. In this study the model is trained on three-dimensional data, so large-batch input is impractical in a single step and cannot reflect the distribution of the data as a whole. Instance Normalization (IN) is a variant of Batch Normalization that depends only on the distribution of a single instance; it is an effective replacement for Batch Normalization and is commonly used in models for large-instance tasks.
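The difference between the two schemes can be made concrete with a small pure-Python sketch (illustrative only, not the patent's implementation): Batch Normalization pools statistics over every instance in the batch, while Instance Normalization computes them independently per instance, so its output does not depend on batch size.

```python
import math

def normalize(values, eps=1e-5):
    """Shift and scale a list of numbers to zero mean and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / math.sqrt(var + eps) for v in values]

def batch_norm(batch):
    """Batch Normalization for one channel: statistics pooled over the whole
    batch. `batch` is a list of equal-length instances (flat feature lists)."""
    pooled = [v for instance in batch for v in instance]
    flat = normalize(pooled)
    n = len(batch[0])
    return [flat[i * n:(i + 1) * n] for i in range(len(batch))]

def instance_norm(batch):
    """Instance Normalization: statistics computed per instance, so the result
    for one instance is unaffected by which other instances share the batch."""
    return [normalize(instance) for instance in batch]
```

With a batch size of 1, as is typical for large three-dimensional volumes, BN's estimate of the overall data distribution becomes meaningless, while IN behaves identically at any batch size.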
Longitudinal deep supervision is adopted to enhance the feature expression at different levels of the U-Net model. A deep-learning model learns its optimal weight distribution by fitting the data, so even a structure designed with a particular inductive bias does not necessarily exhibit the intended effect. Applying supervision at the final layers of the different levels of the U-Net model forms longitudinal deep supervision, so that each level of the model directly learns its specific contribution to the final result and exhibits the expected multi-level feature expression. This method has proven effective in many experiments.
A three-dimensional hydrocephalus CT image segmentation method based on deep learning comprises the following steps:
step (1), acquiring a data set;
step (2), preprocessing data;
step (3), constructing a residual U-net convolution network model;
step (4), training the constructed residual U-net convolution network model through the preprocessed data;
and (5) inputting test data into the trained residual U-net convolution network model, and testing the model.
Further, the specific method of step (1) is as follows:
Medical image data of head CT scans of hydrocephalus patients are acquired.
A case-retrieval system is used to find the records of patients treated for hydrocephalus in recent years; the information of 197 patients in total is screened and valid case data are acquired. The data of each patient comprise the medical images used before and after surgery and during clinical diagnosis, which ensures that the research data match the actual clinical environment; carrying out the subsequent research on real clinical data makes the results match actual use cases, keeps the workflow efficient, and avoids unnecessary burden. The head CT images of hydrocephalus patients are manually segmented by professional physicians and used as labels to construct the data set.
Further, the specific method of step (2) is as follows:
The data are preprocessed, including data resampling, window width and window level adjustment, background removal, and data standardization.
(1) Data resampling:
The spatial resolution of all data is resampled to the median spatial resolution of the overall data.
(2) Window width and window level adjustment:
CT images typically have thousands of gray levels, but not all of them are effectively expressed. The window width and window level are adjusted according to the recommended gray-level range embedded in the CT image.
(3) Background removal:
After z-score standardization of the windowed image, the region with values greater than 0 is taken as the foreground, and the bounding-box region containing the foreground is selected as the background-removed data.
(4) Data standardization:
z-score standardization is performed on the background-removed data.
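Sub-steps (2)–(4) can be illustrated with the following minimal sketch of windowing, z-score standardization, and foreground bounding; the window level and width used in the example are hypothetical values for illustration, not values stated in the patent.

```python
def apply_window(hu_values, level, width):
    """Clip CT intensities (HU) to the window [level - width/2, level + width/2]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return [min(max(v, lo), hi) for v in hu_values]

def z_score(values, eps=1e-8):
    """z-score standardization: zero mean, unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return [(v - mean) / (var ** 0.5 + eps) for v in values]

def foreground_bounds(values):
    """Bounding span of above-zero (foreground) values, mirroring the rule of
    keeping the box where z-scored values are greater than 0."""
    idx = [i for i, v in enumerate(values) if v > 0]
    return (min(idx), max(idx)) if idx else None
```

In practice these operations run over full 3-D volumes; the 1-D lists here only make the arithmetic explicit.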
Further, the specific method of step (3) is as follows:
A deep-learning model based on the U-Net structure is adopted, comprising residual convolution units, a downsampling encoder, an upsampling decoder, and long skip connections. To effectively improve the robustness of the model, residual convolution is used as the basic convolution unit, and Instance Normalization is used to control the value distribution of the feature maps so that the model can learn effectively.
The downsampling encoder comprises repeated convolution and pooling layers; the depth of the model is one of the factors affecting its accuracy, and the convolution layers of the downsampling encoder adopt residual convolution, which suits the model better.
The upsampling decoder restores the image size through repeated deconvolution and convolution layers.
Feature maps are concatenated between corresponding levels through long skip connections, reducing the loss of features.
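A residual convolution unit adds its input back onto the convolved features, so gradients can flow through the identity shortcut in deep networks. The following one-dimensional pure-Python sketch is a stand-in for the patent's three-dimensional residual blocks (which would also include Instance Normalization), showing only the shortcut structure:

```python
def conv1d_same(x, kernel):
    """1-D convolution with zero padding so the output length equals the input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k)) for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def residual_unit(x, kernel):
    """Residual convolution unit: output = activation(conv(x)) + x."""
    return [f + s for f, s in zip(relu(conv1d_same(x, kernel)), x)]
```

Note that with an all-zero kernel the unit reduces to the identity mapping, which is exactly the property that makes residual stacks easy to optimize.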
Further, the specific method of step (4) is as follows:
The preprocessed data are divided into a training set and a test set at a ratio of 8:2, and the constructed residual U-Net convolutional network model is trained on the training set.
Model training uses patch cropping and data augmentation. Based on statistics of the size of the ventricular region in the data, a patch of 16 × 256 × 256 pixels is finally selected as the model input. Mirroring, brightness change, and Gaussian noise are applied in sequence to the input training data as data augmentation.
The model is trained with the Adam gradient-descent method. Instance Normalization is used to control the value distribution of the feature maps so that the model can learn effectively, and longitudinal deep supervision is adopted to enhance the feature expression at different levels of the U-Net model.
During training, the overall loss of the model must be minimized so that the model converges; the Dice loss function is used to compute the loss, with the Dice coefficient measuring the degree of overlap between two samples.
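The Dice machinery can be sketched as follows; this is the generic formulation of the Dice coefficient and Dice loss over flattened binary (or soft) masks, not code taken from the patent:

```python
def dice_coefficient(pred, target, eps=1e-6):
    """Dice coefficient 2*|A ∩ B| / (|A| + |B|) between two flattened masks;
    eps avoids division by zero when both masks are empty."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

def dice_loss(pred, target):
    """Dice loss = 1 - Dice coefficient; minimized during training."""
    return 1.0 - dice_coefficient(pred, target)
```

Perfect overlap gives a coefficient of 1 (loss 0), disjoint masks give a coefficient near 0 (loss near 1).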
The invention has the following beneficial effects:
The improvements and advantages of the invention over the prior art are as follows:
1. Residual convolution is introduced as the basic convolution unit to enhance the robustness of the segmentation model.
2. The invention takes patients' three-dimensional CT image data as the research object, constructs a method for segmenting hydrocephalus, exploits the latent diagnostic information of the medical images, and provides assistance for clinical diagnosis.
3. The method constructs a deep-learning-based three-dimensional segmentation model for the ventricular region in CT images, fully exploits the advantages of the data in three-dimensional space, explores what higher-dimensional information can contribute to the model, and improves the accuracy of the final result.
4. The model parameters are constructed adaptively to the data, so the overall scheme better matches the distribution characteristics of the data, captures the data features fully, and obtains the diagnostic information of the data more accurately.
Drawings
FIG. 1 is a schematic diagram of the residual U-Net convolutional network structure according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the residual basic processing unit according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the algorithm structure of BN (left) and IN (right);
FIG. 4 is a flowchart of data processing and model building according to an embodiment of the present invention.
Detailed Description
The method of the invention is further described below with reference to the accompanying drawings and examples.
In the present invention, a general data preprocessing and model generation and training procedure based on data distribution is used. The whole flow is shown in fig. 4.
The invention adopts a general data preprocessing flow, which ensures that the characteristics of the data are not damaged and supports efficient model training. Although a deep-learning model can accomplish its task through learning alone, data preprocessing still plays an important role: effective preprocessing reduces the difficulty of learning and increases the model's ability to fit the intended expression. The preprocessing flow comprises data resampling, window width and window level adjustment, background removal, and data value standardization.
A) Data resampling. When CT scans are taken, the spatial resolutions of the images cannot be guaranteed to be identical, and such differences are harmful to model learning, so the spatial resolutions of all data are first resampled to a uniform value.
B) Window width and window level adjustment. CT medical images typically have thousands of gray levels, but not all of them are effectively expressed. A recommended gray-level range is typically embedded in the CT file and is used in this study.
C) Background removal. CT images usually contain large non-foreground regions, which greatly increase memory occupation and computational cost; removing the background during preprocessing therefore benefits subsequent computation with no adverse effect.
D) Data value standardization. Standardization draws the data toward a common distribution space and smooths its expression, which benefits model learning.
Model training uses patch cropping and data augmentation. The invention takes three-dimensional data as the research object to fully acquire information from three-dimensional space and enhance the perception capability of the model, but this brings a huge computational load, and the preprocessed data differ in size. Therefore, the size of the image patch is determined from the statistical characteristics of the preprocessed data, and during training patches are selected according to the spatial distribution of the ventricles in the head, so that the structural information of the image is used effectively and context information is not lost. The essential principle of a deep-learning model is to aggregate data characteristics statistically and use them as the data representation, so richer data characteristics benefit the model, and data augmentation is an effective way to enrich them. Perturbing the data within its natural variation during training effectively enhances the robustness of the model. During training, patch extraction and data augmentation are performed online, which varies the data fully and reduces the storage requirements of the computer. Fig. 3 is a schematic diagram of the algorithm structure of BN (left) and IN (right).
A three-dimensional hydrocephalus CT image segmentation method based on deep learning comprises the following steps:
step (1), acquiring a data set;
medical image data of a head CT of a hydrocephalus patient is acquired.
And a case searching system is used for finding the record of the patients who have been treated with hydrocephalus in recent years, screening the information of 197 patients in total, and acquiring effective case data. The data of each patient comprises medical image data used before and after an operation and during clinical diagnosis, the condition of research data is ensured to be in accordance with the actual clinical environment, and subsequent research is carried out based on the clinical actual data, so that the research result is in accordance with the actual use case, the flow is efficient, and unnecessary burden is reduced. A medical image of a head CT of a hydrocephalus patient is manually segmented by a professional doctor to be used as a label to construct a data set.
Step (2), preprocessing the data;
The data are preprocessed, including data resampling, window width and window level adjustment, background removal, and data standardization.
1. Data resampling:
Because variability in spatial resolution is harmful to model learning, the spatial resolution of all data is first resampled to the median spatial resolution of the overall data.
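The median resampling target can be computed as in this sketch, assuming each case's voxel spacing is given as a (z, y, x) tuple; this is a generic illustration of a per-axis median, not the patent's code:

```python
def median_spacing(spacings):
    """Per-axis median of voxel spacings across the data set; used as the
    common resampling target so all volumes share one spatial resolution."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0
    # zip(*spacings) groups the z, y, x spacings across cases
    return tuple(median(list(axis)) for axis in zip(*spacings))
```

Using the median (rather than the mean) keeps the target robust to the occasional scan with an unusual slice thickness.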
2. Window width and window level adjustment:
CT images typically have thousands of gray levels, but not all of them are effectively expressed. The window width and window level are adjusted according to the recommended gray-level range embedded in the CT image, which is adopted in this study.
3. Background removal:
CT images usually contain large non-foreground regions, which greatly increase storage and computational overhead; removing the background during preprocessing therefore benefits subsequent computation with no adverse effect. After z-score standardization of the windowed image, the region with values greater than 0 is taken as the foreground, and the bounding-box region containing the foreground is selected as the background-removed data.
4. Data standardization:
z-score standardization is performed on the background-removed data. Most of the background has been removed at this point, so after standardization the effective region is better characterized by a standard distribution.
Step (3), constructing a residual U-Net convolutional network model;
As shown in FIG. 1, a deep-learning model based on the U-Net structure is adopted, comprising residual convolution units, a downsampling encoder, an upsampling decoder, and long skip connections. To effectively improve the robustness of the model, residual convolution is used as the basic convolution unit, and Instance Normalization is used to control the value distribution of the feature maps so that the model can learn effectively, as shown in FIG. 2.
The downsampling encoder comprises repeated convolution and pooling layers; the depth of the model is one of the factors affecting its accuracy, and the convolution layers of the downsampling encoder adopt residual convolution, which suits the model better.
The upsampling decoder restores the image size through repeated deconvolution and convolution layers.
Feature maps are concatenated between corresponding levels through long skip connections, reducing the loss of features.
Step (4), training the constructed residual U-Net convolutional network model on the preprocessed data;
The preprocessed data are divided into a training set and a test set at a ratio of 8:2, and the constructed residual U-Net convolutional network model is trained on the training set.
The Adam gradient-descent method is used. Adam computes updates using first-moment and second-moment estimates of the gradient, adapts well to the gradient direction of the data, keeps parameter changes stable through bias correction, and converges quickly in the early stages, giving it great advantages over traditional methods such as SGD.
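The Adam update described above, with its first- and second-moment estimates and bias correction, can be sketched for a single scalar parameter; this is the textbook formulation with the common default hyperparameters, not values stated in the patent:

```python
def adam_step(theta, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.
    `state` holds (m, v, t): first- and second-moment estimates and step count."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad * grad   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction keeps early
    v_hat = v / (1 - beta2 ** t)                # steps well-scaled
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, (m, v, t)
```

On the very first step the bias-corrected moments cancel, so the update magnitude is close to the learning rate regardless of the gradient scale; this is the stable early behavior the description refers to.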
Instance Normalization is used to control the value distribution of the feature maps so that the model can learn effectively, and longitudinal deep supervision is adopted to enhance the feature expression at different levels of the U-Net model.
During training, the overall loss of the model must be minimized so that the model converges; the Dice loss function is used to compute the loss, with the Dice coefficient measuring the degree of overlap between two samples.
Model training uses patch cropping and data augmentation. The invention uses three-dimensional data to fully acquire information from three-dimensional space and enhance the perception capability of the model, but this greatly increases the computational load and is unfavorable for practical operation; moreover, after preprocessing, the data differ in shape because of factors such as head size, so a suitable patch must be selected. Based on statistics of the sizes of the ventricular regions in the data, a patch of 16 × 256 × 256 pixels (where 16 is the number of slices) is finally selected as the model input; it contains most ventricular regions well.
Mirroring, brightness change, and Gaussian noise are applied in sequence to the input training data as data augmentation. The essential principle of a deep-learning model is to aggregate data characteristics statistically and use them as the data representation, so richer data characteristics benefit the model, and data augmentation is an effective way to enrich them. Mirroring helps the model learn the symmetric structure of the ventricles. On the other hand, different cases may show some variation in brightness and noise; to make the model adapt to this variation, applying brightness changes and Gaussian noise to the data is an effective method.
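The three augmentations can be sketched as follows for a single 2-D slice; the application probability and parameter ranges are illustrative assumptions, since the patent does not specify them:

```python
import random

def mirror(slice2d):
    """Left-right mirror; exploits the roughly symmetric anatomy of the ventricles."""
    return [row[::-1] for row in slice2d]

def brightness(slice2d, factor):
    """Multiplicative brightness change, simulating exposure variation between cases."""
    return [[v * factor for v in row] for row in slice2d]

def gaussian_noise(slice2d, sigma, rng):
    """Additive Gaussian noise, simulating acquisition noise."""
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in slice2d]

def augment(slice2d, rng, p=0.5):
    """Apply mirror, brightness change, and Gaussian noise in sequence (the
    order named in the description), each with an assumed probability p."""
    out = slice2d
    if rng.random() < p:
        out = mirror(out)
    if rng.random() < p:
        out = brightness(out, rng.uniform(0.9, 1.1))
    if rng.random() < p:
        out = gaussian_noise(out, 0.01, rng)
    return out
```

Each transform preserves the array shape, so augmented patches feed the network unchanged.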
During training, patch selection and data augmentation are performed online, which varies the data fully and reduces the storage requirements of the computer.
Step (5), inputting test data into the trained residual U-Net convolutional network model and testing the model;
The test-set data are input into the trained residual U-Net convolutional network model, the corresponding segmented images are output, and they are compared with the label images segmented by professional physicians to obtain the corresponding Dice index, which expresses the quality of the result.
With this method, the average Dice score obtained on the test set is 0.9321 and the average Hausdorff distance is 14.77, showing high segmentation accuracy. This suggests that the deep-learning-based approach can be applied effectively to the ventricle segmentation task.
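The Hausdorff distance reported above measures the worst-case boundary disagreement between two segmentations. A minimal generic sketch over point sets (e.g. the surface voxels of the predicted and ground-truth masks) follows; real evaluations typically use an optimized library routine rather than this brute-force form:

```python
def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two non-empty point sets: the
    largest distance from a point in one set to its nearest point in the other."""
    def euclid(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    def directed(src, dst):
        return max(min(euclid(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```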
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. For those skilled in the art to which the invention pertains, several alternatives or modifications may be made to the described embodiments without departing from the inventive concept, and such alternatives or modifications should be considered as falling within the scope of the present invention.
Parts of the present invention that are not described in detail are known to those skilled in the art.

Claims (5)

1. A three-dimensional hydrocephalus CT image segmentation method based on deep learning is characterized by comprising the following steps:
step (1), acquiring a data set;
step (2), preprocessing data;
step (3), constructing a residual U-net convolution network model;
step (4), training the constructed residual U-net convolution network model through the preprocessed data;
and (5) inputting test data into the trained residual U-net convolution network model, and testing the model.
2. The three-dimensional hydrocephalus CT image segmentation method based on deep learning as claimed in claim 1, characterized in that, the specific method of step (1) is as follows;
acquiring medical image data of a head CT of a hydrocephalus patient;
a case searching system is used for finding the records of patients who have been treated with hydrocephalus in recent years, screening information of 197 patients in total and acquiring effective case data; the data of each patient comprises medical image data used before and after an operation and during clinical diagnosis, so that the condition of research data is ensured to be in accordance with the actual clinical environment, and the subsequent research is carried out based on the clinical actual data, so that the research result is in accordance with the actual use case, the process is efficient, and unnecessary burden is reduced; a medical image of a head CT of a hydrocephalus patient is manually segmented by a professional doctor to be used as a label to construct a data set.
3. The deep learning-based three-dimensional hydrocephalus CT image segmentation method according to claim 2, characterized in that the specific method of step (2) is as follows:
preprocessing data, including data resampling, window width and window level adjustment, background area removal and data standardization;
(1) Data resampling:
resampling the spatial resolution of all data to be the median of the overall data spatial resolution;
(2) Adjusting the window width and the window position:
CT images typically have thousands of gray levels, but not all of them are effectively used; the window width and window level are adjusted according to the suggested range of gray levels built into the CT image;
(3) Removing background areas:
after z-score standardization is carried out on the windowed image, the region greater than 0 is taken as the foreground, and the bounding box enclosing the foreground region is selected as the background-removed data;
(4) Data normalization:
the background-removed data are z-score normalized.
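The preprocessing chain of this claim (resampling, windowing, foreground cropping, z-score normalization) can be sketched in NumPy as follows. This is a minimal illustration, not the patent's implementation: the brain-window values (level 40 HU, width 80 HU) and the nearest-neighbour interpolation are assumptions the patent does not fix.

```python
import numpy as np

def resample_to_spacing(volume, spacing, target_spacing):
    """Resample a 3-D CT volume so its voxel spacing matches target_spacing
    (e.g. the median spacing of the whole data set). Nearest-neighbour
    indexing is used here for brevity; real pipelines use spline interpolation."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    new_shape = [int(round(d * f)) for d, f in zip(volume.shape, factors)]
    grids = [np.minimum(np.floor(np.arange(n) / f).astype(int), d - 1)
             for n, f, d in zip(new_shape, factors, volume.shape)]
    return volume[tuple(np.meshgrid(*grids, indexing="ij"))]

def apply_window(volume, level=40.0, width=80.0):
    """Clip Hounsfield units to a window; a typical brain window
    (level 40 HU, width 80 HU) is assumed for illustration."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip(volume, lo, hi)

def crop_foreground(volume):
    """Z-score the windowed volume, treat voxels > 0 as foreground and
    crop to the bounding box of that region, as described in sub-step (3)."""
    z = (volume - volume.mean()) / (volume.std() + 1e-8)
    coords = np.argwhere(z > 0)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    return volume[tuple(slice(a, b) for a, b in zip(lo, hi))]

def z_score(volume):
    """Final z-score normalization of the background-removed data."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)
```

Applied in order, these four functions map a raw CT volume to the normalized, background-free input used for training.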
4. The deep learning-based three-dimensional hydrocephalus CT image segmentation method according to claim 3, characterized in that the specific method of step (3) is as follows:
a deep learning model based on the U-Net structure is adopted, comprising residual convolution units, a down-sampling encoder, an up-sampling decoder and long skip links; to effectively improve the robustness of the model, residual convolution is used as the basic convolution unit, and Instance Normalization is used to control the numerical distribution of the feature maps so that the model learns effectively;
the down-sampling encoder comprises repeated convolution layers and pooling layers; the depth of the model is one of the factors influencing its accuracy, and the convolution layers of the down-sampling encoder adopt residual convolution, which better suits a deep model;
an upsampling decoder restores the size of the picture through repeated deconvolution and pooling layers;
feature maps are spliced between corresponding encoder and decoder layers through long skip links, which reduces feature loss.
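The numerics of the three building blocks named in this claim — Instance Normalization, a residual convolution unit, and a long skip link — can be sketched as follows. This is an illustrative NumPy sketch, not the patent's 3-D network: the `conv` argument is a placeholder for an actual (shape-preserving) convolution, and the ReLU activation is an assumption.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance Normalization: normalize each (sample, channel) feature map
    over its spatial axes, controlling the numerical distribution so the
    model can learn effectively."""
    axes = tuple(range(2, x.ndim))          # spatial axes of an N,C,D,H,W tensor
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_unit(x, conv):
    """Residual convolution unit: y = ReLU(IN(conv(x)) + x). The identity
    path lets gradients bypass the convolution, improving robustness of
    deeper encoders. `conv` must preserve the tensor shape."""
    return np.maximum(instance_norm(conv(x)) + x, 0.0)

def long_skip(encoder_feat, decoder_feat):
    """Long skip link: concatenate encoder and decoder feature maps along
    the channel axis (axis 1) so the decoder recovers lost detail."""
    return np.concatenate([encoder_feat, decoder_feat], axis=1)
```

In the full model, the down-sampling encoder stacks such residual units with pooling, the up-sampling decoder mirrors them with deconvolution, and `long_skip` joins each encoder level to its decoder counterpart.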
5. The deep learning-based three-dimensional hydrocephalus CT image segmentation method according to claim 4, characterized in that the specific method of step (4) is as follows:
dividing the preprocessed data into a training set and a test set at a ratio of 8:2; training the constructed residual U-net convolution network model on the training set;
model training adopts patch-based cropping and data enhancement; based on statistics of the size of the ventricle region in the data, a patch of 16 × 256 pixels is finally selected as the model input; the input training data are augmented by mirroring, brightness change and Gaussian noise applied in sequence;
the model is trained with the Adam gradient-descent method; Instance Normalization is used to control the numerical distribution of the feature maps so that the model learns effectively; longitudinal deep supervision is adopted to enhance the feature expression of different levels of the U-Net model;
during model training, the overall loss function of the model is minimized so that the model converges; the Dice loss function is adopted to calculate the model loss, with the Dice coefficient measuring the degree of overlap between two samples.
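The Dice loss named in this claim, and the mirror/brightness/noise augmentation sequence, can be sketched as follows. The smoothing term `eps` and the augmentation magnitudes (±10 % brightness, σ = 0.05 noise) are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Dice coefficient 2|A∩B| / (|A| + |B|): the overlap measure between
    a predicted segmentation and its label. `eps` avoids division by zero."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(pred, target):
    """Dice loss = 1 - Dice; minimizing it drives the prediction toward
    full overlap with the label, so the model tends to converge."""
    return 1.0 - dice_coefficient(pred, target)

def augment(patch, rng):
    """Apply mirroring, brightness change and Gaussian noise in sequence
    to a training patch (parameter magnitudes are illustrative)."""
    if rng.random() < 0.5:
        patch = patch[..., ::-1]                        # mirror along last axis
    patch = patch * rng.uniform(0.9, 1.1)               # brightness scaling
    return patch + rng.normal(0.0, 0.05, patch.shape)   # additive Gaussian noise
```

With these pieces, each training step would crop a patch, call `augment`, run the network forward, and back-propagate `dice_loss` via an Adam optimizer.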
CN202211215390.5A 2022-09-30 2022-09-30 Three-dimensional hydrocephalus CT image segmentation method based on deep learning Withdrawn CN115409857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211215390.5A CN115409857A (en) 2022-09-30 2022-09-30 Three-dimensional hydrocephalus CT image segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211215390.5A CN115409857A (en) 2022-09-30 2022-09-30 Three-dimensional hydrocephalus CT image segmentation method based on deep learning

Publications (1)

Publication Number Publication Date
CN115409857A true CN115409857A (en) 2022-11-29

Family

ID=84167816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211215390.5A Withdrawn CN115409857A (en) 2022-09-30 2022-09-30 Three-dimensional hydrocephalus CT image segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN115409857A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058172A (en) * 2023-08-24 2023-11-14 吉林大学 CT image multi-region segmentation method and device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN110992382B (en) Fundus image optic cup optic disc segmentation method and system for assisting glaucoma screening
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
US20220230302A1 (en) Three-dimensional automatic location system for epileptogenic focus based on deep learning
CN111161273B (en) Medical ultrasonic image segmentation method based on deep learning
CN110930416A (en) MRI image prostate segmentation method based on U-shaped network
CN109767440A (en) A kind of imaged image data extending method towards deep learning model training and study
CN111047594A (en) Tumor MRI weak supervised learning analysis modeling method and model thereof
US11222425B2 (en) Organs at risk auto-contouring system and methods
CN103942780B (en) Based on the thalamus and its minor structure dividing method that improve fuzzy connectedness algorithm
CN113298830B (en) Acute intracranial ICH region image segmentation method based on self-supervision
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
CN113160120A (en) Liver blood vessel segmentation method and system based on multi-mode fusion and deep learning
CN115082493A (en) 3D (three-dimensional) atrial image segmentation method and system based on shape-guided dual consistency
CN115409857A (en) Three-dimensional hydrocephalus CT image segmentation method based on deep learning
CN114387282A (en) Accurate automatic segmentation method and system for medical image organs
CN111292285B (en) Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine
CN114862799B (en) Full-automatic brain volume segmentation method for FLAIR-MRI sequence
WO2007095284A2 (en) Systems and methods for automatic symmetry identification and for quantification of asymmetry for analytic, diagnostic and therapeutic purposes
CN112734769B (en) Medical image segmentation and quantitative analysis method based on interactive information guided deep learning method, computer device and storage medium
Yan et al. Segmentation of pulmonary parenchyma from pulmonary CT based on ResU-Net++ model
CN114581459A (en) Improved 3D U-Net model-based segmentation method for image region of interest of preschool child lung
CN114400086A (en) Articular disc forward movement auxiliary diagnosis system and method based on deep learning
CN112967269A (en) Pulmonary nodule identification method based on CT image
CN117036372B (en) Robust laser speckle image blood vessel segmentation system and segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221129
