Liver lesion image segmentation method based on cascade hybrid network
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a liver lesion image segmentation method based on a cascade hybrid network.
Background
Liver cancer is one of the leading causes of cancer death worldwide. For liver cancer screening, Computed Tomography (CT) is the most common imaging tool, and morphological and textural abnormalities of the liver, together with visible lesions, are important markers of disease progression in primary and secondary liver tumor diseases. Clinically, manual and semi-manual segmentation techniques exist, but these methods are subjective, heavily operator-dependent and very time-consuming. Computer-assisted methods have been developed in the past to improve radiologists' productivity; however, automated segmentation of the liver and its lesions remains a very challenging problem due to the low contrast between the liver and its lesions, the different contrast-agent phases, abnormalities in the tissue (such as metastatic resection), and the varying size and number of lesions.
In the prior art, automatic liver tumor segmentation methods based on a 2D Convolutional Neural Network (CNN) or a 3D convolutional neural network are usually adopted. The 2D convolutional neural network performs poorly on small lesions and produces false positives in the liver lesion segmentation result, while the 3D convolutional neural network preserves accuracy but suffers from long computation time and high memory cost.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a liver lesion image segmentation method based on a cascade hybrid network, which greatly reduces the computation time and the memory cost without sacrificing precision. In order to achieve this purpose, the technical scheme of the present invention is as follows:
a liver lesion image segmentation method based on a cascade hybrid network, the method comprising:
s1, acquiring an abdomen CT image, and processing the abdomen CT image through a 2D convolutional neural network to obtain a liver region image, wherein the liver region image comprises a training set and a testing set;
s2, constructing a hybrid network image segmentation model, wherein the hybrid network image segmentation model comprises a 2D convolutional neural network for segmenting large lesions in a liver CT image and a 3D convolutional neural network for segmenting small lesions in the liver; a large lesion is a liver lesion larger than a preset threshold value in the liver CT image, and a small lesion is a liver lesion smaller than the preset threshold value in the liver CT image;
s3, preprocessing a training set in the acquired liver region image to obtain a 2D network training set of the 2D convolutional neural network and a 3D network training set of the 3D convolutional neural network; the preprocessing comprises histogram equalization processing on the liver region image;
s4, inputting the preprocessed 2D network training set into a 2D convolutional neural network for training, and inputting the preprocessed 3D network training set into a 3D convolutional neural network for training to obtain a trained 2D convolutional neural network and a trained 3D convolutional neural network;
and S5, inputting the test set in the liver region image into the trained hybrid network segmentation model to complete liver lesion image segmentation.
Further, the 2D convolutional neural network and the 3D convolutional neural network adopt a Unet neural network structure.
Further, the encoder in the 2D convolutional neural network is composed of two convolutional layers, and the filter size of each convolutional layer is 3 × 3.
Further, the encoder in the 3D convolutional neural network is composed of 3D convolutional blocks.
Further, the resolution of each slice of the abdominal CT image is 512 × 512, and the preset threshold is 32 × 32.
Further, preprocessing the training set in the acquired liver CT image further comprises positioning the lesion center by means of connected-component labeling, and setting the 3D network training set according to the lesion center.
The invention has the beneficial effects that:
(1) compared with the method of segmenting liver CT with a 2D convolutional neural network alone, the method has a better segmentation effect on small lesions in the liver;
(2) compared with the method of segmenting liver CT with a 3D convolutional neural network alone, the method is more efficient in terms of computation time and memory cost.
Drawings
FIG. 1 is a schematic flow chart of a CT image segmentation method for liver lesion according to the present invention;
FIG. 2 is a block diagram of a CT image segmentation method for liver lesion according to the present invention;
FIG. 3 is a schematic diagram of a 2D convolutional neural network of the present invention;
FIG. 4 is a schematic diagram of a 3D convolutional neural network of the present invention;
FIG. 5 is an original abdominal CT image to be segmented according to the present invention;
FIG. 6 is a liver CT image obtained by processing an original abdominal CT image according to the present invention;
FIG. 7 is a processed image of a large lesion in a hybrid network image segmentation model of the present invention;
FIG. 8 is a processed image of a small lesion in the hybrid network image segmentation model of the present invention;
FIG. 9 is an image processed by the hybrid network image segmentation model of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the drawings and an embodiment:
the embodiment provides a liver lesion image segmentation method based on a cascade hybrid network, as shown in fig. 1, the process includes the following steps:
step 1, acquiring an abdominal CT image, and processing the abdominal CT image through a 2D convolutional neural network to obtain a liver region image. In this embodiment, the public Liver Tumor Segmentation dataset (LiTS) is used as the training set and test set; it comprises 19163 2D slices, of which 11503 samples contain small lesions, the resolution of each slice is 512 × 512, and the size of a small-lesion sample is set to 32 × 32 × 32;
and 2, constructing a hybrid network image segmentation model as shown in fig. 2, wherein the hybrid network image segmentation model comprises a 2D convolutional neural network for segmenting large lesions in the CT image of the liver and a 3D convolutional neural network for segmenting small lesions in the liver.
As shown in fig. 3, the encoder of the segmentation and reconstruction part of the 2D convolutional neural network model constructed in this embodiment is composed of blocks of two convolutional layers, each block followed by a batch normalization layer. All convolutional layers use a filter size of 3 × 3, the numbers of filters in the blocks are set to 16, 32, 64 and 128 in sequence, and the transition block has 256 filters. In the encoder, a pooling layer is provided after each block, and the decoder branch mirrors the encoder branch with the pooling layers replaced by 2D transposed convolutional layers.
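The feature-map sizes implied by this encoder description can be checked with a short sketch. This is illustrative only; `unet2d_encoder_shapes` is a hypothetical helper, not part of the invention, and it assumes 'same'-padded convolutions and 2 × 2 pooling:

```python
def unet2d_encoder_shapes(h=512, w=512, filters=(16, 32, 64, 128), transition=256):
    """Feature-map shape after each 2D encoder block: two 3x3 'same'
    convolutions keep H and W, then 2x2 pooling halves them."""
    shapes = []
    for f in filters:
        shapes.append((h, w, f))       # after the block's convolutions
        h, w = h // 2, w // 2          # after the pooling layer
    shapes.append((h, w, transition))  # transition block, no further pooling
    return shapes
```

Under these assumptions, a 512 × 512 slice reaches the transition block at a resolution of 32 × 32 with 256 feature maps.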
As shown in fig. 4, the encoder of the segmentation and reconstruction part of the constructed 3D convolutional neural network model is composed of three 3D convolutional blocks, with 32, 64 and 128 feature maps respectively, each followed by a 3D pooling layer with a pool size of (2, 2, 2). The transition block has 256 feature maps, and the decoder branch mirrors the encoder branch with the pooling layers replaced by 3D transposed convolutional layers.
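The same kind of shape check can be made for the 3D branch. Again a hypothetical sketch, assuming a 32 × 32 × 32 input cube and (2, 2, 2) pooling after each of the three blocks:

```python
def unet3d_encoder_shapes(d=32, h=32, w=32, filters=(32, 64, 128), transition=256):
    """Feature-map shape along the 3D encoder: one conv block per filter
    count, each followed by (2, 2, 2) pooling that halves every dimension."""
    shapes = []
    for f in filters:
        shapes.append((d, h, w, f))
        d, h, w = d // 2, h // 2, w // 2
    shapes.append((d, h, w, transition))
    return shapes
```

Under these assumptions the transition block sees a 4 × 4 × 4 volume with 256 feature maps.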
Step 3, preprocessing a training set in the obtained liver CT image to obtain a 2D network training set of a 2D convolutional neural network and a 3D network training set of a 3D convolutional neural network;
In the preprocessing stage of this embodiment, a histogram-based thresholding method is used to process the liver CT scan, and a histogram equalization algorithm is used to generate an enhanced image.
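A minimal numpy sketch of histogram equalization on an 8-bit slice is shown below; the function name and the choice to operate on uint8 intensities are assumptions for illustration, not details taken from the invention:

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale slice: map intensities
    through the normalized cumulative histogram so the output CDF is ~uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]           # CDF value at the first occupied level
    if cdf[-1] == cdf_min:              # constant image: nothing to stretch
        return img.copy()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]
```

The lookup table stretches the occupied intensity range to the full [0, 255] interval, which increases the low liver/lesion contrast mentioned in the background.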
The method defines lesions larger than 32 × 32 in resolution as large liver lesions; the 2D network training set extracts only the lesions larger than 32 × 32 from each CT slice, and removes all lesions whose horizontal and vertical sizes are both smaller than or equal to 32.
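The size-based split between the two training sets reduces to a one-line routing rule. The helper below is hypothetical, written only to make the rule explicit: a lesion goes to the 3D set exactly when both in-plane dimensions fit within the 32 × 32 threshold, and to the 2D set otherwise:

```python
def route_lesion(width, height, threshold=32):
    """Cascade routing rule: lesions fitting inside threshold x threshold
    in-plane belong to the 3D (small-lesion) set, all others to the 2D set."""
    return "3D" if width <= threshold and height <= threshold else "2D"
```

For example, a 40 × 40 lesion is routed to the 2D network, while a 20 × 30 lesion is routed to the 3D network.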
For the 3D network training set, the method uses connected-component labeling to locate the lesion center and estimates the lesion size on the 2D slices, keeping the lesions whose horizontal and vertical dimensions are smaller than or equal to 32. Since a small lesion usually occupies only a few CT slices, a 32 × 32 × 32 cube is selected to cover the lesion: around the lesion center on a slice, the 15 CT slices above that slice and the 16 CT slices below it are selected to create the 3D network training set.
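The cube-cropping step can be sketched as follows. This is an illustrative implementation under stated assumptions (the function name is invented; the volume is indexed (z, y, x), and regions falling outside the volume are zero-padded, a detail the text does not specify):

```python
import numpy as np

def extract_lesion_cube(volume, center, size=32):
    """Crop a size**3 cube centred on (z, y, x): the centre slice plus
    (size-1)//2 slices before it and size//2 after it along each axis
    (15 above and 16 below for size=32), zero-padded at volume borders."""
    cube = np.zeros((size,) * 3, dtype=volume.dtype)
    src, dst = [], []
    for c, dim in zip(center, volume.shape):
        lo = c - (size - 1) // 2            # 15 voxels before the centre
        s0, s1 = max(lo, 0), min(lo + size, dim)
        src.append(slice(s0, s1))           # region read from the volume
        dst.append(slice(s0 - lo, s1 - lo)) # where it lands in the cube
    cube[tuple(dst)] = volume[tuple(src)]
    return cube
```

Applied along the slice axis, the crop covers indices center−15 through center+16 inclusive, i.e. exactly the 15 slices above and 16 below described in the text.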
Step 4, inputting the 2D network training set obtained by preprocessing into a 2D convolutional neural network for training, and inputting the 3D network training set obtained by preprocessing into a 3D convolutional neural network for training to obtain a trained 2D convolutional neural network and a trained 3D convolutional neural network;
In this embodiment, a 3D sliding cube with a stride of 8 voxels is used to predict within the liver volume with the trained 3D network; the average of all predictions over each voxel is taken, and values greater than 0.5 are set to 1 as the final prediction. The network is trained with the Adam optimizer at a learning rate of 1e-5 for a maximum of 300 epochs; in addition, L2 regularization with a weight of 1e-5 and a dropout rate of 0.5 after all pooling and upsampling layers are used to mitigate overfitting.
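The sliding-cube inference with overlap averaging and 0.5 thresholding can be sketched as below. The helper is hypothetical; `predict_fn` stands in for the trained 3D network and is assumed to return a per-voxel probability cube of the same shape as its input:

```python
import numpy as np

def sliding_cube_predict(volume, predict_fn, cube=32, stride=8, thr=0.5):
    """Run predict_fn over overlapping cubes of the liver volume with the
    given stride, average the per-voxel probabilities, and set voxels whose
    average exceeds thr to 1."""
    def starts(dim):
        s = list(range(0, dim - cube + 1, stride))
        if s[-1] != dim - cube:          # also cover the trailing border
            s.append(dim - cube)
        return s

    acc = np.zeros(volume.shape, dtype=np.float64)  # summed probabilities
    cnt = np.zeros(volume.shape, dtype=np.float64)  # overlap counts
    for z in starts(volume.shape[0]):
        for y in starts(volume.shape[1]):
            for x in starts(volume.shape[2]):
                sl = (slice(z, z + cube), slice(y, y + cube), slice(x, x + cube))
                acc[sl] += predict_fn(volume[sl])
                cnt[sl] += 1
    return (acc / cnt > thr).astype(np.uint8)
```

With a stride of 8 and a cube of 32, each interior voxel is covered by up to 4 × 4 × 4 overlapping cubes, so the averaging smooths the individual predictions before thresholding.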
And 5, inputting the test set in the liver region image into the trained hybrid network segmentation model to complete liver lesion image segmentation.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.