CN113205454A - Segmentation model establishing and segmenting method and device based on multi-scale feature extraction - Google Patents

Segmentation model establishing and segmenting method and device based on multi-scale feature extraction

Info

Publication number
CN113205454A
CN113205454A (application CN202110372219.4A)
Authority
CN
China
Prior art keywords: abdomen, image, model, segmentation, encoder
Prior art date
Legal status
Pending
Application number
CN202110372219.4A
Other languages
Chinese (zh)
Inventor
谢飞 (Xie Fei)
郜刚 (Gao Gang)
Current Assignee
Shaanxi Great Wisdom Medical Care Technology Co ltd
Original Assignee
Shaanxi Great Wisdom Medical Care Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shaanxi Great Wisdom Medical Care Technology Co ltd
Priority to CN202110372219.4A
Publication of CN113205454A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4023 Decimation- or insertion-based scaling, e.g. pixel or line decimation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Abstract

The invention belongs to the technical field of semantic segmentation and discloses a segmentation model establishing and segmenting method and device based on multi-scale feature extraction. The device comprises a data acquisition and preprocessing module, a model building module, a model training module and a segmentation module. The data acquisition and preprocessing module acquires an original abdomen 3D CT image set, preprocesses it to obtain an abdomen 3D CT image set, and labels it to obtain a label set. The model building module builds a 3D U-Net model comprising an encoder and a decoder, each with several levels from low to high, in which skip connections exist between each encoder level and the decoders at the same and all higher levels, and between each decoder level and the decoders at all higher levels. The model training module trains the model and takes the trained model as the segmentation model. The segmentation module obtains the segmentation result of an original abdomen 3D CT image to be segmented. By introducing skip connections between encoders and decoders at different levels, the invention increases the information transmission paths in the network, overcomes the difficulty the original U-Net has in extracting features of different scales simultaneously, and improves segmentation accuracy.

Description

Segmentation model establishing and segmenting method and device based on multi-scale feature extraction
Technical Field
The invention belongs to the technical field of semantic segmentation, and particularly relates to a segmentation model establishing and segmenting method and device based on multi-scale feature extraction.
Background
In recent years, as image processing and deep learning technologies have matured, computer-aided diagnosis based on artificial intelligence has come to help pathologists make more objective and effective diagnoses. A deep convolutional neural network can implicitly and automatically learn medical image features directly from data samples; the learning process is essentially the solution of an optimization problem. Through learning, the model selects the correct features from the training data, allowing it to make correct decisions on new test data.
Although deep learning is now widely applied to medical imaging problems, research on digestive tract tumors remains rare. Compared with organ segmentation, which has been studied extensively in medical imaging, tumor segmentation has always been difficult and has gradually become a popular research topic. The difficulty of tumor segmentation is threefold. First, the classes are imbalanced: most voxels in a 3D CT volume are background, and only a small proportion is tumor. Second, the contrast is low: even someone outside medical image research can roughly identify the organs in each scan after viewing a few cases, but finding the tumor in each case is much harder. Third, tumors vary in scale far more than organs do, so the multi-scale problem must be solved for better segmentation.
Existing methods usually adopt U-Net for medical image segmentation, but on a small intestinal stromal tumor dataset, directly applying U-Net gives unsatisfactory results for two reasons: first, the network has difficulty distinguishing tumors of small volume; second, considering the clinical imaging characteristics, a tumor is continuous in three dimensions, yet the segmentation sometimes identifies a tumor on only one CT slice while the adjacent slices are missed.
Disclosure of Invention
The invention aims to provide a segmentation model establishing and segmenting method and device based on multi-scale feature extraction, to solve the prior-art problem of poor segmentation of small-volume tumors.
In order to realize this task, the invention adopts the technical scheme set out in the claims and detailed in the embodiments below. Compared with the prior art, the invention has the following technical characteristics:
(1) In digestive tract image data, tumor lesion regions are multi-scale and medical images are anisotropic; the traditional U-Net does not fully fuse features of different scales, so tumor localization and boundary segmentation are inaccurate. The improved 3D U-Net of the invention introduces skip connections between encoders and decoders at different levels, increasing the information transmission paths in the network and thus fully fusing the features of different scales extracted by the network.
(2) The invention introduces the segmentation result of the digestive tract organs, thereby enhancing the network's ability to extract features of the digestive tract region.
(3) To deal with the anisotropy of medical images, in the data preprocessing stage the invention uses third-order spline interpolation to unify the voxel spacing of the data in all directions.
Drawings
FIG. 1 is a schematic diagram of the improved 3D U-Net structure;
FIG. 2 is a schematic diagram of the construction of the input of D2 in the embodiment;
FIG. 3 compares the segmentation effect on small-scale tumors of the models generated by different methods.
Detailed Description
The embodiment discloses a segmentation model establishing method based on multi-scale feature extraction, which comprises the following steps:
step 1: acquiring an original abdomen 3D CT image set, preprocessing the original abdomen 3D CT image set to obtain an abdomen 3D CT image set, and labeling digestive tract organs of each abdomen 3D CT image in the abdomen 3D CT image set to obtain a label set;
Step 2: establish a 3D U-Net model comprising an encoder and a decoder, each with several levels ordered from low to high; with reference to FIG. 1, a high level is one with a larger subscript and a low level one with a smaller subscript. Skip connections exist between each encoder level and the decoders at the same and all higher levels, and between each decoder level and the decoders at all higher levels;
the input of the encoder at the lowest layer is an abdomen 3D CT image, the input of each other layer of encoders is the output result of the encoder at the low level, and the input of each layer of decoder is the output result of the encoder at the same level and the low level and the output result of the decoder at the high level;
The encoder reduces the size of the input abdomen 3D CT image to single digits; for example, an input of size (48,160,224) becomes (6,5,7) after the encoder;
The encoder performs down-sampling and the decoder up-sampling; each encoder level first down-samples in the x and y directions until the x and y resolutions match the z resolution, and only then begins down-sampling in the z direction;
In the x and y directions the voxel spacing is less than 1 mm, whereas in the z direction it is typically 5 mm, i.e. the dataset is strongly anisotropic. Therefore, the encoder does not down-sample the three dimensions simultaneously: the high-resolution dimensions (x, y) are down-sampled first, until their resolution is close to that of dimension z, and only then does down-sampling in the z direction begin, which alleviates the anisotropy of the medical images (a minimal sketch of such a schedule is given after the step list below);
Step 3: train the 3D U-Net model with the abdomen 3D CT image set and the label set obtained in step 1, and take the trained model as the segmentation model based on multi-scale feature extraction.
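The down-sampling schedule of step 2 can be derived from the voxel spacing alone. The following Python sketch is illustrative only: the function name and the factor-of-two threshold rule are assumptions, since the patent states only the qualitative criterion of pooling x and y until their resolution approaches that of z.

```python
def pooling_schedule(spacing_zxy, num_levels):
    """Return one (stride_z, stride_x, stride_y) tuple per encoder level."""
    sz, sx, sy = spacing_zxy
    strides = []
    for _ in range(num_levels):
        if sx < sz / 2 and sy < sz / 2:   # in-plane still much finer than z
            stride = (1, 2, 2)            # down-sample x and y only
        else:
            stride = (2, 2, 2)            # resolutions comparable: pool all axes
        sz, sx, sy = sz * stride[0], sx * stride[1], sy * stride[2]
        strides.append(stride)
    return strides

# Example: spacing (z, x, y) = (5.0, 0.8, 0.8) mm pools only in-plane at first.
print(pooling_schedule((5.0, 0.8, 0.8), 5))
```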
Specifically, the preprocessing in step 1 comprises data cropping, resampling by third-order spline interpolation, and normalization.
Data cropping is adopted because the small intestine occupies only a small proportion of the whole CT sequence, so the original CT data must be cropped to lower the network's learning burden and reduce the input of redundant information. Given an existing small-intestine segmentation result, the largest connected component of the small-intestine region is found and the original CT volume is cropped accordingly. Statistics show that cropping reduces the data input to the network by about 40%, and all of the removed data are background, which alleviates the class-imbalance problem to some extent. Meanwhile, traces of the scanner bed inevitably appear in the original CT image, and cropping removes them, which also reduces noise. A sketch of this cropping step follows.
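A minimal sketch of the cropping step, assuming an existing binary small-intestine mask; the function name and the `margin` parameter are illustrative additions, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def crop_to_largest_component(ct_volume, intestine_mask, margin=8):
    labeled, n = ndimage.label(intestine_mask)
    if n == 0:
        return ct_volume                      # no intestine found: leave as-is
    # Keep only the largest connected component of the small-intestine mask.
    sizes = ndimage.sum(intestine_mask, labeled, range(1, n + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    # Crop the CT to the component's bounding box plus a safety margin.
    coords = np.argwhere(largest)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, ct_volume.shape)
    return ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```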
Resampling by third-order spline interpolation is used because, in medical imaging, the voxel spacing differs from scan to scan depending on the scanner; this phenomenon is referred to as voxel anisotropy, and a CNN by itself has no notion of voxel spacing. To let the network learn spatial semantics correctly, the median voxel spacing over all data is taken as the reference and every volume is resampled by third-order spline interpolation, ensuring that all images input to the network have equal voxel spacing. A sketch follows.
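A sketch of the resampling step, assuming `scipy.ndimage.zoom` as the interpolator; `order=3` selects the third-order spline named above, and the names are illustrative.

```python
import numpy as np
from scipy import ndimage

def resample_to_spacing(volume, spacing, target_spacing):
    # Zoom factor per axis: old spacing over new spacing.
    zoom_factors = np.asarray(spacing) / np.asarray(target_spacing)
    return ndimage.zoom(volume, zoom_factors, order=3)   # third-order spline

# target_spacing would be the per-axis median over all training cases, e.g.
# target_spacing = np.median(np.stack(all_spacings), axis=0)
```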
Normalization is used because the pixel values in CT are directly linked to structures in the body and reflect the physical properties of tissue. Based on the mean and standard deviation of the HU values of the foreground part of the dataset, the dataset is normalized with the z-score method, which speeds up the convergence of gradient descent toward the optimal solution. A sketch follows.
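A sketch of the z-score step; `fg_mean` and `fg_std` stand for the foreground HU statistics computed once over the training set, and the epsilon guard is an added assumption.

```python
import numpy as np

def normalize_zscore(volume, fg_mean, fg_std):
    # z-score with dataset-level foreground statistics.
    return (np.asarray(volume, dtype=np.float32) - fg_mean) / max(fg_std, 1e-8)
```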
Specifically, the encoder has six levels, E0 to E5, and the decoder has five levels, D0 to D4.
The decoder works as follows. In the decoder part, the input of each level is not merely the output of the decoder one level above stacked with the output of the encoder at the same level; instead, the encoding and decoding outputs of every level are stacked together, further fusing the multi-scale features. FIG. 2 illustrates how the input of D2 is constructed (D2 is the second level of the decoder, counting from 0 downwards). Compared with the conventional U-Net, D2 receives not only the output feature map of encoder E2 at the same level but also the output feature maps of E0 and E1, which contain lower-level semantic information; to give the feature maps the same size, the outputs of E0 and E1 are down-sampled. Similarly, the decoders at different levels enjoy a richer information flow: the outputs of E5, D3 and D4 are up-sampled and then connected to D2 via skip connections to convey more high-level semantic information. After the feature maps of the different levels are resampled to the same resolution, the number of channels of each feature map is unified to avoid an excess of information: the feature map of each level is convolved with 32 filters of size (3,3,3) in (z, x, y). Finally, the feature maps of all levels are stacked, completing the fusion of features of different scales, and used as the input of D2. A sketch of this construction follows.
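A sketch of this construction in PyTorch; the feature-map names (e0, e1, e2, e5, d3, d4), the channel counts, and the use of trilinear interpolation for both the down- and up-sampling are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class D2Input(nn.Module):
    """Fuse multi-level feature maps into the input of decoder level D2.

    Lower-level encoder maps are down-sampled, higher-level decoder (and
    bottleneck) maps are up-sampled, every map is projected to 32 channels
    with a 3x3x3 convolution, and all maps are concatenated.
    """
    def __init__(self, channels):   # e.g. {'e0': 32, 'e1': 64, 'e2': 128, ...}
        super().__init__()
        self.proj = nn.ModuleDict({
            name: nn.Conv3d(c, 32, kernel_size=3, padding=1)
            for name, c in channels.items()
        })

    def forward(self, feats, target_size):
        # feats: dict of 5D maps {'e0': ..., 'e1': ..., 'e2': ..., 'e5': ...,
        # 'd3': ..., 'd4': ...}; target_size is D2's (z, x, y) resolution.
        fused = []
        for name, f in feats.items():
            # Resample every map (down for e0/e1, up for e5/d3/d4) to D2's
            # resolution, then unify the channel count to 32.
            f = F.interpolate(f, size=target_size, mode='trilinear',
                              align_corners=False)
            fused.append(self.proj[name](f))
        return torch.cat(fused, dim=1)   # the stacked input of D2
```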
Specifically, in step 2 each encoder and decoder level has the structure convolutional layer + normalization + activation function, where the normalization is 3D instance normalization and the activation function is LeakyReLU. A minimal sketch follows.
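A minimal sketch of one such block, assuming PyTorch's `Conv3d`, `InstanceNorm3d` and `LeakyReLU` as the concrete layers; the negative slope is an assumption.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    """Convolution + 3D instance normalization + LeakyReLU, as named above."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(negative_slope=0.01, inplace=True),
    )
```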
Specifically, during training in step 3 the network input is a patch of size (48,160,224), with two patches input per iteration. The patch size is determined by the GPU memory of the machine, so that one iteration fills the available memory. The network output is a 0-1 matrix of the same size as the patch, where 0 represents non-tumor and 1 represents tumor; visually it is a black-and-white image in which white is tumor and black is the non-tumor background region.
Specifically, the loss functions adopted for training in step 3 are the Dice loss and the cross-entropy loss; the Dice loss takes the form of formula (1):

Dice loss = 1 − (2|A ∩ B|) / (|A| + |B|)    (1)

where A and B are the prediction set and the ground-truth annotation set, respectively. A sketch of the combined loss follows.
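A sketch of the combined objective: a soft form of the Dice loss of formula (1) on the tumor channel plus the standard cross-entropy loss. The smoothing constant and the unweighted sum are assumptions; the patent only names the two losses.

```python
import torch
import torch.nn as nn

class DiceCELoss(nn.Module):
    """Soft Dice (formula (1)) plus cross entropy; smoothing is an assumption."""
    def __init__(self, smooth=1e-5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.smooth = smooth

    def forward(self, logits, target):
        # logits: (N, 2, z, x, y); target: (N, z, x, y), long values in {0, 1}
        ce = self.ce(logits, target)
        probs = torch.softmax(logits, dim=1)[:, 1]    # tumor-class probability
        tgt = target.float()
        inter = (probs * tgt).sum()
        dice = (2 * inter + self.smooth) / (probs.sum() + tgt.sum() + self.smooth)
        return (1 - dice) + ce                        # Dice loss = 1 - Dice
```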
The embodiment discloses a segmentation method based on multi-scale feature extraction, which comprises the following steps:
step a: acquiring an original abdomen 3D CT image to be segmented, and preprocessing the original abdomen 3D CT image to be segmented to obtain an abdomen 3D CT image;
Step b: input the abdomen 3D CT image into a segmentation model obtained by any of the above multi-scale feature extraction based segmentation model establishing methods, obtaining the segmentation result of the original abdomen 3D CT image to be segmented.
In particular, the segmentation result is a 3D image (an nii file). The nii file of the segmentation result contains no original CT image, only an image mask: white regions mark positions predicted to be tumor, and black regions are the non-tumor background.
The embodiment also discloses a segmentation device based on multi-scale feature extraction, which comprises a processor and a memory for storing a plurality of functional modules capable of running on the processor, wherein the functional modules comprise: the system comprises a data acquisition and preprocessing module, a model establishing module, a model training module and a segmentation module;
the data acquisition and preprocessing module is used for acquiring an original abdomen 3D CT image set, preprocessing the original abdomen 3D CT image set to acquire an abdomen 3D CT image set, and labeling digestive tract organs of each abdomen 3D CT image in the abdomen 3D CT image set to acquire a label set;
The model building module is used for building a 3D U-Net model comprising an encoder and a decoder, each with several levels from low to high; skip connections exist between each encoder level and the decoders at the same and all higher levels, and between each decoder level and the decoders at all higher levels. The encoder performs down-sampling and the decoder up-sampling, each encoder level first down-sampling in the x and y directions until the x and y resolutions match the z resolution before down-sampling in the z direction;
The model training module is used for training the 3D U-Net model with the abdomen 3D CT image set and the label set, taking the trained model as the segmentation model. Model training is performed on a GTX 1080Ti graphics card with 12 GB of memory; to make the input patch as large as possible, the batch size is fixed at 2, and, without exceeding the memory capacity, the network is fed 3D patches of size (z, x, y) = (48,160,224), with the maximum number of feature maps in the network limited to 320.
The segmentation module is used for acquiring an original abdomen 3D CT image to be segmented and preprocessing it to obtain an abdomen 3D CT image; it then inputs the abdomen 3D CT image into the segmentation model obtained by the model training module to obtain the segmentation result of the original abdomen 3D CT image to be segmented.
Specifically, the preprocessing in the data acquisition and preprocessing module includes data cropping, resampling by third-order spline interpolation, and normalization.
Example 1
This embodiment discloses a segmentation model establishment method based on multi-scale feature extraction, in which the original abdomen 3D CT image set of step 1 consists of 527 cases of small intestinal stromal tumor imaging data obtained from a hospital and stored in DICOM (Digital Imaging and Communications in Medicine) format. 451 cases were randomly selected as the training set and 76 cases as the test set. For the label set, doctors annotated the stromal tumor region of each case with an annotation tool, generating corresponding label files in DICOM format. The many consecutive DICOM files of a case are converted into a NIfTI-format image with the SimpleITK open-source library and input to the network as 3D data; the label files undergo the same format conversion (a conversion sketch follows).
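A sketch of the DICOM-series-to-NIfTI conversion with the SimpleITK series reader mentioned above; paths and the function name are placeholders.

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir, out_path):
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()            # stacks the slices into one 3D volume
    sitk.WriteImage(image, out_path)    # e.g. 'case_0001.nii.gz'
```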
This embodiment takes patches of size (48,160,224) as input. Training runs for 1000 epochs of 250 iterations each, with 2 patches per iteration. The optimizer is stochastic gradient descent (SGD) with momentum 0.99; the initial learning rate is set to 0.01 and decays gradually as the training rounds increase (see the sketch below).
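A sketch of the optimizer setup. The polynomial form of the decay is an assumption, as the text says only that the learning rate decays gradually; the stand-in model is a placeholder for the full 3D U-Net.

```python
import torch

model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # stand-in for the 3D U-Net
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.99)
# 1000 epochs x 250 iterations; the decay is applied once per epoch.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda epoch: (1 - epoch / 1000) ** 0.9)
```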
The experimental schemes comparing different segmentation methods are designed as follows:

Table 1: design of the compared segmentation schemes (the original table is rendered as an image and is not reproduced here; the scheme codes C1, C2, R and P are explained below).
In Table 1, C1 means the data are lightly cropped during preprocessing, i.e., the region outside the human body is cropped from the CT image. C2 means the data are heavily cropped: given an existing small-intestine segmentation result, the largest connected component of the small-intestine region is found and the original CT volume is cropped accordingly; compared with light cropping, heavy cropping also removes part of the body tissue. R denotes use of the improved 3D U-Net model. P denotes transferring a model trained for small-intestine segmentation to the stromal tumor segmentation task.
The experimental results of this embodiment are as follows:

Table 2: experimental results of the compared schemes (the original table is rendered as an image and is not reproduced here; the evaluation metrics are explained below).
Here Dice_per_case denotes the per-case Dice coefficient, Dice_global the global Dice coefficient, FPR the over-segmentation (false-positive) rate, and 3D U-Net the original 3D U-Net model. Comparing the results of the 3D U-Net and R schemes shows that the improved 3D U-Net outperforms the conventional one. Meanwhile, the evaluation indices of the C1 model are higher than those of the original 3D U-Net model, showing that the original data indeed contain redundant information and noise that disturb the network's learning.
After combining the cropping-based data preprocessing with the improved 3D U-Net, the C1-R and C2-R schemes perform markedly better than the previous two. The reason is that the improved 3D U-Net fuses multi-scale information better and adds information-flow paths inside the network, while cropping reduces the input of redundant information, so the network fuses more effective information and can finally segment tumors of smaller volume. The drop in the false-positive rate also indicates that the network perceives tumor characteristics more accurately. That C2-R outperforms C1-R further shows that heavy cropping is necessary.
As shown in FIG. 3, where (a) is the original image, (b) the ground-truth label, (c) the C1-R-P result and (d) the C2-R result, the traditional U-Net cannot segment small-volume tumors, whereas the segmentation by the C1-R-P and C2-R models is clearly better. C1-R-P also outperforms the corresponding scheme C1-R without transfer learning, demonstrating the relevance of the small-intestine segmentation task to the small-intestine stromal tumor segmentation task: since stromal tumors appear in the small-intestine region, a network that has already grasped the features of the small intestine in CT images makes fewer errors of segmenting tumors in non-intestinal regions.

Claims (5)

1. A segmentation model establishment method based on multi-scale feature extraction is characterized by comprising the following steps:
step 1: acquiring an original abdomen 3D CT image set, preprocessing the original abdomen 3D CT image set to obtain an abdomen 3D CT image set, and labeling digestive tract organs of each abdomen 3D CT image in the abdomen 3D CT image set to obtain a label set;
step 2: establishing a 3D U-Net model, wherein the 3D U-Net model comprises an encoder and a decoder, each with several levels from low to high, skip connections exist between each encoder level and the decoders at the same and all higher levels, and skip connections exist between each decoder level and the decoders at all higher levels;
the encoder performs down-sampling and the decoder up-sampling, wherein each encoder level first down-samples in the x and y directions until the x and y resolutions match the z resolution, and only then begins down-sampling in the z direction;
and step 3: and (3) training the 3D U-Net model by using the abdomen 3D CT image set and the label set obtained in the step (1), and taking the trained model as a segmentation model.
2. The multi-scale feature extraction based segmentation model establishment method according to claim 1, wherein the preprocessing in step 1 comprises: data cropping, resampling by third-order spline interpolation, and normalization.
3. A segmentation method based on multi-scale feature extraction comprises the following steps:
step a: acquiring an original abdomen 3D CT image to be segmented, and preprocessing the original abdomen 3D CT image to be segmented to obtain an abdomen 3D CT image;
step b: inputting the abdomen 3D CT image into the segmentation model obtained by the multi-scale feature extraction based segmentation model establishing method according to claim 1 or 2, and obtaining the segmentation result of the original abdomen 3D CT image to be segmented.
4. A multi-scale feature extraction based segmentation apparatus, characterized in that the apparatus comprises a processor and a memory for storing a plurality of functional modules capable of running on the processor, the functional modules comprising: the system comprises a data acquisition and preprocessing module, a model establishing module, a model training module and a segmentation module;
the data acquisition and preprocessing module is used for acquiring an original abdomen 3D CT image set, preprocessing the original abdomen 3D CT image set to acquire an abdomen 3D CT image set, and labeling digestive tract organs of each abdomen 3D CT image in the abdomen 3D CT image set to acquire a label set;
the model building module is used for building a 3D U-Net model, the 3D U-Net model comprising an encoder and a decoder, each with several levels from low to high, wherein skip connections exist between each encoder level and the decoders at the same and all higher levels, and between each decoder level and the decoders at all higher levels; the encoder performs down-sampling and the decoder up-sampling, wherein each encoder level first down-samples in the x and y directions until the x and y resolutions match the z resolution, and only then begins down-sampling in the z direction;
the model training module is used for training the 3D U-Net model by utilizing an abdomen 3D CT image set and a label set, and taking the trained model as a segmentation model;
the segmentation module is used for acquiring an original abdomen 3D CT image to be segmented and preprocessing it to obtain an abdomen 3D CT image; it then inputs the abdomen 3D CT image into the segmentation model obtained by the model training module to obtain the segmentation result of the original abdomen 3D CT image to be segmented.
5. The multi-scale feature extraction based segmentation apparatus according to claim 4, wherein the preprocessing in the data acquisition and preprocessing module comprises: data cropping, resampling by third-order spline interpolation, and normalization.
CN202110372219.4A 2021-04-07 2021-04-07 Segmentation model establishing and segmenting method and device based on multi-scale feature extraction Pending CN113205454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110372219.4A CN113205454A (en) 2021-04-07 2021-04-07 Segmentation model establishing and segmenting method and device based on multi-scale feature extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110372219.4A CN113205454A (en) 2021-04-07 2021-04-07 Segmentation model establishing and segmenting method and device based on multi-scale feature extraction

Publications (1)

Publication Number Publication Date
CN113205454A true CN113205454A (en) 2021-08-03

Family

ID=77026324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110372219.4A Pending CN113205454A (en) 2021-04-07 2021-04-07 Segmentation model establishing and segmenting method and device based on multi-scale feature extraction

Country Status (1)

Country Link
CN (1) CN113205454A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116256720A (en) * 2023-05-09 2023-06-13 武汉大学 Underground target detection method and device based on three-dimensional ground penetrating radar and electronic equipment
CN116256720B (en) * 2023-05-09 2023-10-13 武汉大学 Underground target detection method and device based on three-dimensional ground penetrating radar and electronic equipment

Similar Documents

Publication Publication Date Title
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
CN112150428B (en) Medical image segmentation method based on deep learning
CN111429460B (en) Image segmentation method, image segmentation model training method, device and storage medium
CN110889852A (en) Liver segmentation method based on residual error-attention deep neural network
CN110265141B (en) Computer-aided diagnosis method for liver tumor CT image
CN113674253A (en) Rectal cancer CT image automatic segmentation method based on U-transducer
CN113808146B (en) Multi-organ segmentation method and system for medical image
CN116309650B (en) Medical image segmentation method and system based on double-branch embedded attention mechanism
CN113436173B (en) Abdominal multi-organ segmentation modeling and segmentation method and system based on edge perception
CN115457021A (en) Skin disease image segmentation method and system based on joint attention convolution neural network
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN113421240A (en) Mammary gland classification method and device based on ultrasonic automatic mammary gland full-volume imaging
CN117455906B (en) Digital pathological pancreatic cancer nerve segmentation method based on multi-scale cross fusion and boundary guidance
CN113205454A (en) Segmentation model establishing and segmenting method and device based on multi-scale feature extraction
CN114549394A (en) Deep learning-based tumor focus region semantic segmentation method and system
CN113012164A (en) U-Net kidney tumor image segmentation method and device based on inter-polymeric layer information and storage medium
CN110992320B (en) Medical image segmentation network based on double interleaving
CN112634308A (en) Nasopharyngeal carcinoma target area and endangered organ delineation method based on different receptive fields
CN116258685A (en) Multi-organ segmentation method and device for simultaneous extraction and fusion of global and local features
CN113989269B (en) Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion
CN114882282A (en) Neural network prediction method for colorectal cancer treatment effect based on MRI and CT images
Wang et al. RFPNet: Reorganizing feature pyramid networks for medical image segmentation
Yuan et al. FM-Unet: Biomedical image segmentation based on feedback mechanism Unet
CN115345886B (en) Brain glioma segmentation method based on multi-modal fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination