CN114972171A - Prostate MR image semi-supervised segmentation method based on deep learning - Google Patents

Prostate MR image semi-supervised segmentation method based on deep learning

Info

Publication number
CN114972171A
CN114972171A (application CN202210344442.2A)
Authority
CN
China
Prior art keywords
image
prostate
loss
gold standard
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210344442.2A
Other languages
Chinese (zh)
Inventor
张旭明
张俊洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202210344442.2A priority Critical patent/CN114972171A/en
Publication of CN114972171A publication Critical patent/CN114972171A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate

Abstract

The invention discloses a deep-learning-based semi-supervised segmentation method for prostate MR images, which comprises the following steps: constructing a performer-judge network, PAJ_net, by combining Attention U-Net with a residual network; introducing a set of self-made new labels into a prostate MR image data set containing only a small number of gold standards, which helps PAJ_net accurately distinguish during training whether an input image has a gold standard; and training PAJ_net with the prostate MR images carrying a small number of gold standards and the new labels to segment two regions of the prostate MR image, the transition zone (TZ) and the peripheral zone (PZ). The invention adopts a stepwise training strategy in which the residual network supervises the training of the U-Net, so that the information in prostate MR images without gold standards is used effectively and a network model with better generalization is obtained; a Mumford-Shah energy function is further added to the loss function as a constraint during PAJ_net training, providing a more effective semi-supervised method for real-time multi-region segmentation of prostate MR images.

Description

Prostate MR image semi-supervised segmentation method based on deep learning
Technical Field
The invention belongs to the technical field of image segmentation in medical image processing, and particularly relates to a prostate MR image semi-supervised segmentation method based on deep learning.
Background
Influenced by factors such as age, inflammation, heredity, lifestyle and diet, prostate cancer has become one of the cancers with the highest incidence in middle-aged and elderly men worldwide, and it causes a large number of deaths every year. In recent years, as living standards have risen and lifestyles have changed, the numbers of new prostate cancer cases and deaths have also increased year by year. Needle biopsy is the gold standard for the definitive diagnosis of prostate cancer, and needle biopsy under MR-ultrasound fusion guidance has attracted widespread clinical attention. By combining the positioning accuracy of MR with the real-time nature of ultrasound imaging, this technique enables targeted aspiration biopsy and can effectively improve the detection rate of prostate cancer.
Automatic segmentation of prostate MR images plays an important role in achieving targeted needle biopsy, but the classical segmentation algorithms currently available, such as threshold-based, active-contour-model-based and clustering-based image segmentation methods, cannot meet the combined clinical requirements of real-time performance, accuracy and full automation during puncture. For this reason, in recent years many researchers have applied deep learning to prostate MR image segmentation. Fang Song et al. greatly improved the accuracy of prostate MR image segmentation by applying a new Pyramid Scene Parsing network (PSPNet) to two-dimensional prostate MR images. Liu et al. designed a segmentation method based on a new fully convolutional network for segmenting the PZ and TZ regions of prostate MR images: semantic information is extracted from the original image by an improved ResNet50, multi-scale feature information is captured by a feature pyramid attention network, and spatial information is finally recovered by a decoding network. In subsequent research, Liu et al. further optimized the network by adding a spatial attention module and replacing the single feature pyramid attention network with several, further improving the segmentation of PZ and TZ. Qian et al. proposed a new encoder-decoder segmentation network, ProSegNet, which uses dense blocks as the feature extraction units in the encoder, introduces spatial and channel attention mechanisms in the decoder, and shows excellent segmentation performance on the Promise12 and ProstateX data sets. Chen et al. cascaded dense U-Nets and adopted a stepwise training scheme; by organically combining coarse segmentation, refinement of the segmentation result and scale-normalized segmentation, they improved prostate gland segmentation and verified the effectiveness of the cascade operation.
In general, existing deep-learning-based prostate MR segmentation methods focus on continuously optimizing the network to improve segmentation accuracy, but they require a large amount of medical image data with corresponding gold standards during training. Acquiring large amounts of medical image data is difficult, and producing the corresponding gold standards is time-consuming and labor-intensive because it requires manual annotation by experienced radiologists. Reducing the model's dependence on labels during training is therefore of great significance for improving the performance of deep-learning-based prostate MR image segmentation methods.
Disclosure of Invention
In view of the defects and improvement needs of the prior art, the present invention provides a deep-learning-based semi-supervised segmentation method for prostate MR images, aiming to reduce the dependence of deep-learning-based prostate MR segmentation network models on high-quality gold-standard images and to achieve accurate segmentation of prostate MR images with small sample data.
In order to achieve the above object, in a first aspect, the present invention provides a deep learning-based semi-supervised segmentation method for a prostate MR image, which includes the following steps:
s1, acquiring an original prostate MR image set and a gold standard corresponding to a partial image in the image set;
s2, constructing a first training set according to the partial image and the corresponding gold standard, importing the first training set into an executor network for training to obtain a corresponding segmentation result, and calculating to obtain a first actual loss by combining the gold standard; superposing the partial images and the corresponding segmentation results on a channel to obtain arrays, and training a judge network by taking the corresponding first actual loss as a label of each array;
s3, labeling the gold standard of each image in the image set, constructing a second training set according to the labeled image set and the gold standard of the partial image, importing the second training set into the executor network for training to obtain a corresponding segmentation result, and calculating the second actual loss of the image with the gold standard by combining the gold standard; superposing the golden standard-free image and the segmentation result on a channel to obtain a corresponding array, and introducing the corresponding array into the judger network trained in S2 to obtain the prediction loss; training the executor network according to the loss between the second actual loss and the predicted loss to obtain an optimal executor network;
s4, segmenting the prostate MR image to be segmented by utilizing the optimal performer network.
Further, after S1, the method further includes:
adjusting the window width and window level of the original prostate MR image in the image set, improving the contrast of the image through histogram equalization, and cutting the adjusted image into a uniform size;
expressing each pixel point in the gold standard by using a one-hot code;
after the corresponding relation between each original prostate MR image and the gold standard is determined, the images are shuffled and normalized.
Further, after normalization, the image data are augmented in four modes: random angle deflection, equal-proportion translation, random scaling, and up-down flipping of the images.
Further, the first actual loss is expressed as:
True_CE = −(1/M) × Σ_{n=1..N} Σ_{m=1..M} y_t(n,m) × log(y_p(n,m))
where y_t(n,m) is the m-th pixel value of the n-th category in the gold standard image, y_p(n,m) is the m-th pixel value of the n-th category in the input image, N is the number of segmentation categories, and M is the number of pixels of the input image.
Further, in S2, the loss function in the training of the judger network is represented as:
MSE = (1/B) × Σ_{b=1..B} (Pred_CE(b) − True_CE(b))²
where Pred_CE(b) is the b-th predicted loss, True_CE(b) is the b-th first actual loss, and B is the batch size.
Further, in S3, the loss between the second actual loss and the predicted loss is expressed as:
PAJ_loss = α×True_CE + (1−α)×λ×Pred_CE
where α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss; Pred_CE is the predicted loss; λ is a hyperparameter.
Further, in S3, the loss between the second actual loss and the predicted loss is expressed as:
PAJ_loss_new = α×True_CE + (1−α)×λ×Pred_CE + β×Loss_MS
Loss_MS = Σ_{n=1..N} ∫_Ω |x(r) − A_n|² x_o(r) dr + γ × Σ_{n=1..N} ∫_Ω |∇x_o(r)| dr
A_n = ∫_Ω x(r) x_o(r) dr / ∫_Ω x_o(r) dr
where α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss, Pred_CE is the predicted loss, and λ is a hyperparameter; β is a fixed parameter and Loss_MS is an unsupervised loss function; N is the number of segmentation categories, x(r) is the normalized input image, A_n is the average pixel value of the n-th class, x_o(r) is the n-th class segmentation result output by the Softmax layer, |∇x_o(r)| is the gradient of the n-th class segmentation result, γ is a fixed parameter, and r ∈ Ω denotes any pixel in the image domain.
Further, a set of new labels is made using the floating-point numbers 0. and 1., where 1 labels an original prostate MR image with a gold standard and 0 labels an original prostate MR image without a gold standard.
In a second aspect, the present invention provides a deep learning-based prostate MR image semi-supervised segmentation system, including: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium and execute the method according to the first aspect.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
(1) The invention adopts a stepwise training strategy: first, a small number of prostate MR images with gold standards are used for training, generating a large amount of specific data with which the judge network is trained and its optimal weights are saved; then, the judge network loaded with the optimal weights scores the segmentation results of prostate MR images without gold standards during training of the executor network. As a result, the network model retains excellent segmentation capability while using only a small number of gold standard images.
(2) The Mumford-Shah energy function is added as a constraint to the loss function used to train the executor network, which further improves the generalization capability of the model and makes the segmentation of prostate MR images more accurate.
(3) A set of self-made new labels is introduced into the data set; the new labels consist of the floating-point numbers 0. and 1., where 0 indicates that the input prostate MR image has no gold standard and 1 indicates that it has one, so that the network model can identify whether an input image has a gold standard.
Drawings
Fig. 1 is a schematic flow chart of a deep learning-based semi-supervised segmentation method for a prostate MR image according to the present invention;
FIG. 2(a) is a schematic diagram of the PAJ_net structure in the first training step according to the present invention;
FIG. 2(b) is a schematic diagram of the connection (Concatenate) operation provided by the present invention;
FIG. 2(c) is a schematic diagram of the PAJ_net structure in the second training step according to the present invention;
FIG. 3 is a schematic diagram of the CE-value regression results of ResNet50 at different gold standard ratios according to the present invention;
FIG. 4(a) shows the DSC results for the PZ region of prostate MR images obtained by the first and third methods at different gold standard ratios according to the present invention;
FIG. 4(b) shows the DSC results for the TZ region of prostate MR images obtained by the first and third methods at different gold standard ratios according to the present invention;
FIG. 5(a) shows the segmentation results, obtained by the first, second and third methods according to the present invention, of a prostate MR image containing only TZ;
FIG. 5(b) shows the segmentation results, obtained by the first, second and third methods according to the present invention, of a prostate MR image containing both TZ and PZ;
FIG. 5(c) shows the segmentation results, obtained by the first, second and third methods according to the present invention, of a prostate MR image containing only PZ.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, in conjunction with fig. 2(a) to 2(c), the present invention provides a deep learning-based semi-supervised segmentation method for a prostate MR image, including operations S1 to S4.
In operation S1, an original prostate MR image set and the gold standards corresponding to part of the images in the set are obtained.
The publicly available prostate MR image data set ProstateX is used to obtain the original prostate MR image set and the gold standards corresponding to part of its images. It will be appreciated that the partial images may be 1/8, 1/4, 3/8, 1/2, etc. of all images in the image set.
Before importing the training image into the network model, preprocessing is required, specifically:
(1) adjusting the window width and window level of the prostate MR data in DICOM format in ProstateX, improving the contrast of the images through histogram equalization, and finally cropping the adjusted images to a uniform size of 256 × 256;
(2) remapping the pixel values of 255 in each gold standard image corresponding to a prostate MR image to class indices starting from 0, and then performing one-hot encoding so that each pixel in the gold standard image is represented by a one-hot code;
(3) dividing all processed data into a training set, a validation set and a test set in the ratio 8:1:1; after the correspondence between each MR image and its gold standard is determined, the data are shuffled and linearly normalized according to the following formula:
n_i′ = (n_i − n_min) / (n_max − n_min), i = 1, 2, …, C
where n_i is the i-th gray value of the input feature, n_min is the smallest gray value of the input feature, n_max is the largest gray value of the input feature, and C is the number of gray values of the input feature.
The prostate MR image semi-supervised segmentation model comprises an executor (Performer) network and an evaluator (Judge) network: the executor network segments the input image, and the evaluator network scores the segmentation result of the input image. In this embodiment, the executor network is preferably Attention U-Net and the evaluator network is preferably a residual network, together constituting the performer-and-judge network PAJ_net (Performer And Judge network).
Operation S2, constructing a first training set according to the partial image and the gold standard corresponding to the partial image, importing the first training set into an executor network for training to obtain a corresponding segmentation result, and calculating to obtain a first actual loss by combining the gold standard; and superposing the partial images and the corresponding segmentation results on a channel to obtain arrays, and training the network of the judger by taking the corresponding first actual loss as a label of each array. Specifically, the method comprises the following steps:
(1) In order to improve the robustness of the model, the small number of prostate MR images and their corresponding gold standards are augmented using four modes, namely random angle deflection, equal-scale translation, random scaling and up-down flipping of the images, and then fed into PAJ_net for the first training step.
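Purely as an illustration, the four augmentation modes could be realized with a standard Keras ImageDataGenerator; the parameter values below are assumptions, and identical transforms would also have to be applied to the gold standard masks (e.g., via a second generator with the same random seed):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical settings covering the four augmentation modes named above.
augmenter = ImageDataGenerator(
    rotation_range=15,       # random angle deflection
    width_shift_range=0.05,  # equal-scale translation
    height_shift_range=0.05,
    zoom_range=0.1,          # random scaling
    vertical_flip=True,      # up-down flipping
)
```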
(2) In PAJ_net, the Performer is Attention U-Net with a slightly modified structure: the numbers of convolution kernels in the four up-sampling layers and four down-sampling layers of Attention U-Net are reduced to half of the original, namely 32, 64, 128 and 256, which greatly reduces the number of network parameters. Cross entropy (CE) is used as the loss function:
True_CE = −(1/M) × Σ_{n=1..N} Σ_{m=1..M} y_t(n,m) × log(y_p(n,m))
where y_t(n,m) is the m-th pixel value of the n-th category in the gold standard image, y_p(n,m) is the m-th pixel value of the n-th category in the segmentation result of the input image, N is the number of segmentation categories, and M is the number of pixels of the input image.
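A minimal NumPy sketch of this cross-entropy term, assuming one-hot gold standards and softmax outputs arranged as (N categories, M pixels); the epsilon is an added numerical guard:

```python
import numpy as np

def true_ce(y_t: np.ndarray, y_p: np.ndarray, eps: float = 1e-8) -> float:
    # y_t: one-hot gold standard, y_p: softmax output, both shaped (N, M).
    m = y_t.shape[1]
    return float(-(y_t * np.log(y_p + eps)).sum() / m)
```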
(3) The 60000 segmentation results of size 256 × 256 × 3 generated by the Performer during training are stacked with the corresponding 256 × 256 × 1 input images along the channel dimension, the resulting 256 × 256 × 4 arrays are saved, and the 60000 cross entropy values are used as the label of each array. Finally, the 60000 groups of data are shuffled, 10000 groups are split off as the validation set, and the data are sent to the Judge for training;
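A sketch of how one Judge training sample could be assembled from a Performer output (the function name is illustrative; the shapes follow the description above):

```python
import numpy as np

def make_judge_sample(image: np.ndarray, seg: np.ndarray, ce_value: float):
    # image: (256, 256, 1) input slice; seg: (256, 256, 3) softmax segmentation.
    # Stacking along the channel axis gives the (256, 256, 4) array, and the
    # scalar CE of that segmentation serves as the regression label.
    x = np.concatenate([image, seg], axis=-1)
    return x, np.float32(ce_value)
```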
(4) The Judge uses ResNet50 to perform regression of the CE values of the Performer's segmentation results, with the following mean squared error (MSE) as the loss function during training, finally yielding the trained Judge:
MSE = (1/B) × Σ_{b=1..B} (Pred_CE(b) − True_CE(b))²
where Pred_CE(b) is the b-th predicted loss, True_CE(b) is the b-th first actual loss, and B is the batch size.
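A hedged Keras sketch of such a Judge: a ResNet50 backbone regressing one CE value from a 256 × 256 × 4 array and trained with MSE. The patent only specifies ResNet50 and MSE; the untrained weights, the global-average pooling and the single dense output head are assumptions of this example:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

backbone = ResNet50(include_top=False, weights=None,
                    input_shape=(256, 256, 4), pooling="avg")
judge = models.Sequential([backbone, layers.Dense(1, activation="linear")])
judge.compile(optimizer="adam", loss="mse")  # MSE regression of the CE value
```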
Operation S3, labeling the images in the image set with or without a gold standard, constructing a second training set with the labeled image set and the gold standard of the partial image, importing the second training set into the actor network for training to obtain a corresponding segmentation result, and calculating a second actual loss for the images with the gold standard by combining the gold standard; superposing the golden standard-free image and the segmentation result on a channel to obtain a corresponding array, and introducing the corresponding array into the judger network trained in S2 to obtain the prediction loss; and training the executor network according to the loss between the second actual loss and the predicted loss to obtain an optimal executor network.
In this embodiment, a set of new labels is made using the floating-point numbers 0. and 1.: prostate MR images with gold standards are labeled 1 and those without gold standards are labeled 0, resulting in a semi-supervised training set.
The semi-supervised training set is then shuffled and augmented in four modes, namely random angle deflection, equal-proportion translation, random scaling and up-down flipping of the images, and the data are fed into PAJ_net in batches for the second training step.
In the second training step, the loss for prostate MR images with a gold standard is calculated with CE, while for images without a gold standard the CE value is predicted by the Judge loaded with the optimal weights; the second training step of PAJ_net is completed using the following loss function:
PAJ_loss = α×True_CE + (1−α)×λ×Pred_CE
where α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss; Pred_CE is the predicted loss; λ is a hyperparameter, set here to λ = 0.1.
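A one-function sketch of this semi-supervised loss for a single image (names illustrative):

```python
def paj_loss(true_ce: float, pred_ce: float, has_gold: bool, lam: float = 0.1) -> float:
    # alpha = 1 keeps the real CE for images with a gold standard;
    # alpha = 0 uses the Judge-predicted CE scaled by lambda instead.
    alpha = 1.0 if has_gold else 0.0
    return alpha * true_ce + (1.0 - alpha) * lam * pred_ce
```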
In addition, in order to further improve the generalization capability of the model, the classical Mumford-Shah energy function from level-set segmentation is introduced and the following unsupervised loss function is constructed:
Loss_MS = Σ_{n=1..N} ∫_Ω |x(r) − A_n|² x_o(r) dr + γ × Σ_{n=1..N} ∫_Ω |∇x_o(r)| dr
where N is the number of segmentation categories, x(r) is the normalized input image, x_o(r) is the n-th class segmentation result output by the Softmax layer, |∇x_o(r)| is the gradient of the n-th class segmentation result, γ is a fixed parameter set to γ = 0.0001, and r ∈ Ω denotes any pixel in the image domain; A_n is the average pixel value of the n-th class, calculated as:
A_n = ∫_Ω x(r) x_o(r) dr / ∫_Ω x_o(r) dr
On the basis of PAJ_loss, the Mumford-Shah functional is added as a constraint, giving the redefined PAJ_loss_new:
PAJ_loss_new = α×True_CE + (1−α)×λ×Pred_CE + β×Loss_MS
where α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss, Pred_CE is the predicted loss, and λ is a hyperparameter; β is a fixed parameter, set to β = 1e-5.
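A discrete NumPy sketch of Loss_MS under the formulas above; replacing the integrals by pixel sums and the gradient by finite differences is an assumption about the discretization, not a detail taken from the patent:

```python
import numpy as np

def mumford_shah_loss(x: np.ndarray, probs: np.ndarray, gamma: float = 1e-4) -> float:
    # x: normalized image (H, W); probs: softmax output (H, W, N).
    loss = 0.0
    for n in range(probs.shape[-1]):
        p = probs[..., n]
        a_n = (x * p).sum() / (p.sum() + 1e-8)   # class-average intensity A_n
        loss += ((x - a_n) ** 2 * p).sum()       # piecewise-constant fidelity term
        gy, gx = np.gradient(p)                  # |grad x_o(r)| regularizer
        loss += gamma * np.sqrt(gx ** 2 + gy ** 2).sum()
    return float(loss)
```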
In operation S4, the optimal performer network is used to segment the MR image of the prostate to be segmented.
In order to further demonstrate the effectiveness of the present invention, this embodiment compares the image segmentation results of the following three methods.
The first method: the segmentation method provided by the invention is used to segment the TZ and PZ regions of prostate MR images, without adding the Mumford-Shah functional as a constraint;
The second method: the segmentation method provided by the invention is used to segment the TZ and PZ regions of prostate MR images, with the Mumford-Shah functional added as a constraint;
The third method: the TZ and PZ regions of prostate MR images are segmented with Attention U-Net (Attention U-Net: Learning Where to Look for the Pancreas. arXiv preprint arXiv:1804.03999v3, 2018).
(1) Comparison of the segmentation effects of the first and third methods.
Quantitative comparisons were evaluated using the Dice Similarity Coefficient (DSC), defined as:
DSC = 2TP / (2TP + FP + FN)
where TP denotes sample points whose predicted label and true label are both positive, FP denotes sample points whose predicted label is positive but whose true label is negative, and FN denotes sample points whose predicted label is negative but whose true label is positive.
The line charts in fig. 4(a) and 4(b) clearly show how PAJ_net and Attention U-Net segment the two prostate MR image regions, PZ and TZ, at different gold standard ratios. As shown in fig. 4(a), for the PZ region PAJ_net greatly improves the generalization capability of the model compared with Attention U-Net at every gold standard ratio. When only 1/8 of the gold standard data is used, the DSC of the PZ segmentation result of PAJ_net is nearly 8% higher than that of Attention U-Net trained with supervision, reaching 90.03% of the DSC obtained with fully labeled data. As the amount of labeled data participating in training increases and the amount of unlabeled data decreases, the improvement brought by PAJ_net gradually shrinks; with 1/2 of the labeled data, 96.88% of the fully labeled result is reached. As shown in fig. 4(b), PAJ_net also outperforms Attention U-Net in the TZ region at the different labeling ratios. When only 1/8 of the labeled data is used, the DSC of the TZ segmentation result of PAJ_net is more than 6% higher than that of the supervised Attention U-Net, reaching 95.78% of the result with fully labeled data.
(2) Comparison of the segmentation effects of the first, second and third methods.
Quantitative comparisons were evaluated using DSC, Intersection over Union (IOU), Recall and Precision, where IOU, Recall and Precision are defined as follows:
IOU = TP / (TP + FP + FN)
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
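A short NumPy sketch computing the four metrics from boolean region masks (the epsilon avoids division by zero for empty regions):

```python
import numpy as np

def region_metrics(pred: np.ndarray, gold: np.ndarray, eps: float = 1e-8):
    # pred, gold: boolean masks of one region (e.g., PZ or TZ).
    tp = np.logical_and(pred, gold).sum()
    fp = np.logical_and(pred, ~gold).sum()
    fn = np.logical_and(~pred, gold).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    return dsc, iou, recall, precision
```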
Table 1 lists the four quantitative evaluation indexes of the segmentation results obtained with Attention U-Net, PAJ_net and PAJ_net + Mumford-Shah functional, all trained in the same way with the 1/4-scale data set. As can be seen from the table, PAJ_net and PAJ_net + Mumford-Shah functional are greatly improved over Attention U-Net on all four quantitative evaluation indexes, DSC, IOU, Recall and Precision. In the PZ region of the prostate, the segmentation accuracy of PAJ_net and PAJ_net + Mumford-Shah functional is similar: PAJ_net reaches 0.7012 and 0.7652 on the DSC and Precision indexes, respectively, while PAJ_net + Mumford-Shah reaches 0.5692 and 0.6966 on the IOU and Recall indexes, respectively, giving more overlapping area and better sensitivity. In the TZ region of the prostate, the PAJ_net + Mumford-Shah method has the best overall segmentation performance, slightly better than PAJ_net, with better generalization capability.
TABLE 1 Segmentation results of each method for the PZ and TZ regions (the table is provided as an image in the original publication)
With reference to fig. 5(a) to 5(c), it can also be seen that, compared with the other methods, the second method not only segments the regions of interest better but also largely avoids mis-segmentation.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A deep learning-based prostate MR image semi-supervised segmentation method is characterized by comprising the following steps:
s1, acquiring an original prostate MR image set and a gold standard corresponding to a partial image in the image set;
s2, constructing a first training set according to the partial image and the corresponding gold standard, importing the first training set into an executor network for training to obtain a corresponding segmentation result, and calculating to obtain a first actual loss by combining the gold standard; superposing the partial images and the corresponding segmentation results on a channel to obtain arrays, and training a judge network by taking the corresponding first actual loss as a label of each array;
s3, labeling each image in the image set with or without a gold standard, constructing a second training set by using the labeled image set and the gold standard of the partial image, importing the second training set into the executor network for training to obtain a corresponding segmentation result, and calculating to obtain a second actual loss for the image with the gold standard by combining the gold standard; superposing the images without the golden standard and the segmentation results on a channel to obtain corresponding arrays, and importing the corresponding arrays into a trained evaluator network in S2 to obtain predicted loss; training the executor network according to the loss between the second actual loss and the predicted loss to obtain an optimal executor network;
s4, segmenting the prostate MR image to be segmented by utilizing the optimal performer network.
2. The deep learning based prostate MR image semi-supervised segmentation method of claim 1, wherein after S1, the method further comprises:
adjusting the window width and window level of the original prostate MR image in the image set, improving the contrast of the image through histogram equalization, and cutting the adjusted image into a uniform size;
expressing each pixel point in the gold standard by using a one-hot code;
after the corresponding relation between each original prostate MR image and the gold standard is determined, the images are disturbed and normalized.
3. The deep learning-based prostate MR image semi-supervised segmentation method according to claim 2, wherein after the normalization process, the image data is augmented by four modes, i.e. random image angle deflection, image equal-scale translation, random image scaling and image up-down flipping.
4. The deep learning based prostate MR image semi-supervised segmentation method according to claim 1, wherein the first actual loss is expressed as:
True_CE = −(1/M) × Σ_{n=1..N} Σ_{m=1..M} y_t(n,m) × log(y_p(n,m))
wherein y_t(n,m) is the m-th pixel value of the n-th category in the gold standard image, y_p(n,m) is the m-th pixel value of the n-th category in the input image, N is the number of segmentation categories, and M is the number of pixels of the input image.
5. The deep learning based prostate MR image semi-supervised segmentation method according to claim 1, wherein in S2, the loss function in training the network of judges is expressed as:
MSE = (1/B) × Σ_{b=1..B} (Pred_CE(b) − True_CE(b))²
wherein Pred_CE(b) is the b-th predicted loss, True_CE(b) is the b-th first actual loss, and B is the batch size.
6. The deep learning based prostate MR image semi-supervised segmentation method according to claim 1, wherein in S3, the loss between the second actual loss and the predicted loss is expressed as:
PAJ_loss = α×True_CE + (1−α)×λ×Pred_CE
wherein α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss; Pred_CE is the predicted loss; λ is a hyperparameter.
7. The deep learning based prostate MR image semi-supervised segmentation method according to claim 1, wherein in S3, the loss between the second actual loss and the predicted loss is expressed as:
PAJ_loss_new = α×True_CE + (1−α)×λ×Pred_CE + β×Loss_MS
Loss_MS = Σ_{n=1..N} ∫_Ω |x(r) − A_n|² x_o(r) dr + γ × Σ_{n=1..N} ∫_Ω |∇x_o(r)| dr
A_n = ∫_Ω x(r) x_o(r) dr / ∫_Ω x_o(r) dr
wherein α is 0 or 1: α = 1 for images with a gold standard and α = 0 for images without a gold standard; True_CE is the second actual loss, Pred_CE is the predicted loss, and λ is a hyperparameter; β is a fixed parameter and Loss_MS is an unsupervised loss function; N is the number of segmentation categories, x(r) is the normalized input image, A_n is the average pixel value of the n-th class, x_o(r) is the n-th class segmentation result output by the Softmax layer, |∇x_o(r)| is the gradient of the n-th class segmentation result, γ is a fixed parameter, and r ∈ Ω denotes any pixel in the image domain.
8. The deep learning based prostate MR image semi-supervised segmentation method of claim 1, 6 or 7, wherein the floating-point numbers 0. and 1. are used to make a set of new labels, 1 labeling original prostate MR images with a gold standard and 0 labeling original prostate MR images without a gold standard.
9. A deep learning-based semi-supervised segmentation system for a prostate MR image is characterized by comprising the following components: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium and execute the method according to any one of claims 1 to 8.
CN202210344442.2A 2022-03-31 2022-03-31 Prostate MR image semi-supervised segmentation method based on deep learning Pending CN114972171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344442.2A CN114972171A (en) 2022-03-31 2022-03-31 Prostate MR image semi-supervised segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210344442.2A CN114972171A (en) 2022-03-31 2022-03-31 Prostate MR image semi-supervised segmentation method based on deep learning

Publications (1)

Publication Number Publication Date
CN114972171A true CN114972171A (en) 2022-08-30

Family

ID=82977228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344442.2A Pending CN114972171A (en) 2022-03-31 2022-03-31 Prostate MR image semi-supervised segmentation method based on deep learning

Country Status (1)

Country Link
CN (1) CN114972171A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117422880A (en) * 2023-12-18 2024-01-19 齐鲁工业大学(山东省科学院) Segmentation method and system combining improved attention mechanism and CV model
CN117422880B (en) * 2023-12-18 2024-03-22 齐鲁工业大学(山东省科学院) Segmentation method and system combining improved attention mechanism and CV model

Similar Documents

Publication Publication Date Title
Liu et al. Panoptic feature fusion net: a novel instance segmentation paradigm for biomedical and biological images
CN113610822B (en) Surface defect detection method based on multi-scale information fusion
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN112149603B (en) Cross-modal data augmentation-based continuous sign language identification method
CN115829991A (en) Steel surface defect detection method based on improved YOLOv5s
CN115760867A (en) Organoid segmentation method and system based on improved U-Net network
CN110599502A (en) Skin lesion segmentation method based on deep learning
CN115311194A (en) Automatic CT liver image segmentation method based on transformer and SE block
Zhang et al. An efficient semi-supervised manifold embedding for crowd counting
CN116229482A (en) Visual multi-mode character detection recognition and error correction method in network public opinion analysis
CN114972171A (en) Prostate MR image semi-supervised segmentation method based on deep learning
Wang et al. Multi-task generative adversarial learning for nuclei segmentation with dual attention and recurrent convolution
Li et al. Image segmentation based on improved unet
Liu et al. WSRD-Net: A convolutional neural network-based arbitrary-oriented wheat stripe rust detection method
Drioua et al. Breast Cancer Detection from Histopathology Images Based on YOLOv5
Ukwuoma et al. LCSB-inception: Reliable and effective light-chroma separated branches for Covid-19 detection from chest X-ray images
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
CN111783796A (en) PET/CT image recognition system based on depth feature fusion
CN116524258A (en) Landslide detection method and system based on multi-label classification
CN116228731A (en) Multi-contrast learning coronary artery high-risk plaque detection method, system and terminal
Huang et al. DeeptransMap: a considerably deep transmission estimation network for single image dehazing
CN115527031B (en) Bone marrow cell image segmentation method, computer device and readable storage medium
Li et al. Beta network for boundary detection under nondeterministic labels
Song et al. Multi-scale Superpixel based Hierarchical Attention model for brain CT classification
CN117197156B (en) Lesion segmentation method and system based on double decoders UNet and Transformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination