CN115512108A - Semi-supervised OCT image retina segmentation method based on uncertainty - Google Patents

Semi-supervised OCT image retina segmentation method based on uncertainty

Info

Publication number
CN115512108A
Authority
CN
China
Prior art keywords
network
uncertainty
image
map
supervised
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211128245.3A
Other languages
Chinese (zh)
Inventor
刘现文
王亚奇
贾刚勇
顾人舒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202211128245.3A priority Critical patent/CN115512108A/en
Publication of CN115512108A publication Critical patent/CN115512108A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/32Normalisation of the pattern dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7753Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-supervised OCT image retina segmentation method based on uncertainty. For labeled data, an attention map is derived from the feature map extracted by a first network, the input image is attention-enhanced, and the enhanced image is fed into the first network and a second network again for training. For unlabeled data, to reduce pseudo-label error, cross supervision between the first network and the second network is adopted, assisted by uncertainty guidance. Multi-scale features and an uncertainty map obtained through the first network are fed together into an uncertainty repair module to obtain repaired pseudo labels, which then supervise the output of the first network, and a weighted uncertainty repair loss is computed. After each iteration the model is evaluated on the test set, and the model parameters are saved whenever the current Dice coefficient is higher than the previous iteration's result. The method improves the generalization of the model, alleviates misleading pseudo labels, and improves the boundary segmentation precision of the pseudo labels.

Description

Semi-supervised OCT image retina segmentation method based on uncertainty
Technical Field
The invention belongs to the field of image segmentation, and particularly relates to a semi-supervised OCT image retina segmentation method based on uncertainty.
Background
Optical coherence tomography (OCT) is a non-invasive, real-time, non-contact imaging technique based on low-coherence interferometry that is commonly used for high-resolution imaging of the retina. Many ophthalmic and systemic diseases (e.g., myopia, diabetic retinopathy, and AMD) cause changes in retinal structure. Automatic retinal segmentation can delineate the retina in OCT images, making it easier to inspect the relevant regions and helping to determine the cause of these structural changes. Automatic retinal segmentation of OCT images therefore has significant application and research value.
Traditional retinal segmentation methods are mostly based either on mathematical models or on supervised learning. The former involve very complex constraints and time-consuming iterations; for the latter, with the rise of deep learning in recent years, convolutional neural networks have demonstrated strong image segmentation capability. However, automatic retinal segmentation is still not mature, mainly because of the following difficulties: (1) the retinal boundary is easily affected by disease; for example, leakage can cause discontinuities in the retinal structure; (2) retinal OCT images are affected by speckle noise and by the acquisition device, so boundaries are blurred and hard to segment accurately; (3) retinal OCT image data are scarce and manual annotation is expensive.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a semi-supervised OCT image retina segmentation method based on uncertainty.
To achieve the above object, the present invention comprises the steps of:
S1, dividing an OCT image data set into a training set and a test set, wherein the training set consists of 10% labeled images and 90% unlabeled images. The training-set and test-set images are preprocessed by cropping, rescaling, and unifying the data size, and the training set is then input into the model for training.
S2, for the labeled data, an attention map is obtained from the feature map extracted by the first network, attention enhancement is applied to the input image, and the enhanced image is fed into the first network and the second network again for training.
S3, for the unlabeled data, in order to reduce pseudo-label error, cross supervision between the first network and the second network is adopted, assisted by uncertainty guidance.
S4, multi-scale features and an uncertainty map are obtained through the first network and fed together into an uncertainty repair module to obtain repaired pseudo labels, which then supervise the output of the first network; the weighted uncertainty repair loss is computed.
S5, after each iteration the model is evaluated on the test set to obtain segmented OCT images; the test index is the Dice coefficient, and when the current Dice coefficient is higher than the previous iteration's result, the model parameters are saved.
The first network is a U-net network model, and the second network is a Swin-Unet network model.
Further, the above-mentioned semi-supervised OCT image retinal segmentation method based on uncertainty guidance and repair, wherein: the step S2 specifically includes the following steps:
s21: and inputting the data with the labels into the first network and the second network, and then calculating a first supervision loss by using a Dice function with the corresponding labels.
S22: the feature map obtained after four times of downsampling through the first network is selected, a plurality of attention maps are generated by adopting a dimensionality reduction operation, and then one attention map is randomly selected.
S23: in the attention map selected in step S22, a plurality of square black occlusion areas are randomly generated and superimposed on the original image, so as to obtain an enhanced picture.
S24: and inputting the enhanced picture obtained in the step S23 into the first network and the second network, calculating a second supervision loss by using a Dice function together with the corresponding label, and synthesizing the first supervision loss and the second supervision loss to obtain a total supervised loss function.
The first and second supervision losses computed in step S2 ensure that the first network and the second network learn correct knowledge from the labels and, at the same time, improve the robustness of both networks.
Further, the above-mentioned semi-supervised OCT image retinal segmentation method based on uncertainty guidance and repair, wherein: the step S3 specifically includes the following steps:
s31: inputting the label-free data into the first network and the second network to obtain an image segmentation probability map, and then carrying out binarization on the image segmentation probability map based on a threshold value of 0.5 to obtain a corresponding image segmentation mask.
S32: starting a dropout function by the first network and the second network, randomly predicting forward for multiple times to obtain multiple image segmentation probability maps and calculating an average probability map, and then calculating corresponding uncertainty maps by using an information entropy calculation formula.
S33: and combining the uncertainty map obtained in the S32 with an MSE function to guide the segmentation mask of the second network and the image segmentation probability map of the first network, and the segmentation mask of the first network and the image segmentation probability map of the second network to calculate and obtain the weighted uncertainty loss.
The weighted uncertainty loss calculated in the step S3 can enable the first network and the second network to calculate the loss according to the segmentation probability map and the segmentation mask of the other network, thereby better avoiding learning of wrong information and improving the utilization rate of the non-tag data.
Further, the above-mentioned semi-supervised OCT image retinal segmentation method based on uncertainty guidance and repair, wherein: the step S4 specifically includes the following steps:
s41: and in the decoder stage of the first network, each stage is subjected to up-sampling by a deconvolution network and spliced after reaching the same dimensionality as that of the next stage, and the like to obtain the multi-scale characteristic diagram.
S42: and inputting the uncertainty map of the first network and the multi-scale feature map obtained in S41 into an uncertainty repairing module to obtain a repaired pseudo label with clearer boundary.
S43: and (5) calculating the repaired pseudo label obtained in the step S42 by using a Dice function, supervising the segmentation probability map obtained in the step S31, and calculating to obtain the weighted uncertainty repair loss.
The weighted uncertainty repair loss calculated in step S4 can further improve the learning effect of the first network on the non-tag data.
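The stage-by-stage fusion of step S41 can be sketched numerically. In this illustrative sketch, nearest-neighbour 2x upsampling stands in for the deconvolution network, and the function name and array shapes are assumptions, not from the patent:

```python
import numpy as np

def fuse_multiscale(feats):
    """Fuse decoder feature maps, coarsest first: upsample the running
    result 2x (nearest neighbour, standing in for deconvolution) to the
    next stage's spatial size, then concatenate along channels."""
    x = feats[0]
    for nxt in feats[1:]:
        x = np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)  # 2x spatial upsample
        x = np.concatenate([x, nxt], axis=0)               # channel concatenation
    return x
```

With three decoder stages of shapes (2,4,4), (3,8,8), and (1,16,16), the fused multi-scale map has all channels at the finest resolution.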
Further, the above-mentioned semi-supervised OCT image retinal segmentation method based on uncertainty guidance and repair, wherein: in step S4, the uncertainty repair module consists of three structurally identical convolution-plus-pooling layers and a fourth layer with only a convolution. Each convolution layer uses 3x3 kernels and each pooling layer a 3x3 kernel. A residual structure is adopted: the input of every layer except the first is the concatenation of the previous layer's output and input. In each layer, image features are extracted by a 3x3 convolution followed by a normalization layer and a ReLU activation, and the image is compressed by a 3x3 max-pooling layer; repeating this several times yields an image segmentation mask with a sharper boundary.
The beneficial effects of the invention are as follows. First, considering the noise in OCT images and the influence of lesion areas, an attention-based picture enhancement method is proposed that improves the generalization of the model. Second, given the limitations of semi-supervised learning, the proposed uncertainty guidance accounts for model uncertainty and alleviates misleading pseudo labels. Finally, the uncertainty repair module improves the boundary segmentation precision of the pseudo labels and makes the model focus better on boundary regions.
Drawings
FIG. 1 is a schematic diagram of an overall network architecture;
FIG. 2 is a schematic diagram of the uncertainty repair module structure;
FIG. 3 is a comparison of the first network's results before and after the uncertainty repair module;
FIG. 4 is a graph of the results after attention picture enhancement;
FIG. 5 is a graph of the segmentation results of the reference model and the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the attached drawings:
the cross-teaching of the former semi-supervised learning framework (CTCT) using CNN/transform is implicit consistency regularization, which can produce pseudo-labels that are more stable and accurate than explicit consistency regularization. The framework benefits from two different learning paradigms, CNN focuses on local information, transformations models remote relations, so cross-teaching helps to learn a unified segmenter with both attributes simultaneously, and also proves effective in partial medical image segmentation. However, based on the data particularity of the task and the influence of uncertainty of the original network on the pseudo label are not considered, the invention provides a semi-supervised OCT image retina segmentation method based on uncertainty by taking the data particularity and the influence as a reference model, and the accuracy of retina segmentation can be better improved.
The embodiment of the invention comprises the following steps:
s1, an OCT image data set is processed by the following steps of 7:3 into a training set and a test set, wherein the training set consists of 10% of labeled images and 90% of unlabeled images, L in fig. 1 is a labeled image, and U is an unlabeled image. And (3) carrying out data preprocessing on the training set and the test set images through cutting, proportion adjustment and unified data size, and then inputting the training set into a model for training.
Because the originally collected OCT images differ in size, with the left half showing a scan-line schematic and the right half containing the required image, they cannot be input directly into the network for training. The fundus-image part is first cut away, and the remainder is then cropped and scaled to obtain image data of size 224 x 224.
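The crop-and-resize preprocessing might be sketched as below. The `crop_box` interface and the nearest-neighbour resize are assumptions; the patent does not specify the interpolation method:

```python
import numpy as np

def preprocess_oct(img, crop_box, size=224):
    """Crop a region from a raw OCT frame (dropping the scan-line half),
    then nearest-neighbour resize it to size x size (224x224 here).
    crop_box = (top, bottom, left, right)."""
    t, b, l, r = crop_box
    patch = img[t:b, l:r]
    h, w = patch.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return patch[np.ix_(rows, cols)]
```

The output always has the fixed 224x224 size expected by the networks, regardless of the raw frame size.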
S2, for the labeled data, the picture is attention-enhanced using the attention map produced by the first network to obtain a picture La, which is then fed into the networks again for training.
Semi-supervised training involves both labeled and unlabeled data; in this step the module acts only on the labeled data. The labeled data are input into the first network, a U-net model; after four convolution-and-downsampling stages a compressed feature map is obtained whose channels contain the information the network attends to. A 1x1 convolution over this feature map yields several candidate attention maps, and one is selected with sampling probability equal to the ratio of its mean to the sum of the means of all the attention maps. Six equally sized square blocks are then chosen at random in the selected attention map and their pixel values are set to 0. Multiplying the original image by this occluded attention map yields an enhanced picture with several black areas, which is fed again into the first and second networks, as shown in fig. 4. The losses are Dice losses, and the supervised loss function is the sum of the first and second supervision losses, expressed as follows:
L_sup = L_sup1 + L_sup2 + λ1 · (L_att1 + L_att2)
where L_sup1 and L_sup2 are the losses of the first network and the second network computed between the unenhanced labeled image L of fig. 1 and the label GT, whose sum is the first supervision loss; L_att1 and L_att2 are the losses of the first network and the second network computed between the enhanced picture La of fig. 1 and the label GT, whose sum is the second supervision loss; λ1 is a proportional weight.
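As an illustrative sketch (not the patent's exact implementation), the supervised loss combining Dice terms on the plain and attention-enhanced predictions might look as follows; the soft-Dice form and the weight `lam1=0.5` are assumptions:

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    # Soft Dice loss between a predicted probability map and a binary label.
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def supervised_loss(p1, p2, a1, a2, gt, lam1=0.5):
    # L_sup = L_sup1 + L_sup2 + lam1 * (L_att1 + L_att2):
    # p1/p2 are the two networks' outputs on the plain image L,
    # a1/a2 their outputs on the attention-enhanced image La.
    return (dice_loss(p1, gt) + dice_loss(p2, gt)
            + lam1 * (dice_loss(a1, gt) + dice_loss(a2, gt)))
```

A perfect prediction drives every Dice term, and hence the total supervised loss, to zero.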
This image enhancement targets the blurred retinal boundaries and structural discontinuities in OCT images; it increases the generalization of the model and improves its segmentation precision in these situations.
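A rough sketch of the attention-guided occlusion: one attention map is sampled with probability proportional to its mean, and six square regions are zeroed and applied to the image. Here the block positions are drawn uniformly at random and the occlusion is a binary mask multiplied onto the image; the patent instead zeroes the blocks inside the selected attention map itself, and the block size of 16 is an assumption:

```python
import numpy as np

def attention_occlude(img, att_maps, n_blocks=6, block=16, rng=None):
    """Sample an attention map in proportion to its mean, then zero
    n_blocks square regions of the image (simplified: uniform-random
    block positions, binary mask instead of the occluded attention map)."""
    rng = rng or np.random.default_rng(0)
    means = np.array([float(m.mean()) for m in att_maps])
    _selected = att_maps[rng.choice(len(att_maps), p=means / means.sum())]
    mask = np.ones_like(img)
    h, w = img.shape
    for _ in range(n_blocks):
        i = rng.integers(0, h - block)
        j = rng.integers(0, w - block)
        mask[i:i + block, j:j + block] = 0.0  # square black occlusion area
    return img * mask
```

The result is the enhanced picture La with several black areas that the networks see during the second supervised pass.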
S3, for the unlabeled data, to reduce pseudo-label error, cross supervision between the first network and the second network is adopted, assisted by uncertainty guidance.
In this step, the model's uncertainty is estimated with a technique based on Bayesian deep learning; the dropout function already present in the model approximates sampling of the model parameters. For each input, forward propagation is performed T times, with random noise added to the input each time, and the average probability of each pixel is computed as follows:
u_i = (1/T) · Σ_{t=1}^{T} p_i^t
where u_i is the average predicted probability value of the ith pixel and p_i^t is the predicted probability of the ith pixel in the tth prediction. The corresponding uncertainty map is then computed with the information-entropy formula:
U_i = − Σ_c u_{i,c} · log u_{i,c}
where c indexes the segmentation classes.
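A minimal sketch of this Monte-Carlo-dropout uncertainty estimate; the (T, C, H, W) array layout is an assumption:

```python
import numpy as np

def uncertainty_map(prob_stack, eps=1e-12):
    """prob_stack: (T, C, H, W) softmax outputs from T stochastic
    (dropout-enabled) forward passes. Returns the mean probability map
    u and the per-pixel predictive-entropy uncertainty map U."""
    u = prob_stack.mean(axis=0)             # u_i = (1/T) * sum_t p_i^t
    U = -(u * np.log(u + eps)).sum(axis=0)  # entropy over the class axis
    return u, U
```

Confident pixels (probability mass on one class) get near-zero entropy, while a 50/50 pixel gets the maximum two-class entropy log 2.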
the first network and the second network will calculate respective uncertainty maps U that will filter out unreliable predictions. Specifically, the prediction of the first network and the second network is used, and then the segmentation probability map is binarized based on the threshold value of 0.5 to obtain the corresponding image segmentation mask. And then, combining the obtained masks with the uncertainty graph to supervise the network output probability graph of the other side, wherein the specific calculation method comprises the following steps:
Figure BDA0003849043470000054
wherein, U i And H is a set uncertainty threshold value, and when the uncertainty value of the ith pixel is smaller than H, the 1 is returned, otherwise, the 0 is returned. y is i Predicting the probability, y, of the corresponding pixel of the result for the network i And dividing the corresponding value of the mask pixel for the opposite network. From this, a weighted uncertainty loss with uncertainty constraint, i.e. first network and second network, can be calculatedThe sum of the weighted uncertainty losses for both networks, then:
L unsup =λ 2 (L un1 +L un2 )
wherein L is un1 And L un2 Weighted uncertainty loss, λ, cross-computed for respective output results of a first network and a second network in a graph 2 Are proportional weights.
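The uncertainty-gated cross-supervision term can be sketched as follows; the threshold value `H=0.7` is an assumed hyperparameter:

```python
import numpy as np

def weighted_uncertainty_loss(prob, other_mask, U, H=0.7):
    """Uncertainty-gated MSE: only pixels whose uncertainty U is below
    the threshold H contribute. other_mask is the peer network's 0/1
    segmentation mask obtained by binarizing its probability map at 0.5."""
    keep = (U < H).astype(float)                 # indicator 1(U_i < H)
    num = (keep * (prob - other_mask) ** 2).sum()
    return num / (keep.sum() + 1e-12)            # average over kept pixels
```

When every pixel is highly uncertain the loss vanishes, so the networks never learn from unreliable pseudo labels.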
S4, multi-scale features are obtained through the first network and fed, together with the uncertainty map, into the uncertainty repair module to obtain the repaired pseudo label, which then supervises the output of the first network.
The uncertainty repair module uses pixels about which the network model is confident to repair adjacent uncertain pixels. In the decoder of the first network, the multi-scale information of the model reflects the picture better, so the feature map of each stage is upsampled by a deconvolution network to the same dimensions as the next stage and concatenated with it; proceeding stage by stage in this way yields a multi-scale feature map fusing the multi-scale information. The uncertainty map of the first network and the multi-scale feature map are then concatenated and input into the uncertainty repair module. As shown in fig. 2, the module consists of three structurally identical convolution-plus-pooling layers and a fourth layer with only a convolution; each convolution layer uses 3x3 kernels and each pooling layer a 3x3 kernel. A residual structure is adopted, the input of every layer except the first being the concatenation of the previous layer's output and input, and the module produces a pseudo label Ur with a sharper boundary.
The repaired pseudo label supervises the predicted probability map output by the first network, allowing the first network to learn further useful information. The weighted uncertainty repair loss between the pseudo label Ur of fig. 1 and the first network's predicted probability map is λ3 · L_reun, where λ3 is a proportional weight and the loss function is the Dice function.
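The four-layer repair head might be sketched in numpy as follows. Random weights, stride-1 "same" pooling, the global mean/std normalization standing in for a normalization layer, and the channel width of 8 are all assumptions filling in details the patent leaves unspecified:

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution; x: (Cin,H,W), w: (Cout,Cin,3,3)."""
    Cin, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for co in range(w.shape[0]):
        for di in range(3):
            for dj in range(3):
                out[co] += (w[co, :, di, dj][:, None, None]
                            * xp[:, di:di + H, dj:dj + W]).sum(axis=0)
    return out

def maxpool3(x):
    """'Same'-padded 3x3 max pooling with stride 1 (an assumption)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)), constant_values=-np.inf)
    return np.max(np.stack([xp[:, di:di + H, dj:dj + W]
                            for di in range(3) for dj in range(3)]), axis=0)

def repair_module(feat, U, rng=None):
    """Sketch of the repair head: input = features concatenated with the
    uncertainty map; layers 1-3 are conv -> norm -> ReLU -> maxpool with
    residual channel concatenation, layer 4 is a convolution only."""
    rng = rng or np.random.default_rng(0)
    x = np.concatenate([feat, U[None]], axis=0)
    for _ in range(3):
        w = rng.standard_normal((8, x.shape[0], 3, 3)) * 0.1  # random demo weights
        y = conv3x3(x, w)
        y = (y - y.mean()) / (y.std() + 1e-6)  # crude normalization stand-in
        y = np.maximum(y, 0.0)                 # ReLU
        y = maxpool3(y)
        x = np.concatenate([y, x], axis=0)     # residual: concat output + input
    w_out = rng.standard_normal((1, x.shape[0], 3, 3)) * 0.1
    return conv3x3(x, w_out)[0]                # repaired pseudo-label map
```

The sketch only demonstrates the data flow and channel bookkeeping; in the patent the weights are of course learned.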
S5, after each iteration the model is evaluated on the test set to obtain segmented OCT images; the test index is the Dice coefficient, and when the current Dice coefficient is higher than the previous iteration's result, the model parameters are saved.
To verify the beneficial effects of the invention: fig. 4 compares an original image with its attention-enhanced version from the training phase; most of the black patches fall in the retinal boundary region, which provides better regularization and improves the robustness of the model. In fig. 3, the third column is the first network's initial segmentation probability map, the fourth column the corresponding uncertainty map, and the fifth column the repaired segmentation probability map; the regions where the initial probability map differs from the label appear as dark areas in the uncertainty map, and the repaired probability map removes these differences well. Fig. 5 shows the segmentation results of the invention and the reference model; the invention's results are more complete, and it segments the boundaries better both in picture (e), where the boundary is unclear, and in picture (f), where lesions cause interference.
The above-described embodiments of the present invention do not limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (5)

1. A semi-supervised OCT image retina segmentation method based on uncertainty is characterized by comprising the following steps:
S1, dividing an OCT image data set into a training set and a test set;
preprocessing the training set images and test set images by cropping, rescaling, and unifying the data size, and then inputting the training set into a model for training;
S2, for the labeled data, obtaining an attention map from the feature map extracted by the first network, and then feeding the attention-enhanced image into the first network and the second network again for training;
S21, inputting the labeled data into the first network and the second network, and then computing a first supervision loss against the corresponding labels using a Dice function;
S22, selecting the feature map obtained after four downsampling stages of the first network, generating a plurality of attention maps by a dimension-reduction operation, and then randomly selecting one attention map;
S23, randomly generating a plurality of square black occlusion areas on the attention map selected in step S22 and superimposing them on the original image to obtain an enhanced picture;
S24, inputting the enhanced picture into the first network and the second network, and then computing a second supervision loss against the corresponding labels using the Dice function;
combining the first supervision loss and the second supervision loss to obtain the total supervised loss function;
S3, for the unlabeled data, adopting cross supervision between the first network and the second network, assisted by uncertainty guidance;
S31, inputting the unlabeled data into the first network and the second network to obtain image segmentation probability maps, and then binarizing the probability maps at a threshold to obtain the corresponding image segmentation masks;
S32, with the dropout function enabled, the first network and the second network performing multiple random forward predictions to obtain a plurality of image segmentation probability maps;
computing an average probability map, and then computing the corresponding uncertainty map with the information-entropy formula;
S33, combining the uncertainty map with the MSE function, the image segmentation mask of the second network guiding the image segmentation probability map of the first network and the image segmentation mask of the first network guiding the image segmentation probability map of the second network, and computing the weighted uncertainty loss;
S4, obtaining multi-scale features and an uncertainty map through the first network, feeding them together into an uncertainty repair module to obtain repaired pseudo labels, which then supervise the output of the first network, and computing the weighted uncertainty repair loss;
S41, in the decoder of the first network, upsampling the feature map of each stage by a deconvolution network to the same dimensions as the next stage and concatenating them, thereby obtaining a multi-scale feature map;
S42, inputting the uncertainty map and the multi-scale feature map of the first network into the uncertainty repair module to obtain a repaired pseudo label with a sharper boundary;
S43, using the repaired pseudo label with the Dice function to supervise the segmentation probability map obtained in step S31, and computing the weighted uncertainty repair loss;
S5, evaluating on the test set after each iteration to obtain segmented OCT images, wherein the test index is the Dice coefficient, and when the current Dice coefficient is higher than the previous iteration's result, the model parameters are saved;
the first network is a U-net network model, and the second network is a Swin-Unet network model.
2. The semi-supervised OCT image retinal segmentation method based on uncertainty as recited in claim 1, wherein: in step S1, the training set consists of 10% labeled images and 90% unlabeled images.
3. The semi-supervised OCT image retinal segmentation method based on uncertainty as recited in claim 1, wherein: in step S4, the uncertainty repair module consists of three structurally identical convolution-plus-pooling layers and a fourth layer with only a convolution;
each convolution layer consists of 3x3 kernels and each pooling layer of a 3x3 kernel; a residual structure is adopted, the input of every layer except the first being the concatenation of the previous layer's output and input; in each layer, image features are extracted by a 3x3 convolution followed by a normalization layer and a ReLU activation, and the image is then compressed by a 3x3 max-pooling layer; repeating this several times yields an image segmentation mask with a sharper boundary.
4. The semi-supervised OCT image retinal segmentation method based on uncertainty as recited in claim 1, wherein: in step S22, an attention map is selected with sampling probability equal to the ratio of its mean value to the sum of the mean values of all the attention maps.
5. The semi-supervised OCT image retinal segmentation method based on uncertainty as recited in claim 1, wherein: in step S32, multiple random forward predictions are performed, with random noise added to the input data each time.
CN202211128245.3A 2022-09-16 2022-09-16 Semi-supervised OCT image retina segmentation method based on uncertainty Pending CN115512108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211128245.3A CN115512108A (en) 2022-09-16 2022-09-16 Semi-supervised OCT image retina segmentation method based on uncertainty

Publications (1)

Publication Number Publication Date
CN115512108A true CN115512108A (en) 2022-12-23

Family

ID=84503312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211128245.3A Pending CN115512108A (en) 2022-09-16 2022-09-16 Semi-supervised OCT image retina segmentation method based on uncertainty

Country Status (1)

Country Link
CN (1) CN115512108A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402838A (en) * 2023-06-08 2023-07-07 吉林大学 Semi-supervised image segmentation method and system for intracranial hemorrhage
CN116402838B (en) * 2023-06-08 2023-09-15 吉林大学 Semi-supervised image segmentation method and system for intracranial hemorrhage

Similar Documents

Publication Publication Date Title
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN116664605B (en) Medical image tumor segmentation method based on diffusion model and multi-mode fusion
CN114170088A (en) Relational reinforcement learning system and method based on graph structure data
CN115512108A (en) Semi-supervised OCT image retina segmentation method based on uncertainty
CN111814644A (en) Video abnormal event detection method based on disturbance visual interpretation
Nakao et al. Selective super-resolution for scene text images
CN116258652A (en) Text image restoration model and method based on structure attention and text perception
Tripathi et al. Denoising of magnetic resonance images using discriminative learning-based deep convolutional neural network
Kim et al. Infrared and visible image fusion using a guiding network to leverage perceptual similarity
Wu et al. Multi-focus image fusion: Transformer and shallow feature attention matters
CN117523203A (en) Image segmentation and recognition method for honeycomb lung disease kitchen based on transducer semi-supervised algorithm
CN117408924A (en) Low-light image enhancement method based on multiple semantic feature fusion network
Song et al. Deep semantic-aware remote sensing image deblurring
CN115100731B (en) Quality evaluation model training method and device, electronic equipment and storage medium
CN116563285A (en) Focus characteristic identifying and dividing method and system based on full neural network
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN116310394A (en) Saliency target detection method and device
Liu et al. Dual UNet low-light image enhancement network based on attention mechanism
CN114862803A (en) Industrial image anomaly detection method based on fast Fourier convolution
CN114862696A (en) Facial image restoration method based on contour and semantic guidance
Tliba et al. Deep-based quality assessment of medical images through domain adaptation
Chen et al. HINT: High-quality INpainting Transformer with Mask-Aware Encoding and Enhanced Attention
Du et al. License plate super-resolution reconstruction based on improved ESPCN network
Jothi Lakshmi et al. Image SR-based NLM and DCNN improved IBP with cubic B-spline
Wang et al. Underwater image enhancement by combining multi-attention with recurrent residual convolutional U-Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination