CN115511795A - Medical image segmentation method based on semi-supervised learning - Google Patents

Medical image segmentation method based on semi-supervised learning

Info

Publication number
CN115511795A
Authority
CN
China
Prior art keywords
segmentation
network
training
trained
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211088281.1A
Other languages
Chinese (zh)
Inventor
吴俊
沈博
张瀚文
何明鑫
刘洋
何贵青
蒋晓悦
夏召强
谢红梅
李会方
冯晓毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202211088281.1A priority Critical patent/CN115511795A/en
Publication of CN115511795A publication Critical patent/CN115511795A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a medical image segmentation method based on semi-supervised learning, which comprises the following steps. Step one: pre-training; a fine repair network is first pre-trained on a first original medical image set to obtain a trained fine repair network. Step two (fine tuning): the encoder of the trained fine repair network is combined with a randomly initialized decoder to obtain a segmentation network; a segmentation data set is obtained by manually labeling a second original medical image set; the segmentation network is trained with the segmentation data set to obtain a trained segmentation network. Step three: the trained segmentation network is used as a teacher model for self-training to obtain a trained student model. Step four: the image to be segmented is input into the trained student model to obtain the segmentation result. By exploiting a large amount of easily obtained raw data, the method solves the technical problem of low segmentation accuracy caused by insufficient labeled data for training deep learning segmentation models.

Description

Medical image segmentation method based on semi-supervised learning
Technical Field
The invention belongs to the technical field of medical image segmentation, and relates to a medical image segmentation method based on semi-supervised learning.
Background
The invention relates to medical image segmentation and semi-supervised learning algorithms; the related background comprises the following two parts:
1) Medical image segmentation
Recent research on medical image segmentation mainly focuses on improving segmentation models; CS-Net, CE-Net, MDACN and the like all concentrate on designing multi-scale information fusion modules to improve segmentation performance. However, further improvement of model capability is still restricted by the shortage of labeled medical data, and no image segmentation method that exploits unlabeled data has yet been proposed in this line of work.
CS-Net is a general segmentation network for curvilinear structures that is applicable to different medical imaging modalities: optical coherence tomography angiography (OCT-A), color fundus images, and corneal confocal microscopy (CCM). The network introduces a self-attention mechanism into the encoder and decoder of a U-Net-based convolutional neural network, and uses two types of attention modules, spatial attention and channel attention, to adaptively combine local features with their global dependencies.
CE-Net is a context encoder network that captures higher-level information and preserves spatial information for two-dimensional medical image segmentation. CE-Net mainly comprises three components: a feature encoder module, a context extractor module, and a feature decoder module. The context extractor consists of a newly proposed dense atrous convolution (DAC) block and a residual multi-kernel pooling (RMP) block.
MDACN is a multi-discriminator adversarial convolutional network in which both the generator and the two discriminators emphasize multi-scale feature representation. The generator is a U-shaped fully convolutional network equipped with multi-scale split-and-concatenate blocks, and the two discriminators have different effective receptive fields and are therefore sensitive to features of different scales.
2) Semi-supervised learning algorithm
In real-world medical scenarios, unlabeled data are relatively easy to obtain, whereas labeled data are often difficult to collect and are time-consuming and laborious to annotate. In this setting, semi-supervised learning, which requires only a small amount of labeled samples together with a large amount of unlabeled samples as training data, is better suited to real scenarios and has recently become a new direction in deep learning.
Mean Teacher is a method that averages model weights rather than label predictions. In addition, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling.
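For illustration only (this is not part of the claimed method), the exponential moving average update behind Mean Teacher can be sketched in PyTorch as follows; the decay value of 0.99 is an assumed example, not a value taken from the original work.

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, decay=0.99):
    """Mean Teacher update: the teacher's weights are an exponential moving
    average of the student's weights, rather than an average of predicted
    labels. Assumes teacher and student share the same architecture."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
```

Calling update_teacher after every optimizer step keeps the teacher a smoothed copy of the student.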
Uncertainty-Aware Self-Ensembling is an uncertainty-aware semi-supervised framework for segmenting the left atrium from 3D MR images. It makes effective use of unlabeled data by encouraging consistent predictions for the same input under different perturbations. Specifically, the framework consists of a student model and a teacher model, and the student model learns from the teacher model by minimizing a segmentation loss on labeled data and a consistency loss with respect to the teacher model's targets. The method designs an uncertainty-aware scheme so that the student model gradually learns from meaningful and reliable targets by exploiting the uncertainty information.
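Purely as an illustrative sketch, the per-pixel uncertainty that gates the consistency targets is often estimated with Monte Carlo dropout; the snippet below assumes a segmentation model containing dropout layers, eight stochastic forward passes, and predictive entropy as the uncertainty measure, none of which are claimed to match the original framework.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, x, passes=8):
    """Run several stochastic forward passes with dropout kept active and use
    the entropy of the mean softmax prediction as a per-pixel uncertainty map."""
    model.train()  # keep dropout layers active during inference
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    mean_prob = probs.mean(dim=0)                                    # (N, C, H, W)
    entropy = -(mean_prob * torch.log(mean_prob + 1e-8)).sum(dim=1)  # (N, H, W)
    return mean_prob, entropy
```

Pixels with high entropy would then be down-weighted in the consistency loss.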
Cross Pseudo Supervision is a consistency-regularization method in which the same input image is fed to two segmentation networks with different initialization perturbations, and consistency is enforced between them. The pseudo one-hot label map output by one perturbed segmentation network supervises the other segmentation network with a standard cross-entropy loss, and vice versa.
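A minimal sketch of the cross pseudo-supervision loss is given below; the two networks net_a and net_b, the hard per-pixel pseudo labels, and the equal weighting of the two terms are illustrative assumptions rather than the exact published configuration.

```python
import torch
import torch.nn.functional as F

def cps_loss(net_a, net_b, x):
    """Each segmentation network is supervised by the hard pseudo label map
    produced by the other network, using standard cross-entropy."""
    logits_a, logits_b = net_a(x), net_b(x)          # (N, C, H, W) class logits
    pseudo_a = logits_a.argmax(dim=1).detach()       # pseudo label map from net A
    pseudo_b = logits_b.argmax(dim=1).detach()       # pseudo label map from net B
    return F.cross_entropy(logits_a, pseudo_b) + F.cross_entropy(logits_b, pseudo_a)
```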
Deep Adversarial Networks is a deep adversarial network model for biomedical image segmentation that aims to obtain consistently good segmentation results on both annotated and unannotated images. The model consists of two networks: (1) a segmentation network (SN) that performs segmentation; and (2) an evaluation network (EN) that evaluates segmentation quality. During training, the EN is encouraged to distinguish the segmentation results of unlabeled images from those of labeled images (by assigning them different scores), while the SN is encouraged to produce segmentation results for unlabeled images such that the EN cannot tell them apart from those of labeled images. Through this iterative adversarial training process, because the EN continuously 'criticizes' the segmentation results of unlabeled images, the SN is trained to produce increasingly accurate segmentations on unlabeled samples.
In summary, research on medical image segmentation at home and abroad has made certain progress, but the technical problem of insufficient annotated data for training deep learning segmentation models remains and restricts further improvement of model performance. Producing a single accurate annotation takes a professional physician ten minutes to half an hour and suffers from strong subjectivity, so it is very difficult to solve the shortage of labeled data by physician annotation alone.
Disclosure of Invention
The invention aims to provide a medical image segmentation method based on semi-supervised learning, so as to solve the technical problem of low segmentation accuracy caused by insufficient annotated data for existing deep learning segmentation models.
In order to achieve the purpose, the invention adopts the following technical scheme to solve the problem:
a medical image segmentation method based on semi-supervised learning comprises the following steps:
Step one: pre-training, comprising the following substeps:
Step 1: taking the first original medical image set as a repair data set, randomly covering part of each image in the repair data set, and inputting the covered images into a coarse repair network for training to obtain a coarse repair feature map;
Step 2: inputting the coarse repair feature map obtained in step 1 into the encoder of a fine repair network, and, after the fine repair network is trained, obtaining a fine repair feature map and a trained fine repair network;
Step 3: inputting the fine repair feature map and the randomly covered position data of the images in the repair data set of step 1 into a discriminator to obtain the output result of the discriminator;
Step 4: feeding the output result of the discriminator back into the fine repair network for training to obtain the trained fine repair network;
Step two: fine tuning: combining the encoder of the trained fine repair network obtained in step one with a randomly initialized decoder to obtain a segmentation network; obtaining a segmentation data set by manually labeling the second original medical image set; inputting the segmentation data set into the segmentation network, outputting segmentation results, and feeding the segmentation results and the labels in the segmentation data set back into the segmentation network for training to obtain a trained segmentation network;
Step three: self-training with a semi-supervised learning algorithm: taking the trained segmentation network obtained in step two as a teacher model, using the teacher model to generate pseudo labels for the first original medical image set of step one, and combining the pseudo labels with the segmentation data set to obtain a new training set; applying data enhancement to the new training set and inputting it into a student model for training to obtain a trained student model;
Step four: inputting the medical image to be segmented into the student model trained in step three to obtain the image segmentation result.
Further, in the step one, the coarse repair network employs gated convolution.
Further, in the first step, the fine repair network uses a typical deep learning segmentation network.
Further, in the second step, when the segmentation network is trained, the weighted sum of the Dice loss and the binary cross-entropy loss, L = α·l_Dice + (1 - α)·l_BCE, is taken as the final loss function, where α = 0.5.
Further, in the third step, the student model adopts a typical deep learning segmentation network.
Further, in the third step, the semi-supervised learning algorithm adopts the ST++ algorithm, Mean Teacher, or Noisy Student.
Compared with the prior art, the invention has the following technical effects:
1) Because the raw data of medical images are relatively easy to obtain, the method of the invention uses a large amount of easily obtained raw data to improve model performance, solving the technical problem of low segmentation accuracy caused by insufficient labeled data for training existing deep learning segmentation models, while also freeing up physicians' time.
2) Unlabeled data can be used effectively to improve model performance, and the more unlabeled data are available, the better the model performs, until an upper bound on the performance improvement is reached.
3) The requirement on computing resources is modest; only an ordinary GPU graphics card is needed.
4) The coupling among the pre-training, the self-training, and the segmentation network is weak, so any one of them can be replaced by a more efficient method to further improve performance.
5) The method of the invention has strong universality and can be used on other medical image data sets.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of a pre-trained model in the method of the present invention.
The invention is further explained below with reference to the drawings and the description of embodiments.
Detailed Description
The medical image segmentation method based on semi-supervised learning, as shown in fig. 1, includes the following steps:
Step one: pre-training; as shown in fig. 2, this step comprises the following substeps:
Step 1: taking a first original medical image set (unlabeled data) as a repair data set, randomly covering part of each image in the repair data set, and inputting the covered images into a coarse repair network for training to obtain a coarse repair feature map; preferably, the coarse repair network employs gated convolution.
This step trains the coarse repair network to predict the low-frequency information of the covered pixels (such as the color and texture of the background).
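For illustration only, one common formulation of a gated convolution layer is sketched below in PyTorch; the kernel settings and the ELU activation are assumptions and are not claimed to be the specific configuration used by the coarse repair network.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned soft gate,
    which lets the network treat covered (masked) pixels differently from
    valid ones during repair."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))
```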
Step 2: inputting the coarse repair feature map obtained in step 1 into the encoder of a fine repair network, and, after the fine repair network is trained, obtaining a fine repair feature map and a trained fine repair network; preferably, the fine repair network uses a typical deep learning segmentation network such as U-Net, UNet++, CE-Net, or CS-Net.
The fine repair network reconstructs more detailed information of the missing pixels on the basis of the coarse repair result.
Step 3: inputting the fine repair feature map and the randomly covered position data of the images in the repair data set of step 1 into a discriminator to obtain the output result of the discriminator.
The discriminator judges whether the predicted image is reasonable and back-propagates the adversarial loss.
Step 4: feeding the output result of the discriminator back into the fine repair network (that is, into the decoder of the fine repair network) for training, to obtain the trained fine repair network.
Through step one, the network model thus learns relevant information from the related medical images before formal training, which gives the model a good initial value.
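Purely as an illustrative sketch of one pre-training iteration of step one, the coarse repair network, the fine repair network, and the discriminator could be wired together as follows; the network interfaces, the hinge adversarial loss, the L1 reconstruction terms, and the loss weighting are assumptions rather than the claimed configuration.

```python
import torch
import torch.nn.functional as F

def pretrain_step(coarse_net, fine_net, discriminator, image, mask,
                  opt_g, opt_d, adv_weight=0.1):
    """One inpainting pre-training iteration: coarse repair of the randomly
    covered image, fine repair, then adversarial feedback from a discriminator
    that also receives the covered-position data (the mask)."""
    covered = image * (1.0 - mask)                       # mask == 1 at covered pixels
    coarse = coarse_net(torch.cat([covered, mask], dim=1))
    fine = fine_net(torch.cat([coarse, mask], dim=1))

    # Discriminator update: real image versus the detached fine repair result.
    d_real = discriminator(image, mask)
    d_fake = discriminator(fine.detach(), mask)
    loss_d = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()  # hinge loss (assumed)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: reconstruction losses plus the adversarial term.
    loss_rec = F.l1_loss(coarse, image) + F.l1_loss(fine, image)
    loss_adv = -discriminator(fine, mask).mean()
    loss_g = loss_rec + adv_weight * loss_adv
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```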
Step two: fine tuning: combining the encoder of the trained fine repair network obtained in step one with a randomly initialized decoder to obtain a segmentation network; manually labeling the second original medical image set to obtain a segmentation data set; and inputting the segmentation data set into the segmentation network, outputting segmentation results, and feeding the segmentation results and the labels in the segmentation data set back into the segmentation network for training, to obtain a trained segmentation network.
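A hedged sketch of this fine-tuning construction is given below; the encoder attribute name, the absence of skip connections, and the decoder_factory callable are illustrative assumptions, not the exact implementation.

```python
import copy
import torch.nn as nn

def build_segmentation_network(trained_fine_net, decoder_factory):
    """Combine the pre-trained encoder of the fine repair network with a
    randomly initialized decoder to form the segmentation network."""
    encoder = copy.deepcopy(trained_fine_net.encoder)  # keeps the pre-trained weights
    decoder = decoder_factory()                        # fresh, randomly initialized decoder
    return nn.Sequential(encoder, decoder)
```

The resulting network is then trained on the manually labeled segmentation data set as described above.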
Step three: self-training with a semi-supervised learning algorithm: taking the trained segmentation network obtained in step two as a teacher model, using the teacher model to generate pseudo labels for the first original medical image set of step one, and combining the pseudo labels with the segmentation data set to obtain a new training set; and applying data enhancement to the new training set and inputting it into a student model for training, to obtain a trained student model.
Preferably, the student model adopts a typical deep learning segmentation network such as U-Net, UNet++, CE-Net, or CS-Net; the data enhancement uses strong data augmentation; and the semi-supervised learning algorithm adopts the ST++ algorithm, Mean Teacher, or Noisy Student.
In this step, data enhancement of the new training set helps the student model surpass the teacher model.
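A minimal sketch of the pseudo-label generation at the start of step three is shown below; the data loader interface and the use of hard per-pixel pseudo labels without confidence filtering are assumptions made only for illustration.

```python
import torch

@torch.no_grad()
def generate_pseudo_labels(teacher, unlabeled_loader, device="cuda"):
    """The trained segmentation network (teacher model) produces pseudo labels
    for the unlabeled first original medical image set."""
    teacher.eval()
    pseudo_set = []
    for images in unlabeled_loader:                  # batches of unlabeled images
        images = images.to(device)
        preds = teacher(images).argmax(dim=1)        # hard pseudo label per pixel
        pseudo_set += list(zip(images.cpu(), preds.cpu()))
    return pseudo_set
```

The resulting image/pseudo-label pairs would then be merged with the labeled segmentation data set and strongly augmented before the student model is trained.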
Step four: and inputting the medical image to be segmented into the student model trained in the third step to obtain an image segmentation result.
Example (b):
1) Algorithm setting
The algorithm of the present invention is implemented in PyTorch. The coarse repair network employs gated convolution. The fine repair network is flexible; U-Net or UNet++ may be used. The self-training phase uses the ST++ algorithm. The experiments run on an NVIDIA GeForce GTX 1080 Ti GPU.
2) Model training
As shown in fig. 1, taking corneal confocal microscopy images, a common type of medical image, as an example, the corneal confocal microscopy images and the corresponding small amount of manual annotation information are taken as input. In the pre-training phase, masks of random shapes are used (20 on average), with 1500 unlabeled images and 50 labeled images. When the segmentation network is trained, the weighted sum of the Dice loss and the binary cross-entropy loss, L = α·l_Dice + (1 - α)·l_BCE, is taken as the final loss function, where α = 0.5. In the pre-training process, the batch size is 4 and 200 rounds are trained. In the training process, an SGD optimizer is used, with a batch size of 4 and 1000 training rounds.
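A sketch of the weighted Dice plus binary cross-entropy loss described above is shown below, assuming a single-channel logit output for two-class segmentation; the smoothing constant eps is an added assumption.

```python
import torch
import torch.nn.functional as F

def combined_loss(pred_logits, target, alpha=0.5, eps=1e-6):
    """L = alpha * l_Dice + (1 - alpha) * l_BCE, with alpha = 0.5.
    pred_logits and target have shape (N, 1, H, W); target is a float mask in {0, 1}."""
    prob = torch.sigmoid(pred_logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps)
    bce = F.binary_cross_entropy_with_logits(pred_logits, target)
    return alpha * dice.mean() + (1.0 - alpha) * bce
```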
3) Model use
After model training is complete, the corneal confocal microscopy image to be segmented is input into the model for testing, and the image segmentation result is obtained.
To demonstrate the feasibility and effectiveness of the method of the invention, the following tests were carried out:
1. Effectiveness verification experiment of image reconstruction pre-training
In order to verify the effectiveness of the pre-training, an ablation experiment is used to verify each module. In the experiment, the first original image data set consists of 1500 images selected from the CORN corneal nerve data set. The second original image data set consists of 50 images of the CORN data set outside the first set, which are annotated at the pixel level.
Table 1 shows the performance of the model on the CCM30 data set after the coarse repair network and the discriminator are added, respectively, with UNet as the baseline for comparison; it also compares against a pre-training method that uses super-resolution reconstruction as the proxy task.
TABLE 1
(Table 1 is provided as an image in the original publication.)
The experimental results show that segmentation performance is better with pre-training than without it, even when the pre-training lacks some modules. Moreover, the pre-training effect is best when all of the proposed modules are used, an improvement of 2 percent over the baseline. Learning the image information in advance during pre-training gives the model a better initial value, which facilitates model training and ultimately yields better performance.
2. Validation experiment for self-training
In order to verify the effectiveness of the introduced self-training, an ablation experiment is used to verify each module. Table 2 shows the performance of the models on the CCM30 data set after the ST method and the ST++ method, respectively, are used as the self-training method, with U-Net as the basis for comparison.
TABLE 2
(Table 2 is provided as an image in the original publication.)
The experimental results show that self-training improves over the baseline. When pre-training and the ST++ method are used together, the improvement over the baseline reaches 6 percentage points. Self-training corrects the model's classification boundary using unlabeled data so that it conforms to the cluster assumption and the smoothness assumption; that is, the classification boundary becomes more reasonable, which improves segmentation performance.

Claims (6)

1. A medical image segmentation method based on semi-supervised learning is characterized by comprising the following steps:
Step one: pre-training, comprising the following substeps:
Step 1: taking the first original medical image set as a repair data set, randomly covering part of each image in the repair data set, and inputting the covered images into a coarse repair network for training to obtain a coarse repair feature map;
Step 2: inputting the coarse repair feature map obtained in step 1 into the encoder of a fine repair network, and, after the fine repair network is trained, obtaining a fine repair feature map and a trained fine repair network;
Step 3: inputting the fine repair feature map and the randomly covered position data of the images in the repair data set of step 1 into a discriminator to obtain the output result of the discriminator;
Step 4: feeding the output result of the discriminator back into the fine repair network for training to obtain the trained fine repair network;
Step two: fine tuning: combining the encoder of the trained fine repair network obtained in step one with a randomly initialized decoder to obtain a segmentation network; obtaining a segmentation data set by manually labeling the second original medical image set; inputting the segmentation data set into the segmentation network, outputting segmentation results, and feeding the segmentation results and the labels in the segmentation data set back into the segmentation network for training to obtain a trained segmentation network;
Step three: self-training with a semi-supervised learning algorithm: taking the trained segmentation network obtained in step two as a teacher model, using the teacher model to generate pseudo labels for the first original medical image set of step one, and combining the pseudo labels with the segmentation data set to obtain a new training set; and taking a randomly initialized segmentation network as a student model, applying data enhancement to the new training set, and inputting it into the student model for training to obtain a trained student model;
Step four: inputting the medical image to be segmented into the student model trained in step three to obtain the image segmentation result.
2. The semi-supervised learning based medical image segmentation method of claim 1, wherein in the first step, the coarse repair network employs gated convolution.
3. The medical image segmentation method based on semi-supervised learning of claim 1, wherein in the first step, the fine repair network uses a deep learning segmentation network.
4. The medical image segmentation method based on semi-supervised learning as claimed in claim 1, wherein in the second step, when the segmentation network is trained, the weighted sum of the Dice loss and the binary cross-entropy loss, L = α·l_Dice + (1 - α)·l_BCE, is taken as the final loss function, where α = 0.5.
5. The medical image segmentation method based on semi-supervised learning as recited in claim 1, wherein in step three, the student model adopts a deep learning segmentation network.
6. The medical image segmentation method based on semi-supervised learning as claimed in claim 1, wherein in the third step, the semi-supervised learning algorithm adopts the ST++ algorithm, Mean Teacher, or Noisy Student.
CN202211088281.1A 2022-09-07 2022-09-07 Medical image segmentation method based on semi-supervised learning Pending CN115511795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211088281.1A CN115511795A (en) 2022-09-07 2022-09-07 Medical image segmentation method based on semi-supervised learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211088281.1A CN115511795A (en) 2022-09-07 2022-09-07 Medical image segmentation method based on semi-supervised learning

Publications (1)

Publication Number Publication Date
CN115511795A true CN115511795A (en) 2022-12-23

Family

ID=84504486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211088281.1A Pending CN115511795A (en) 2022-09-07 2022-09-07 Medical image segmentation method based on semi-supervised learning

Country Status (1)

Country Link
CN (1) CN115511795A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091773A (en) * 2023-02-02 2023-05-09 北京百度网讯科技有限公司 Training method of image segmentation model, image segmentation method and device
CN116091773B (en) * 2023-02-02 2024-04-05 北京百度网讯科技有限公司 Training method of image segmentation model, image segmentation method and device
CN116468746A (en) * 2023-03-27 2023-07-21 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN116468746B (en) * 2023-03-27 2023-12-26 华东师范大学 Bidirectional copy-paste semi-supervised medical image segmentation method
CN117095014A (en) * 2023-10-17 2023-11-21 四川大学 Semi-supervised medical image segmentation method, system, equipment and medium
CN117152168A (en) * 2023-10-31 2023-12-01 山东科技大学 Medical image segmentation method based on frequency band decomposition and deep learning
CN117152168B (en) * 2023-10-31 2024-02-09 山东科技大学 Medical image segmentation method based on frequency band decomposition and deep learning

Similar Documents

Publication Publication Date Title
CN115511795A (en) Medical image segmentation method based on semi-supervised learning
Zhao et al. Supervised segmentation of un-annotated retinal fundus images by synthesis
CN112685597B (en) Weak supervision video clip retrieval method and system based on erasure mechanism
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
You et al. Fundus image enhancement method based on CycleGAN
CN113191969A (en) Unsupervised image rain removing method based on attention confrontation generation network
CN111598842A (en) Method and system for generating model of insulator defect sample and storage medium
CN112365556B (en) Image extension method based on perception loss and style loss
CN111161272A (en) Embryo tissue segmentation method based on generation of confrontation network
CN112233199A (en) fMRI visual reconstruction method based on discrete characterization and conditional autoregression
Dong et al. Semi-supervised domain alignment learning for single image dehazing
Nair et al. T2V-DDPM: Thermal to visible face translation using denoising diffusion probabilistic models
Zhong et al. Multi-scale attention generative adversarial network for medical image enhancement
Ruan et al. An efficient tongue segmentation model based on u-net framework
Hoang et al. Transer: Hybrid model and ensemble-based sequential learning for non-homogenous dehazing
CN105069767B (en) Based on the embedded Image Super-resolution reconstructing method of representative learning and neighborhood constraint
CN112541566B (en) Image translation method based on reconstruction loss
Wang et al. Blind tone-mapped image quality assessment and enhancement via disentangled representation learning
CN116993639A (en) Visible light and infrared image fusion method based on structural re-parameterization
CN116092667A (en) Disease detection method, system, device and storage medium based on multi-mode images
Luo et al. Infrared and visible image fusion based on VPDE model and VGG network
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN115239943A (en) Training method of image correction model and color correction method of slice image
CN115115900A (en) Training method, device, equipment, medium and program product of image reconstruction model
Zhu et al. HDRD-Net: High-resolution detail-recovering image deraining network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination