CN112837338B - Semi-supervised medical image segmentation method based on generative adversarial network - Google Patents

Semi-supervised medical image segmentation method based on generative adversarial network

Info

Publication number
CN112837338B
CN112837338B (Application CN202110045408.0A)
Authority
CN
China
Prior art keywords
medical image
semi
supervised
discriminator
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110045408.0A
Other languages
Chinese (zh)
Other versions
CN112837338A (en)
Inventor
刘华锋
李楚晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110045408.0A priority Critical patent/CN112837338B/en
Publication of CN112837338A publication Critical patent/CN112837338A/en
Application granted granted Critical
Publication of CN112837338B publication Critical patent/CN112837338B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/194 — Segmentation; edge detection involving foreground-background segmentation
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 7/0012 — Image analysis; biomedical image inspection
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30004 — Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a semi-supervised medical image segmentation method based on a generative adversarial network, comprising the following steps: (1) constructing a segmentation generator network and a discriminator network; (2) adding a self-learning constraint based on cross entropy; (3) adding a data-distribution consistency constraint; (4) segmenting images at the original pixel size. By building a generative adversarial learning system on top of a deep learning network, the invention carries out the semi-supervised segmentation task in a self-learning manner: the generator produces a probability feature map for unlabeled data, and the discriminator outputs an indicator that determines the reliability of that feature map. The cross-entropy-based self-learning constraint and the data-distribution consistency constraint respectively improve the utilization of unlabeled data and the distributional consistency between real and generated labels. The method alleviates, to a certain extent, the dependence of traditional fully supervised methods on large annotation volumes, and thus better supports intelligent medical analysis and diagnosis when only a small number of annotations are available.

Description

Semi-supervised medical image segmentation method based on a generative adversarial network
Technical Field
The invention belongs to the technical field of biomedical image segmentation, and in particular relates to a semi-supervised medical image segmentation method based on a generative adversarial network.
Background
Medical image segmentation is the task of isolating pixel-level target regions from background information, and it plays a crucial role in medical image analysis. To better address the segmentation task, many methods have been proposed; despite the diversity of model architectures, they all follow a common procedure: collect data, annotate target regions, and optimize model parameters in a fully supervised manner. In this procedure, the quantity of training images and the quality of image annotations are the core factors determining final performance: the more annotated training data, the better the results. For medical images, however, collecting and annotating semantic segmentation datasets is very expensive. First, owing to limited medical resources and ethical constraints, it is difficult to obtain a large number of training images; furthermore, the annotation process is harder still because annotators must have expert knowledge. This raises a natural question: how can the algorithm be improved without enough annotations? Semi-supervised learning methods provide a key to this problem.
Semi-supervised learning means that only a small amount of annotated training data is available, but a large amount of unlabeled data can be used as an aid to improve performance. These methods fall into three main categories: graph-based methods, co-training methods, and self-learning methods.
Using a graph as a prior on data correlation is a more traditional approach; a nonparametric Bayesian Gaussian random field can be designed to segment the image, so that graph-based methods can better estimate the data distribution. In such methods, samples and their similarities are treated as the nodes and edges of a graph, respectively, and connected nodes are likely to share the same label. However, matching the labels of all graph nodes requires multiple rounds of loop matching, which makes the computation cumbersome.
With the popularity of deep learning, co-training and self-training have gradually become the mainstream modes of semi-supervised learning. Co-training generally uses data from different data domains to assist each other's training, and complementary structures of different views or image features can serve as the assisting content.
Unlike co-training, which requires a large amount of data from different domains, self-learning needs only data from its own domain, which greatly reduces the data requirement. The classifier first performs conventional supervised learning with the labeled data, and the remaining unlabeled data can then assist training via traditional algorithms, teacher-student mutual-learning networks, reconstruction assistance, or adversarial generative learning. A representative state-of-the-art method uses an attention-based generative network: unlabeled data is fed into a confidence network to obtain a feature map, and the parts of the feature map that meet a condition are extracted for the self-learning process. However, because only part of the feature map is selected, global information is not considered, which is likely to mislead the network's learning.
How to use a small number of labels while still absorbing enough global information is a major open problem.
Disclosure of Invention
In view of the above, the present invention provides a semi-supervised medical image segmentation method based on a generative adversarial network, which generates corresponding labels for unlabeled data through a generator, performs a confidence decision on the data through a discriminator, and reduces the distributional inconsistency between generated data and real data by using a feature-consistency constraint.
A semi-supervised medical image segmentation method based on a generative adversarial network comprises the following steps:
(1) separating the medical images into unlabeled and labeled sets, and normalizing all images to the same size;
(2) inputting the labeled medical images into a generator based on a deep learning neural network to extract feature information, and outputting the corresponding feature probability maps;
(3) training the generator's neural network under a cross-entropy constraint, iterating continuously until convergence;
(4) inputting an unlabeled medical image into the generator to generate the corresponding feature probability map, passing the map through an argmax function, fusing it with the original medical image, and inputting the result into a discriminator, which outputs a decision indicator and a feature vector;
(5) if the value of the indicator is larger than a set threshold, using the feature probability map generated in step (4) as a pseudo label for the unlabeled medical image, and then performing self-learning supervised training of the generator's neural network;
(6) inputting the annotated medical image x_a and the unlabeled medical image x_u into the generator to generate the feature probability maps P_a and P_u respectively; fusing P_a with x_a (a concatenation operation) to form f_a, and fusing P_u with x_u to form f_u;
(7) inputting f_a and f_u into the discriminator, which outputs two indicators I_a and I_u and two feature vectors F_a and F_u; constraining the two feature vectors with the L1 norm, applying a binary cross-entropy constraint between I_a and 1 and between I_u and 0 to complete the training of the discriminator, and finally using the trained network for image segmentation.
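The decision logic of steps (4) and (5) above can be sketched in pure Python with toy stand-ins for the networks (the function names, stub outputs, and pixel values here are illustrative, not from the patent):

```python
import math

def softmax(logits):
    """Per-pixel softmax over the class logits (the feature probability map)."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def self_learning_step(image, generator, discriminator, threshold=0.6):
    """Steps (4)-(5): build the probability map, fuse it with the image,
    and accept the argmax map as a pseudo label only if the
    discriminator's indicator exceeds the threshold."""
    prob_map = [softmax(px) for px in generator(image)]   # step (4)
    pseudo = [p.index(max(p)) for p in prob_map]          # argmax per pixel
    fused = list(zip(image, pseudo))                      # fuse map with image
    indicator, features = discriminator(fused)
    if indicator > threshold:                             # step (5)
        return pseudo, features
    return None, features

# Toy stubs: a 3-pixel image with 2 classes.
gen = lambda img: [[2.0, 0.1], [0.0, 1.5], [3.0, -1.0]]
disc = lambda fused: (0.9, [0.2, 0.4])

pseudo, feats = self_learning_step([0.1, 0.5, 0.9], gen, disc)
print(pseudo)  # -> [0, 1, 0]
```

In the patent the generator and discriminator are deep networks and the fusion is a channel-wise concatenation; the stubs above only illustrate the accept/reject control flow around the indicator threshold.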
Further, the generator employs a ResNet-101-based DeepLab network.
Further, the feature probability map is defined as follows:
P(i,j,n) = exp(f(i,j,n)) / Σ_{n'=1}^{N} exp(f(i,j,n'))
wherein: P(i,j,n) is the feature probability value of the element in row i, column j of the feature probability map for the n-th class, f(i,j,n) is the feature information of the element in row i, column j for the n-th class, and N is the total number of classes of the medical images.
Further, the expression of the cross-entropy function is as follows:
L_ce = − Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{n=1}^{N} y_a(i,j,n) log P(i,j,n)
wherein: P(i,j,n) is the feature probability value of the element in row i, column j of the feature probability map for the n-th class, L_ce is the cross-entropy function, W and H are the width and height of the image respectively, and y_a(i,j,n) is the label information corresponding to the element in row i, column j for the n-th class.
Further, the discriminator adopts a four-layer neural network structure.
Further, in steps (3) and (5), the generator's neural network is trained with a stochastic gradient descent method according to the cross-entropy function.
Further, in step (7), the discriminator's neural network is trained with the Adam gradient descent algorithm according to the binary cross entropy.
The invention combines a deep learning segmentation network with a generative adversarial network: the segmentation network performs the conventional segmentation task with the labeled images, and it also serves as the generator, producing pseudo labels and providing input for the discriminator. The whole training process is an end-to-end closed loop in which unlabeled data iterates continuously through the generator and the discriminator in a self-learning manner, so that both parts are optimized simultaneously. Under this self-learning cycle, the network can generate high-quality pseudo labels for supervised learning, so its final test results approach those of full supervision, effectively alleviating the shortage of annotated training data.
Drawings
FIG. 1 is a schematic structural diagram of a semi-supervised model of the present invention.
FIG. 2 is a flowchart illustrating steps of an image segmentation method according to the present invention.
Fig. 3(a) is the training set image visualization (axial slice).
Fig. 3(b) is a pixel-by-pixel label visualization (axial slice) of the training set.
Fig. 3(c) is the result (axial slice) generated by semi-supervised training using half of the training labels.
Fig. 3(d) is the result (axial slice) generated by fully supervised training using half of the training labels.
Fig. 4(a) is a test set image visualization (axial slice).
Fig. 4(b) is a pixel-by-pixel label visualization (axial slice) of the test set.
Fig. 4(c) is the result of generation (axial slice) using the semi-supervised model.
Fig. 4(d) is the result of generation (axial slice) using the fully supervised model.
Fig. 5(a) shows the semi-supervised, fully supervised, and ground-truth edge contours (axial slice) obtained using half of the training data.
Fig. 5(b) shows the semi-supervised, fully supervised, and ground-truth edge contours (axial slice) obtained using a quarter of the training data.
Fig. 5(c) shows the semi-supervised, fully supervised, and ground-truth edge contours (axial slice) obtained using one eighth of the training data.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
The semi-supervised model of the invention is shown in Fig. 1. The labeled image x_a is fed into a ResNet-101-based DeepLab generator to extract feature information f and obtain the corresponding predicted feature map P, which is compared against the label with a cross-entropy loss; the unlabeled data x_u is likewise input into the generator to produce a probability prediction map P, defined as follows:
P(i,j,n) = exp(f(i,j,n)) / Σ_{n'=1}^{N} exp(f(i,j,n'))
wherein: (i,j) is the coordinate of each point of the feature map f, and N is the total number of classes.
To determine the reliability of the prediction map, it is fused with x_u and input into the discriminator (a four-layer neural network), which outputs an indicator p_d. If the value of the indicator is larger than the set threshold, the argmax of the probability map is used as a pseudo label for self-learning training, with the following training loss:
L_semi = − Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{n=1}^{N} ŷ_u(i,j,n) log P_u(i,j,n)
wherein ŷ_u is the one-hot pseudo label obtained by taking the argmax of P_u over the classes.
in each iteration cycle, the labeled image generates a probability distribution map P through the generatoraThe image without the label also passes through the generator to generate a probability distribution map Pu. In order to make the distribution of the output data uniform, PaWill be in contact with xaWill be fused (ligation splicing operation) to form fa;PuWill be at xuCarrying out fusion to form fu,faAnd fuWill be input to a discriminator which will output two feature vectors FaAnd FuWe use the L1 norm constraint on the two vectors, namely Lfm=‖Fa-FuII, increasing the uniformity of the data distribution, wherein FaAnd FuThe expression is as follows:
Fa=D(C(ya,xa))
Fu=D(C(p,xu))
at the same time, f after fusionaAnd fuWill generate indicator I through the discriminating networkaAnd Iu,IaWill be cross entropy constrained with 1 for otitis, IuCarrying out binary cross entropy constraint on the sum of the two signals and 0, and constraining L by a discriminator based on the binary cross entropydThe following; under this constraint, the discriminator will be able to better discriminate between the pseudo label generated by the generator and the real label.
Figure BDA0002894103790000053
Wherein: dis (disease)a、DisuThe data distribution is marked with a set and is not marked with a set.
The flow of the method of the invention is shown in Fig. 2. First, the labeled data set is randomly divided:
(x_a, y_a) = random_select(x, ratio)
The remaining portion serves as the unlabeled data set, where ratio takes the values 1/2, 1/4, or 1/8.
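A sketch of this random split, assuming `random_select` simply shuffles sample indices and keeps a `ratio` fraction as the labeled set (the seed and exact API are illustrative assumptions, not the patent's code):

```python
import random

def random_select(data, ratio, seed=0):
    """Split sample indices into a labeled subset of size ratio*len(data)
    and an unlabeled remainder."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    k = int(len(data) * ratio)
    return sorted(idx[:k]), sorted(idx[k:])

labeled, unlabeled = random_select(list(range(8)), ratio=0.5)
print(len(labeled), len(unlabeled))  # -> 4 4
```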
Then, a generator G and a discriminator D are constructed:
For the labeled part, the generator is iterated with the SGD gradient descent algorithm to minimize the cross-entropy objective; each iteration step is:
g_t = ∂L_ce/∂w_t
w_{t+1} = w_t − α·g_t
wherein: α is the learning rate, w is the matrix of network parameters, and N is the total number of classes. Using stochastic gradient descent throughout accelerates convergence.
The iterative convergence condition is:
|L′_ce − L″_ce| ≤ ε
wherein: L′_ce and L″_ce are the loss values before and after an iteration, and ε is the convergence threshold, set to 10^-4.
Below, we perform experiments with right-ventricle MRI images to verify the effectiveness of this embodiment. Alongside our method, a fully supervised method is run under the same data set division for comparison, and the segmentation results of the two are compared. The data are divided with ratios of 1/2, 1/4, and 1/8, and the images are normalized for the experiments; Table 1 gives the specific settings:
TABLE 1
For the unlabeled part, the segmentation network likewise uses gradient descent for model convergence, and the discriminator uses the Adam algorithm for its gradient descent:
g_t = ∂L_d/∂w_t
m_t = β_1·m_{t−1} + (1 − β_1)·g_t
v_t = β_2·v_{t−1} + (1 − β_2)·g_t²
m̂_t = m_t / (1 − β_1^t)
v̂_t = v_t / (1 − β_2^t)
w_{t+1} = w_t − α·m̂_t / (√v̂_t + ε)
wherein: β_1, β_2, α and ε are hyperparameters set to 0.9, 0.999, 0.001 and 10^-8 respectively, and m and v are the first- and second-moment (momentum) estimates.
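The Adam update above can be checked with a scalar parameter in a few lines of pure Python (an illustrative sketch with a made-up gradient, using the hyperparameter values stated in the text):

```python
import math

def adam_step(w, g, m, v, t, alpha=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter w with gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g          # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, g=2.0, m=m, v=v, t=1)
print(round(w, 6))  # -> 0.999
```

At t = 1 the bias correction makes m̂ equal to the raw gradient, so the first step moves the parameter by almost exactly alpha in the descent direction, regardless of the gradient's magnitude.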
In the experiments, an NVIDIA 1080Ti GPU is used for acceleration; the overall training batch size is 4, the images are resized to 216×216, the initial learning rates of the generator network and the discriminator network are 0.00025 and 0.0001, the indicator threshold is 0.6, and the whole training process requires 10000 iterations. The fully supervised training data set is identical to the corresponding semi-supervised annotated data, and the same segmentation network is used as the generator. Figs. 3(a) and 4(a) are original images; Figs. 3(b)–3(d) and Figs. 4(b)–4(d) are right-ventricle result maps obtained using label proportions of 1/2, 1/4 and 1/8 respectively; Figs. 5(a)–5(c) are right-ventricle segmentation contours obtained using label proportions of 1/2, 1/4 and 1/8 respectively, where the light-colored curve is the semi-supervised contour, the dark-colored curve is the fully supervised contour, and each row shows the same image. Figs. 3(a)–3(d) and 4(a)–4(d) show that the overall segmentation quality degrades as the amount of labeled data decreases, but Figs. 5(a)–5(c) show that the results of the present invention remain more continuous and accurate than those of the fully supervised method.
The semi-supervised segmentation method of the invention makes maximal use of unlabeled data and achieves good segmentation accuracy with only a small number of annotations. Because of the particularities of medical data, pixel-by-pixel annotations are relatively scarce, and the semi-supervised method exploits unlabeled data better; it therefore alleviates, to a certain extent, the traditional fully supervised algorithms' need for large numbers of labels, and better supports intelligent medical image analysis when annotations are scarce.
The embodiments described above are presented to enable a person of ordinary skill in the art to make and use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art based on this disclosure fall within the protection scope of the present invention.

Claims (7)

1. A semi-supervised medical image segmentation method based on a generative adversarial network, comprising the following steps:
(1) separating the medical images into unlabeled and labeled sets, and normalizing all images to the same size;
(2) inputting the labeled medical images into a generator based on a deep learning neural network to extract feature information, and outputting the corresponding feature probability maps;
(3) training the generator's neural network under a cross-entropy constraint, iterating continuously until convergence;
(4) inputting an unlabeled medical image into the generator to generate the corresponding feature probability map, passing the map through an argmax function, fusing it with the original medical image, and inputting the result into a discriminator, which outputs a decision indicator and a feature vector;
(5) if the value of the indicator is larger than a set threshold, using the feature probability map generated in step (4) as a pseudo label for the unlabeled medical image, and then performing self-learning supervised training of the generator's neural network;
(6) inputting the annotated medical image x_a and the unlabeled medical image x_u into the generator to generate the feature probability maps P_a and P_u respectively; fusing P_a with x_a to form f_a, and fusing P_u with x_u to form f_u;
(7) inputting f_a and f_u into the discriminator, which outputs two indicators I_a and I_u and two feature vectors F_a and F_u; constraining the two feature vectors with the L1 norm, applying a binary cross-entropy constraint between I_a and 1 and between I_u and 0 to complete the training of the discriminator, and finally using the trained network for image segmentation.
2. The semi-supervised medical image segmentation method of claim 1, wherein: the generator employs a ResNet-101-based DeepLab network.
3. The semi-supervised medical image segmentation method of claim 1, wherein the feature probability map is defined as follows:
P(i,j,n) = exp(f(i,j,n)) / Σ_{n'=1}^{N} exp(f(i,j,n'))
wherein: P(i,j,n) is the feature probability value of the element in row i, column j of the feature probability map for the n-th class, f(i,j,n) is the feature information of the element in row i, column j for the n-th class, and N is the total number of classes of the medical images.
4. The semi-supervised medical image segmentation method of claim 1, wherein the expression of the cross-entropy function is as follows:
L_ce = − Σ_{i=1}^{H} Σ_{j=1}^{W} Σ_{n=1}^{N} y_a(i,j,n) log P(i,j,n)
wherein: P(i,j,n) is the feature probability value of the element in row i, column j of the feature probability map for the n-th class, L_ce is the cross-entropy function, W and H are the width and height of the image respectively, and y_a(i,j,n) is the label information corresponding to the element in row i, column j for the n-th class.
5. The semi-supervised medical image segmentation method of claim 1, wherein: the discriminator adopts a four-layer neural network structure.
6. The semi-supervised medical image segmentation method of claim 1, wherein: in steps (3) and (5), the generator's neural network is trained with a stochastic gradient descent method according to the cross-entropy function.
7. The semi-supervised medical image segmentation method of claim 1, wherein: in step (7), the discriminator's neural network is trained with the Adam gradient descent algorithm according to the binary cross entropy.
CN202110045408.0A 2021-01-12 2021-01-12 Semi-supervised medical image segmentation method based on generative adversarial network Active CN112837338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110045408.0A CN112837338B (en) 2021-01-12 2021-01-12 Semi-supervised medical image segmentation method based on generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110045408.0A CN112837338B (en) 2021-01-12 2021-01-12 Semi-supervised medical image segmentation method based on generative adversarial network

Publications (2)

Publication Number Publication Date
CN112837338A CN112837338A (en) 2021-05-25
CN112837338B true CN112837338B (en) 2022-06-21

Family

ID=75928083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110045408.0A Active CN112837338B (en) 2021-01-12 2021-01-12 Semi-supervised medical image segmentation method based on generative adversarial network

Country Status (1)

Country Link
CN (1) CN112837338B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657455B (en) * 2021-07-23 2024-02-09 西北工业大学 Semi-supervised learning method based on triple play network and labeling consistency regularization
CN114240955B (en) * 2021-12-22 2023-04-07 电子科技大学 Semi-supervised cross-domain self-adaptive image segmentation method
CN116344004B (en) * 2023-05-31 2023-08-08 苏州恒瑞宏远医疗科技有限公司 Image sample data amplification method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110310345A (en) * 2019-06-11 2019-10-08 同济大学 A kind of image generating method generating confrontation network based on hidden cluster of dividing the work automatically
CN110443815A (en) * 2019-08-07 2019-11-12 中山大学 In conjunction with the semi-supervised retina OCT image layer dividing method for generating confrontation network
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
CN112041912A (en) * 2018-02-12 2020-12-04 美商史柯比人工智慧股份有限公司 Systems and methods for diagnosing gastrointestinal tumors

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR20200075344A (en) * 2018-12-18 2020-06-26 삼성전자주식회사 Detector, method of object detection, learning apparatus, and learning method for domain transformation

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN112041912A (en) * 2018-02-12 2020-12-04 美商史柯比人工智慧股份有限公司 Systems and methods for diagnosing gastrointestinal tumors
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
CN110310345A (en) * 2019-06-11 2019-10-08 同济大学 A kind of image generating method generating confrontation network based on hidden cluster of dividing the work automatically
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
CN110443815A (en) * 2019-08-07 2019-11-12 中山大学 In conjunction with the semi-supervised retina OCT image layer dividing method for generating confrontation network

Also Published As

Publication number Publication date
CN112837338A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN112837338B (en) Semi-supervised medical image segmentation method based on generative adversarial network
WO2022041307A1 (en) Method and system for constructing semi-supervised image segmentation framework
Li et al. Contour knowledge transfer for salient object detection
Huang et al. Foreground-action consistency network for weakly supervised temporal action localization
Wang et al. Tokencut: Segmenting objects in images and videos with self-supervised transformer and normalized cut
CN108765383B (en) Video description method based on deep migration learning
Basak et al. Pseudo-label guided contrastive learning for semi-supervised medical image segmentation
Feng et al. Deep graph cut network for weakly-supervised semantic segmentation
Wang et al. When cnn meet with vit: Towards semi-supervised learning for multi-class medical image semantic segmentation
CN106504255A (en) A kind of multi-Target Image joint dividing method based on multi-tag multi-instance learning
CN112927266B (en) Weak supervision time domain action positioning method and system based on uncertainty guide training
CN114359298A (en) Semi-supervised dynamic self-learning segmentation method for cardiac MRI
CN115311605B (en) Semi-supervised video classification method and system based on neighbor consistency and contrast learning
Chong et al. Erase then grow: Generating correct class activation maps for weakly-supervised semantic segmentation
CN105809671A (en) Combined learning method for foreground region marking and depth order inferring
CN117611601B (en) Text-assisted semi-supervised 3D medical image segmentation method
Wang et al. CWC-transformer: a visual transformer approach for compressed whole slide image classification
Sun et al. Attentional prototype inference for few-shot segmentation
Du et al. Boosting dermatoscopic lesion segmentation via diffusion models with visual and textual prompts
Deng et al. Boosting semi-supervised learning with Contrastive Complementary Labeling
Zhu et al. Inherent consistent learning for accurate semi-supervised medical image segmentation
Jin et al. Inter-and intra-uncertainty based feature aggregation model for semi-supervised histopathology image segmentation
Banerjee et al. Explaining deep-learning models using gradient-based localization for reliable tea-leaves classifications
Chen et al. EVIL: Evidential inference learning for trustworthy semi-supervised medical image segmentation
CN116580225A (en) Rectal cancer CT image classification method based on spatial information drive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant