CN112396588A - Fundus image identification method and system based on countermeasure network and readable medium - Google Patents

Fundus image identification method and system based on countermeasure network and readable medium Download PDF

Info

Publication number
CN112396588A
CN112396588A (application CN202011320308.6A)
Authority
CN
China
Prior art keywords
data set
loss function
domain data
source
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011320308.6A
Other languages
Chinese (zh)
Inventor
杨刚
孙蕴哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Renmin University of China
Original Assignee
Renmin University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renmin University of China filed Critical Renmin University of China
Priority to CN202011320308.6A priority Critical patent/CN112396588A/en
Publication of CN112396588A publication Critical patent/CN112396588A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30041 - Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention relates to a fundus image recognition method, system, and readable medium based on an adversarial network. The method comprises: S1, feeding a source domain data set A into a classification network for training to obtain the pathological features of the source domain data set A; S2, extracting the contextual features of a target domain data set B through a generator and generating a source-like target data set B' from those contextual features and the pathological features of the source domain data set A; S3, feeding the source domain data set A, the target domain data set B, and the source-like target data set B' into the discriminator of the adversarial network model for discrimination to obtain the loss function of the adversarial network model; S4, minimizing the loss function so that the generator and the discriminator continually compete, yielding an optimal adversarial network model; S5, feeding the target data set B into the optimal adversarial network model to obtain an optimal source-like target data set B', which is substituted into the classification network of step S1 for fundus image recognition. The invention solves the problem of fundus image domain adaptation and can generate a large number of new images from the original images.

Description

Fundus image identification method and system based on countermeasure network and readable medium
Technical Field
The invention relates to a method and a system for identifying fundus images based on an adversarial network, and belongs to the technical field of image processing.
Background
The segmentation, localization, and identification of fundus images are very important for computer-aided diagnosis of ophthalmic diseases, such as glaucoma screening and diabetic retinopathy screening. Consequently, more and more deep learning algorithms and models are being applied to the segmentation, localization, and recognition of fundus images.
In practice, fundus images come from different hospitals. Because of differences in equipment, illumination conditions at the time of photography, data volume, and other factors, the same model can perform very differently on different data sets; that is, a data domain adaptation problem exists. For example, a model trained on the data set of hospital A predicts well on hospital A's data but poorly on hospital B's. The fundus images taken by hospitals A and B differ substantially, and the model has captured only the characteristics of hospital A's images.
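This domain-shift effect can be reproduced with a toy numpy experiment (illustrative only; the intensity values and the threshold classifier are invented for the sketch, not taken from the patent): a classifier fit to "hospital A" image intensities degrades sharply on brightness-shifted "hospital B" data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Hospital A": healthy fundus images are darker than diseased ones.
healthy_a = rng.normal(0.30, 0.05, 500)
disease_a = rng.normal(0.60, 0.05, 500)
threshold = (healthy_a.mean() + disease_a.mean()) / 2   # "trained" on A only

# "Hospital B": same pathology, but a brighter camera shifts all intensities.
shift = 0.25
healthy_b = healthy_a + shift
disease_b = disease_a + shift

def accuracy(healthy, disease, thr):
    """Fraction correctly classified by the simple intensity threshold."""
    correct = np.sum(healthy < thr) + np.sum(disease >= thr)
    return correct / (len(healthy) + len(disease))

acc_a = accuracy(healthy_a, disease_a, threshold)   # near-perfect on A
acc_b = accuracy(healthy_b, disease_b, threshold)   # degraded on B: healthy B crosses the threshold
```

The threshold learned on A sits between A's two classes, but the brightness shift pushes B's healthy images past it, which is exactly the cross-domain failure described above.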
In order to solve the above problems, the prior art mainly introduces adversarial networks for image classification and recognition:
patent 1(CN110097103A) discloses a semi-supervised image classification method based on a countermeasure network, which utilizes the infinitesimal game of the countermeasure network to achieve nash balance of network training; the classification task is completed on the detection target by adopting an auxiliary classifier network in the discriminator network, and the implementation steps comprise: (1) selecting and downloading an image classification standard training sample set, and carrying out normalization operation; (2) setting supervised learning parameters: firstly, setting the number of training labels for supervised learning aiming at a normalized training sample set; then setting the number of samples of each network training; setting a flag bit for supervising learning according to the number of the labels and the number of samples for each network training; (3) establishing a confrontation network consisting of a generator network, a discriminator network and an auxiliary classifier in parallel, wherein the generator network is set as 5 layers, the discriminator network is set as 4 layers, and the auxiliary classifier is set as 5 layers; (4) training an anti-network; (5) classifying the image to be detected: and inputting the image containing the target to be classified into the trained confrontation network model, outputting the probability value of the category option, and selecting the category option with the highest probability value as a classification result for outputting.
Patent 2 (CN110084863A) discloses a multi-domain image conversion method and system based on an adversarial network. The method inputs an original image x and an original image y of two specified modalities X and Y; in the reconstruction-training part it encodes and decodes the original images x and y to obtain, respectively, original-image features, a reconstructed image, and reconstruction features, and performs modality-discrimination adversarial learning on both the features and the images; the cyclic-training part exchanges modality encoders on the original-image features to generate a reconstructed image, reconstructed-image features, and a cyclic reconstruction image, again performs modality-discrimination adversarial learning on the features and images, and finally outputs the cyclic reconstruction image.
In the semi-supervised image classification method of patent 1, the training set and test set both come from standard images, and the domain adaptation problem that arises when the test set comes from a different domain is not discussed. Patent 2 does not consider the relationship between the original-image features and the reconstruction features; if they differ greatly, the labels of the reconstructed images change.
Disclosure of Invention
In view of the above problems, it is an object of the present invention to provide a fundus image recognition method and system based on an adversarial network. For fundus images from different data sets, a generative adversarial model generates, from one data set, a new data set containing the image features of the other data sets, so that a model trained on the new data set can capture the image features of different data sets simultaneously, thereby solving the data domain adaptation problem.
In order to achieve this purpose, the invention adopts the following technical scheme. A fundus image recognition method based on an adversarial network comprises the following steps: S1, feed a source domain data set A into a classification network for training to obtain the pathological features of the source domain data set A; S2, extract the contextual features of a target domain data set B through a generator and generate a source-like target data set B' from those contextual features and the pathological features of the source domain data set A; S3, feed the source domain data set A, the target domain data set B, and the source-like target data set B' into the discriminator of the adversarial network model for discrimination to obtain the loss function of the adversarial network model; S4, minimize the loss function so that the generator and the discriminator continually compete, yielding an optimal adversarial network model; S5, feed the target data set B into the optimal adversarial network model to obtain an optimal source-like target data set B', and substitute it into the classification network of step S1 for fundus image recognition.
Further, in step S2, AdaIN blocks and residual blocks are used to migrate the pathological features of the source domain data set A into the target domain data set B, and a source-like target data set B' is generated according to the contextual features of the target domain data set B.
Further, the AdaIN block adopted in step S2 is calculated as:

AdaIN(x_t, x_s) = σ(x_s) · ((x_t - μ(x_t)) / σ(x_t)) + μ(x_s)

where x_t and x_s represent the target domain data set and the source domain data set respectively, μ and σ represent the mean and standard deviation of each channel over the spatial dimensions, and AdaIN(x_t, x_s) is a style transfer operation.
Further, the loss function in step S3 includes a reconstruction loss function and an adversarial loss function.
Further, the reconstruction loss function is obtained from the target domain data set B and the source-like target data set B'; the reconstruction loss ensures that the generated source-like target data set B' has the same labels as the target domain data set B. The adversarial loss function is obtained from the source domain data set A, the target domain data set B, and the source-like target data set B'.
Further, the loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator.
Further, the reconstruction loss function is aided by an optic cup and optic disc localization algorithm, which locates and crops the optic cup and optic disc regions in the target domain data set B and the source-like target data set B', automatically adjusts the proportion of the reconstruction loss function L_rec in the total loss function, and adaptively modifies the generated picture relative to the original.
The invention also discloses a fundus image recognition system based on an adversarial network, comprising: a classification module, which feeds a source domain data set A into a classification network for training to obtain the pathological features of the source domain data set A; a source-like target data set generation module, which extracts the contextual features of a target domain data set B through a generator and generates a source-like target data set B' from those contextual features and the pathological features of the source domain data set A; a loss function generation module, which feeds the source domain data set A, the target domain data set B, and the source-like target data set B' into the discriminator of the adversarial network model for discrimination to obtain the loss function of the adversarial network model; an optimal model generation module, which minimizes the loss function so that the generator and the discriminator continually compete, yielding an optimal adversarial network model; and a recognition module, which substitutes the target data set B into the optimal adversarial network model to obtain an optimal source-like target data set B' and substitutes B' into the classification network of the classification module for fundus image recognition.
Further, the loss function generated by the loss function generation module comprises a reconstruction loss function and an adversarial loss function, and the loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator.
The invention further discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of any of the above adversarial-network-based fundus image recognition methods.
Due to the adoption of the above technical scheme, the invention has the following advantages: 1. The invention solves the problem of fundus image domain adaptation and can generate a large number of new images from the original images. 2. The new images generated by the invention keep their labels consistent with the original images. 3. The invention can flexibly adjust the pixel distribution characteristics of the fundus image and can apply image adjustments specific to the optic cup and optic disc regions of the fundus image.
Drawings
FIG. 1 is a schematic diagram of a fundus image recognition method based on an adversarial network in an embodiment of the present invention;
FIG. 2 is a diagram illustrating a method for generating a class source target data set B' according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below by way of specific embodiments so that those skilled in the art can better understand its technical approach. It should be understood, however, that the detailed description is provided only for a better understanding of the invention and should not be taken as limiting it. In describing the present invention, the terminology used is for the purpose of description only and is not intended to indicate or imply relative importance.
A generative adversarial network (GAN) is often used to generate new data from a known data set; in principle the method can learn to mimic any data distribution.
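The adversarial interplay between generator and discriminator can be made concrete with a small numpy sketch (illustrative only; the function names are invented, and this is the standard GAN objective rather than the patent's exact loss): the discriminator tries to push this quantity up, the generator to pull it down.

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-8):
    """Standard GAN adversarial value given discriminator probabilities.

    d_real: discriminator outputs on real (source-domain) images, in (0, 1).
    d_fake: discriminator outputs on generated (source-like) images, in (0, 1).
    The discriminator maximizes this; the generator minimizes it.
    """
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1 - d_fake)))

# A confident discriminator yields a value near 0; a fully fooled one
# (outputting 0.5 everywhere) yields 2*log(0.5), about -1.386.
confident = adversarial_loss(np.array([0.99]), np.array([0.01]))
fooled = adversarial_loss(np.array([0.5]), np.array([0.5]))
```

As training proceeds, the generator drives the discriminator's outputs toward 0.5, moving the value from the "confident" regime toward the "fooled" one.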
Example one
This embodiment discloses a fundus image recognition method based on an adversarial network, comprising the following steps, as shown in FIG. 1:
s1, the source domain data set A is brought into a classification network for training, and the pathological features of the source domain data set A are obtained.
S2, the generator extracts the context feature of the target domain data set B and generates a class source target data set B' according to the context feature and the pathological feature of the source domain data set A.
In this step, AdaIN blocks and residual blocks are used to transfer the pathological features of the source domain data set A into the target domain data set B, and a source-like target data set B' is generated according to the contextual features of the target domain data set B. The residual block precedes the AdaIN block: an AdaIN block is inserted after every 2 residual blocks to fuse the distributions of different data. The residual block alleviates the vanishing-gradient problem in complex network models and improves the learning capacity of deep models; it is a common component in conditional GANs (CGANs).
The AdaIN block is calculated as:

AdaIN(x_t, x_s) = σ(x_s) · ((x_t - μ(x_t)) / σ(x_t)) + μ(x_s)

where x_t and x_s represent the target domain data set and the source domain data set respectively, and μ and σ represent the mean and standard deviation of each channel over the spatial dimensions. AdaIN performs feature fusion by matching these statistics; the computation of the mean and standard deviation does not itself introduce any new features. The AdaIN block is therefore responsible both for preserving the contextual characteristics of the target domain data set and for promoting the fusion of the pathological characteristics of the source domain data set.
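The AdaIN computation and the "AdaIN after every 2 residual blocks" layout described above can be sketched in numpy as follows (a minimal illustration with invented shapes and toy residual blocks, not the patent's actual network):

```python
import numpy as np

def adain(x_t, x_s, eps=1e-5):
    """AdaIN(x_t, x_s): normalize the target features per channel, then
    rescale them with the source's per-channel mean and standard deviation.
    Arrays are (channels, spatial)."""
    mu_t = x_t.mean(axis=1, keepdims=True)
    sd_t = x_t.std(axis=1, keepdims=True)
    mu_s = x_s.mean(axis=1, keepdims=True)
    sd_s = x_s.std(axis=1, keepdims=True)
    return sd_s * (x_t - mu_t) / (sd_t + eps) + mu_s

def residual_block(x, w):
    """Toy residual block: a nonlinear transform plus a skip connection."""
    return x + np.tanh(x @ w)

def generator_trunk(x_t, x_s, weights):
    """Interleave an AdaIN block after every 2 residual blocks, fusing the
    source-domain statistics into the target-domain feature stream."""
    h_t, h_s = x_t, x_s
    for i, w in enumerate(weights, start=1):
        h_t = residual_block(h_t, w)
        h_s = residual_block(h_s, w)
        if i % 2 == 0:  # AdaIN after every 2 residual blocks
            h_t = adain(h_t, h_s)
    return h_t

rng = np.random.default_rng(7)
x_t = rng.normal(0.0, 1.0, size=(3, 64))   # target-domain features
x_s = rng.normal(5.0, 2.0, size=(3, 64))   # source-domain features
y = adain(x_t, x_s)                        # now carries source statistics
out = generator_trunk(x_t, x_s, [rng.normal(scale=0.1, size=(64, 64)) for _ in range(4)])
```

After the AdaIN call, the per-channel mean and standard deviation of `y` match those of the source features while the spatial structure comes from the target, which is exactly the fusion role described above.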
S3, the source domain data set A, the target domain data set B, and the source-like target data set B' are fed into the discriminator of the adversarial network model for discrimination, and the loss function of the adversarial network model is obtained.
The loss function in this step comprises a reconstruction loss function and an adversarial loss function.
The reconstruction loss function is obtained from the target domain data set B and the source-like target data set B'; the reconstruction loss ensures that the generated source-like target data set B' has the same labels as the target domain data set B. The adversarial loss is obtained from the source domain data set A, the target domain data set B, and the source-like target data set B'; it ensures that features extracted from data of different sources are consistent, solving the domain adaptation problem caused by differences in the features extracted from different source data.
The loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator. L_rec ensures label consistency between the generated source-like target data set B' and the target domain data set B.
The reconstruction loss function is aided by an optic cup and optic disc localization algorithm (such as Faster R-CNN): the optic cup and optic disc regions are located and cropped in the target domain data set B and the source-like target data set B', and the proportion of the reconstruction loss L_rec in the total loss is adjusted automatically, so that generation adapts to the original picture. This allows the spatial content of in-domain pictures to be adjusted while keeping the lesion information of the original picture, effectively expanding the labeled samples of the target domain.
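The patent does not spell out the exact weighting rule, so the numpy sketch below shows only one plausible reading with invented names: a located optic cup/disc bounding box (which would come from a detector such as Faster R-CNN; here it is simply given) up-weights the reconstruction error inside the lesion-relevant region.

```python
import numpy as np

def roi_weighted_rec_loss(generated, target, box, roi_weight=4.0):
    """Reconstruction loss with extra weight inside a located optic cup/disc
    bounding box (y0, y1, x0, x1), so errors in the clinically important
    region cost more. The box is assumed to be supplied by a detector."""
    weights = np.ones_like(target)
    y0, y1, x0, x1 = box
    weights[y0:y1, x0:x1] = roi_weight
    return float(np.linalg.norm(np.sqrt(weights) * (generated - target)))

target = np.zeros((8, 8))
generated = np.ones((8, 8))
plain = float(np.linalg.norm(generated - target))            # unweighted L2 norm
weighted = roi_weighted_rec_loss(generated, target, (2, 4, 2, 4))
```

With the same pixel error everywhere, the weighted loss exceeds the plain one exactly because the box region counts more, which is the intended effect of letting the localization result steer L_rec.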
The γ coefficient can be set to a fixed value or varied linearly or nonlinearly during training, gradually decreasing or increasing as the number of training iterations grows, so as to adapt to different model settings and obtain better generated images.
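A minimal sketch of the combined objective and a linearly varying γ (the schedule endpoints 10.0 and 1.0 are invented for illustration; the patent only states that γ may be fixed or vary during training):

```python
import numpy as np

def reconstruction_loss(generated, target):
    """L_rec(G) = ||G(x_t, x_s) - x_t||_2: keeps the generated source-like
    image close to the target image so its label is preserved."""
    return float(np.linalg.norm(generated - target))

def total_generator_loss(adv_loss, rec_loss, gamma):
    """Weighted objective the generator minimizes: adversarial term plus a
    gamma-weighted reconstruction term."""
    return adv_loss + gamma * rec_loss

def linear_gamma(step, total_steps, gamma_start=10.0, gamma_end=1.0):
    """One possible schedule: gamma decays linearly over training."""
    frac = min(step / total_steps, 1.0)
    return gamma_start + frac * (gamma_end - gamma_start)

target = np.zeros((4, 4))
generated = np.full((4, 4), 0.5)
rec = reconstruction_loss(generated, target)          # sqrt(16 * 0.25) = 2.0
loss_early = total_generator_loss(0.3, rec, linear_gamma(0, 100))
loss_late = total_generator_loss(0.3, rec, linear_gamma(100, 100))
```

Early in training the reconstruction term dominates, anchoring the output to the target image; as γ decays, the adversarial term gains relative influence.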
S4, the loss function is minimized, and the generator and the discriminator compete continually, yielding the optimal adversarial network model.
S5, the target data set B is substituted into the optimal adversarial network model to obtain the optimal source-like target data set B', and B' is substituted into the classification network of step S1 for fundus image recognition.
Experimental results show that classification using the source-like target data set B' outperforms direct use of the target domain data set B, solving the domain adaptation problem of training models on cross-domain data sets.
Example two
Based on the same inventive concept, this embodiment discloses a fundus image recognition system based on an adversarial network, comprising:
the classification module is used for bringing the source domain data set A into a classification network for training to obtain the pathological features of the source domain data set A;
the class source target data set generation module is used for extracting the context characteristics of the target domain data set B through the generator and generating a class source target data set B' according to the context characteristics and the pathological characteristics of the source domain data set A;
the loss function generating module is used for bringing the source domain data set A, the target domain data set B and the class source target data set B' into a discriminator of the countermeasure network model for discrimination to obtain a loss function of the countermeasure network model;
the optimal model generation module is used for minimizing the loss function, enabling the generator and the discriminator to continuously resist and obtaining an optimal confrontation network model;
and the identification module is used for substituting the target data set B into the optimal confrontation network model to obtain an optimal class source target data set B ', and substituting the optimal class source target data set B' into the classification network in the step S1 to carry out fundus image identification.
The loss function generated by the loss function generation module comprises a reconstruction loss function and an adversarial loss function. The loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator.
EXAMPLE III
Based on the same inventive concept, this embodiment discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of any of the above adversarial-network-based fundus image recognition methods.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalent substitutions may be made to the embodiments without departing from the spirit and scope of the invention, which are to be covered by the claims. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope of the present application shall likewise fall within its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A fundus image recognition method based on an adversarial network, characterized by comprising the following steps:
S1, feeding a source domain data set A into a classification network for training to obtain pathological features of the source domain data set A;
S2, extracting contextual features of a target domain data set B through a generator, and generating a source-like target data set B' from the contextual features and the pathological features of the source domain data set A;
S3, feeding the source domain data set A, the target domain data set B, and the source-like target data set B' into a discriminator of an adversarial network model for discrimination to obtain a loss function of the adversarial network model;
S4, minimizing the loss function so that the generator and the discriminator continually compete, yielding an optimal adversarial network model;
S5, feeding the target data set B into the optimal adversarial network model to obtain an optimal source-like target data set B', and substituting it into the classification network of step S1 for fundus image recognition.
2. The adversarial-network-based fundus image recognition method according to claim 1, wherein step S2 employs AdaIN blocks and residual blocks to migrate the pathological features of said source domain data set A into said target domain data set B, and generates a source-like target data set B' according to the contextual features of said target domain data set B.
3. The adversarial-network-based fundus image recognition method according to claim 2, wherein the AdaIN block employed in step S2 is calculated by the formula:

AdaIN(x_t, x_s) = σ(x_s) · ((x_t - μ(x_t)) / σ(x_t)) + μ(x_s)

where x_t and x_s represent the target domain data set and the source domain data set respectively, and μ and σ represent the mean and standard deviation of each channel over the spatial dimensions.
4. The adversarial-network-based fundus image recognition method according to any one of claims 1 to 3, wherein the loss function in step S3 includes a reconstruction loss function and an adversarial loss function.
5. The adversarial-network-based fundus image recognition method according to claim 4, wherein said reconstruction loss function is obtained from said target domain data set B and said source-like target data set B', said reconstruction loss being used to ensure that the generated source-like target data set B' and the target domain data set B have the same labels; and the adversarial loss function is obtained from the source domain data set A, the target domain data set B, and the source-like target data set B'.
6. The adversarial-network-based fundus image recognition method according to claim 5, wherein the loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator.
7. The adversarial-network-based fundus image recognition method according to claim 6, wherein said reconstruction loss function is aided by an optic cup and optic disc localization algorithm, which locates and crops the optic cup and optic disc regions in said target domain data set B and said source-like target data set B', automatically adjusts the proportion of the reconstruction loss function L_rec in the total loss function, and adaptively modifies the generated picture relative to the original.
8. A fundus image recognition system based on an adversarial network, characterized by comprising:
a classification module for feeding a source domain data set A into a classification network for training to obtain pathological features of the source domain data set A;
a source-like target data set generation module for extracting contextual features of a target domain data set B through a generator and generating a source-like target data set B' from the contextual features and the pathological features of the source domain data set A;
a loss function generation module for feeding the source domain data set A, the target domain data set B, and the source-like target data set B' into a discriminator of an adversarial network model for discrimination to obtain a loss function of the adversarial network model;
an optimal model generation module for minimizing the loss function so that the generator and the discriminator continually compete, yielding an optimal adversarial network model;
and a recognition module for substituting the target data set B into the optimal adversarial network model to obtain an optimal source-like target data set B' and substituting B' into the classification network of the classification module for fundus image recognition.
9. The adversarial-network-based fundus image recognition system according to claim 8, wherein the loss function generated by said loss function generation module includes a reconstruction loss function and an adversarial loss function, and the loss function is calculated as:

min_G max_D L_adv(G, D) + γ · L_rec(G)

L_rec(G) = ||G(x_t, x_s) - x_t||_2

where L_adv is the adversarial loss function, L_rec is the reconstruction loss function proposed by this patent, γ is a weighting coefficient balancing the adversarial and reconstruction losses, G(x_t, x_s) represents the source-like target data set B' produced by the generator network, x_t represents the target domain data set B, G is the generator, and D is the discriminator.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon; when executed by a processor, the computer program implements the steps of the adversarial-network-based fundus image recognition method according to any one of claims 1 to 7.
CN202011320308.6A 2020-11-23 2020-11-23 Fundus image identification method and system based on countermeasure network and readable medium Pending CN112396588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011320308.6A CN112396588A (en) 2020-11-23 2020-11-23 Fundus image identification method and system based on countermeasure network and readable medium


Publications (1)

Publication Number Publication Date
CN112396588A true CN112396588A (en) 2021-02-23

Family

ID=74606948


Country Status (1)

Country Link
CN (1) CN112396588A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753992A (en) * 2018-12-10 2019-05-14 南京师范大学 The unsupervised domain for generating confrontation network based on condition adapts to image classification method
CN110097131A (en) * 2019-05-08 2019-08-06 南京大学 A kind of semi-supervised medical image segmentation method based on confrontation coorinated training
CN110210514A (en) * 2019-04-24 2019-09-06 北京林业大学 Production fights network training method, image completion method, equipment and storage medium
CN110570433A (en) * 2019-08-30 2019-12-13 北京影谱科技股份有限公司 Image semantic segmentation model construction method and device based on generation countermeasure network
CN111259982A (en) * 2020-02-13 2020-06-09 苏州大学 Premature infant retina image classification method and device based on attention mechanism
CN111666846A (en) * 2020-05-27 2020-09-15 厦门大学 Face attribute identification method and device
US10839269B1 (en) * 2020-03-20 2020-11-17 King Abdulaziz University System for fast and accurate visual domain adaptation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JAEHOON CHOI et al.: "Self-Ensembling with GAN-based Data Augmentation for Domain Adaptation in Semantic Segmentation", 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 6829-6839 *
LI Xirong: "Image tag relevance computation based on soft neighbor voting" (in Chinese), Chinese Journal of Computers, vol. 37, no. 6, pages 1365-1371 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927136A (en) * 2021-03-05 2021-06-08 江苏实达迪美数据处理有限公司 Image reduction method and system based on convolutional neural network domain adaptation
CN113096137A (en) * 2021-04-08 2021-07-09 济南大学 Adaptive segmentation method and system for OCT (optical coherence tomography) retinal image field
CN115409764A (en) * 2021-05-28 2022-11-29 南京博视医疗科技有限公司 Multi-mode fundus blood vessel segmentation method and device based on domain self-adaptation
CN115409764B (en) * 2021-05-28 2024-01-09 南京博视医疗科技有限公司 Multi-mode fundus blood vessel segmentation method and device based on domain self-adaption
CN114299324A (en) * 2021-12-01 2022-04-08 万达信息股份有限公司 Pathological image classification method and system based on multi-scale domain confrontation network
CN114299324B (en) * 2021-12-01 2024-03-29 万达信息股份有限公司 Pathological image classification method and system based on multiscale domain countermeasure network

Similar Documents

Publication Publication Date Title
CN112396588A (en) Fundus image identification method and system based on countermeasure network and readable medium
JP7058373B2 (en) Lesion detection and positioning methods, devices, devices, and storage media for medical images
CN109583342B (en) Human face living body detection method based on transfer learning
Yan et al. Modeling annotator expertise: Learning when everybody knows a bit of something
CN108986140B (en) Target scale self-adaptive tracking method based on correlation filtering and color detection
CN111723654B (en) High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
Marimont et al. Anomaly detection through latent space restoration using vector quantized variational autoencoders
CN103136504B (en) Face identification method and device
US20200184252A1 (en) Deep Learning Network for Salient Region Identification in Images
JP2019521443A (en) Cell annotation method and annotation system using adaptive additional learning
CN112102237A (en) Brain tumor recognition model training method and device based on semi-supervised learning
CN111597946B (en) Processing method of image generator, image generation method and device
CN111242948B (en) Image processing method, image processing device, model training method, model training device, image processing equipment and storage medium
JP2019526869A5 (en)
CN107832721B (en) Method and apparatus for outputting information
CN111028218B (en) Fundus image quality judgment model training method, fundus image quality judgment model training device and computer equipment
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN117392470B (en) Fundus image multi-label classification model generation method and system based on knowledge graph
CN112949456B (en) Video feature extraction model training and video feature extraction method and device
Nirmala et al. HoG based Naive Bayes classifier for glaucoma detection
CN112634221A (en) Image and depth-based cornea level identification and lesion positioning method and system
Zabihi et al. Vessel extraction of conjunctival images using LBPs and ANFIS
CN112785559B (en) Bone age prediction method based on deep learning and formed by mutually combining multiple heterogeneous models
KR102303111B1 (en) Training Data Quality Assessment Technique for Machine Learning-based Software
CN112200005A (en) Pedestrian gender identification method based on wearing characteristics and human body characteristics under community monitoring scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination