CN111950619A - Active learning method based on dual generative adversarial networks

Active learning method based on dual generative adversarial networks

Info

Publication number
CN111950619A
Authority
CN
China
Prior art keywords
pool
image
training
sampling
model
Prior art date
Legal status
Granted
Application number
CN202010779759.XA
Other languages
Chinese (zh)
Other versions
CN111950619B (en)
Inventor
郭继峰
庞志奇
李�禾
李星
费禹潇
Current Assignee
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202010779759.XA priority Critical patent/CN111950619B/en
Publication of CN111950619A publication Critical patent/CN111950619A/en
Application granted granted Critical
Publication of CN111950619B publication Critical patent/CN111950619B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an active learning method based on dual generative adversarial networks. The method comprises the following steps. Step 1: train a model with images from the labeled pool and the unlabeled pool, where training comprises two parts, representation learning and adversarial generation. Step 2: use the converged model to sample images from the candidate pool, then manually annotate the sampled images and generate new images from them. Step 3: transfer the sampled images from the candidate pool to the labeled pool, add the generated images to the candidate pool, and retrain the sampling model on the updated candidate and labeled pools. The invention introduces an adversarial generation mechanism into the pool-based approach, giving the model generative capability and forming two generative adversarial networks within the model. The invention also introduces the concept of "synchronous updating", so that the sampling model is updated in step with the sampling process, ensuring that each sampling round selects the samples carrying the most information at the current stage.

Description

Active learning method based on dual generative adversarial networks
Technical field:
The invention relates to the field of active learning, and in particular to an active learning method based on dual generative adversarial networks.
Background art:
Classification tasks based on deep learning typically require large-scale labeled samples for training, yet in practice the cost of labeling can be prohibitively high, and labels may be impossible to obtain at scale. To remedy this, researchers have proposed active learning. The goal of active learning is to select or generate, from an unlabeled data set, the samples most beneficial to model training, then manually label the selected samples and add them to the training set, so that the task model achieves higher performance at lower labeling cost. Practice shows that, for image classification, active learning can effectively reduce labeling cost while preserving model performance.
Mainstream active learning algorithms can currently be divided roughly into two categories: pool-based methods and synthesis-based methods. The idea of the pool-based approach is to use a fixed sampling strategy to select the samples carrying the most information from a sample pool. According to the sampling strategy, pool-based approaches can be subdivided into uncertainty-based and representation-based methods. Uncertainty-based methods are numerous; for example, uncertainty can be estimated by a probabilistic model in a Bayesian framework, such as a Gaussian process or a Bayesian neural network. Uncertainty heuristics from classical non-Bayesian active learning, such as distance to the decision boundary and conditional entropy, have also been widely studied. Representation-based approaches instead select samples by increasing the diversity of a given batch.
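As a minimal illustration of the entropy-style uncertainty heuristic mentioned above (not the patent's own sampling rule), the following sketch ranks a pool by the predictive entropy of the task model's class probabilities; all probability values are made up for the example:

```python
import numpy as np

def entropy_sampling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Rank pool samples by predictive entropy and return the indices
    of the `budget` most uncertain (highest-entropy) samples."""
    eps = 1e-12  # guards log(0) for fully confident predictions
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-budget:]

# Rows: pool samples; columns: class probabilities from the task model.
probs = np.array([[0.90, 0.10],
                  [0.50, 0.50],   # most uncertain row
                  [0.70, 0.30]])
picked = entropy_sampling(probs, budget=1)  # -> array([1])
```

In a full system the selected indices would be sent for manual annotation and moved to the labeled pool.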
Although pool-based active learning greatly reduces labeling cost compared with conventional training, it shares a common problem: samples drawn from the unlabeled pool are annotated, moved to the labeled pool, and never participate in subsequent sampling. Because the unlabeled pool is finite and the algorithm samples by information content, the information carried per remaining sample necessarily decreases as sampling proceeds, which slows the rate at which the task model improves.
Synthesis-based approaches aid model training by actively synthesizing information-rich samples. A pioneering example is GAAL. Unlike pool-based methods, GAAL aims to generate new samples useful to the model rather than selecting the most informative existing sample; in the ideal case, a generated sample carries more information than any existing one. However, because the GAAL acquisition function must be easy to compute and optimize, the method has limited applicability in active learning.
Summary of the invention:
The invention aims to overcome the shortcomings of existing pool-based active learning methods, and provides an active learning method based on dual generative adversarial networks to address the excessive labeling cost of data sets in deep-learning-based image classification.
An active learning method based on dual generative adversarial networks, characterized by comprising the following steps:
Step 1: train a model with images from the labeled pool and the unlabeled pool, where training comprises two parts, representation learning and adversarial generation; the model comprises a generator G and two discriminators D_1 and D_2, which together form two generative adversarial networks;
Step 2: use the converged model to sample images from the candidate pool X_C, then manually annotate the sampled images and generate new images from them; the candidate pool X_C is initialized with the unlabeled pool X_U; the sampling is carried out by D_1 and the image generation by G;
Step 3: transfer the sampled images from the candidate pool to the labeled pool, add the generated images to the candidate pool, and retrain the sampling model D_1 on the updated candidate and labeled pools; finally, train the task model, a generic image classification model, on the updated labeled pool.
Step 1 comprises the following steps:
Step 1.1: perform representation learning with labeled-pool and unlabeled-pool images, training the first half G_1 of the generator G together with the discriminator D_1. The purpose of G_1 is to map the labeled-pool images x_L and the unlabeled-pool images x_U into the same feature space and extract their feature matrices; the extracted feature matrices are fed to D_1, and G_1 tries to make D_1 predict that every feature matrix comes from the labeled pool. The purpose of D_1 is to distinguish whether an input feature matrix comes from x_L, outputting the probability that it does. The objective function of the adversarial training between G_1 and D_1 is:
$$\min_{G_1}\max_{D_1}\ \mathbb{E}_{x_L}\left[\log D_1(G_1(x_L))\right]+\mathbb{E}_{x_U}\left[\log\left(1-D_1(G_1(x_U))\right)\right]$$
where x_L and x_U denote labeled and unlabeled images respectively. At this stage only the parameters of G_1 and D_1 are updated. The purpose of representation learning is to give D_1 the ability to select the most informative samples.
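Numerically, this objective decomposes into the usual binary cross-entropy losses of a GAN game in which D_1 plays "labeled vs. unlabeled". A minimal sketch with illustrative (not trained) discriminator outputs:

```python
import numpy as np

def bce(p: np.ndarray, target: float) -> np.ndarray:
    """Binary cross-entropy against a constant 0/1 target."""
    return -(target * np.log(p) + (1 - target) * np.log(1 - p))

# D1's output: probability that a feature matrix produced by G1 comes
# from the labeled pool (illustrative values, not real network outputs).
d1_on_labeled = np.array([0.9, 0.8])
d1_on_unlabeled = np.array([0.3, 0.2])

# D1 maximizes log D1(G1(x_L)) + log(1 - D1(G1(x_U))), i.e. minimizes
# BCE with target 1 on labeled features and target 0 on unlabeled ones.
d1_loss = bce(d1_on_labeled, 1.0).mean() + bce(d1_on_unlabeled, 0.0).mean()

# G1 tries to make D1 answer "labeled" for unlabeled features (target 1).
g1_loss = bce(d1_on_unlabeled, 1.0).mean()
```

With these values D_1 already classifies the pools fairly well, so G_1's loss is the larger of the two; in training the two losses are minimized alternately.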
Step 1.2: perform adversarial generation with labeled-pool and unlabeled-pool images, training the second half G_2 of the generator G together with the discriminator D_2. The goal of G is to generate near-real images, trying to make D_2 predict that every input image is real; the goal of D_2 is to distinguish real images from generated ones. Specifically, the adversarial generation training proceeds as follows:
Step 1.2.1: G takes a real sample as input and outputs a generated, reconstructed sample. To guarantee a difference between the generated image and the original, the invention introduces a convolution kernel of size 1 x 1 at the head of G_2. The weights of this kernel are random values in [0.95, 1.05] and do not participate in parameter updates, ensuring that the features of the generated image are not identical to those of the original.
Step 1.2.2: d2And taking the real sample or the generated sample as an input, and outputting the probability that the input sample is the real sample. Design D of the present invention2The purpose of (a) is to guide G to generate near-real images, namely: d2With true or generated images as input, and then outputting the probability of a true imageTo guide the training process of G. The invention introduces Wasserstein distance into the original objective function, and the whole objective function is as follows:
$$\min_{G}\max_{D_2}\ \mathbb{E}_{x_r}\left[f(x_r)\right]-\mathbb{E}_{x_g}\left[f(x_g)\right]$$
where x_r denotes a real image sampled from the full sample pools, G(x_r), i.e. x_g, is the generated image, and f(x) is the discriminator function, which must satisfy a Lipschitz constraint. The invention uses the matrix spectral norm to make D_2 satisfy the Lipschitz constraint globally. The spectral norm is defined as:
$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert \delta\rVert_2}$$
where σ(W) denotes the spectral norm of the weight matrix, x denotes the layer's input vector, and δ denotes a change in x. In the adversarial generation stage, the invention fixes the parameters of G_1 and updates only the parameters of G_2 and D_2.
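The spectral norm σ(W) is the largest singular value of W; a common way to estimate it, used by spectral normalization, is power iteration. A minimal sketch:

```python
import numpy as np

def spectral_norm(W: np.ndarray, n_iter: int = 100) -> float:
    """Estimate sigma(W) = max_{d != 0} ||W d|| / ||d|| by power
    iteration on W^T W, as done in spectral normalization."""
    v = np.ones(W.shape[1])
    for _ in range(n_iter):
        v = W.T @ (W @ v)          # one power-iteration step
        v /= np.linalg.norm(v)     # renormalize to avoid overflow
    return float(np.linalg.norm(W @ v))

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma = spectral_norm(W)   # largest singular value, here 3.0
W_sn = W / sigma           # the normalized weight is 1-Lipschitz
```

Dividing each layer's weights by their spectral norm is what bounds the Lipschitz constant of the whole discriminator.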
Step 2 comprises the following steps:
Step 2.1: use the converged G_1 and D_1 to sample from X_C the sample set x_s carrying the most information, and manually annotate the unlabeled images in x_s. Manual annotation means adding category labels to x_s by hand.
Step 2.2: use the converged G to reconstruct the sampled set x_s, obtaining the generated images x_g, and give x_g the same labels as x_s.
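The selection rule in step 2.1 can plausibly be read as VAAL-style scoring: candidates whose features D_1 judges least likely to come from the labeled pool are the least represented by the current labels and hence the most informative. The exact score is not spelled out here, so the rule below is an assumption:

```python
import numpy as np

def select_informative(d1_scores: np.ndarray, budget: int) -> np.ndarray:
    """d1_scores[i]: D1's probability that candidate i's feature matrix
    comes from the labeled pool. The lowest-scoring candidates are taken
    as the most informative sample set x_s (assumed VAAL-style rule)."""
    return np.argsort(d1_scores)[:budget]

scores = np.array([0.80, 0.10, 0.50, 0.05])  # illustrative D1 outputs
x_s = select_informative(scores, budget=2)   # indices 3 and 1
```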
Step 3 comprises the following steps:
Step 3.1: transfer the sampled images x_s from the candidate pool to the labeled pool, add the generated images x_g to the candidate pool, and retrain the sampling model D_1 on the updated candidate and labeled pools, so that D_1 tracks changes in the labeled pool and the candidate pool in real time.
Step 3.2: train the task model on the updated labeled pool.
Step 3.3: repeat steps 2 and 3 until the performance of the task model meets the expected standard.
Beneficial effects of the invention: current deep-learning-based image classification requires large-scale labeled samples, while in practice labeling cost can be prohibitively high and large-scale acquisition may be impossible. The invention designs an active learning method that combines the pool-based and synthesis-based approaches: it introduces an adversarial generation mechanism into the pool-based method, gives the model generative capability, and forms within the model two generative adversarial networks, used respectively for image sampling and image generation. By adding reconstructed samples to the candidate pool, the invention maintains the number of samples in that pool, so the candidate pool can continuously provide information-rich samples to the task model and the samples are fully utilized. The invention also introduces the concept of "synchronous updating", so the sampling model is updated in step with the sampling process, ensuring that each sampling round selects the samples carrying the most information at the current stage.
Description of the drawings:
FIG. 1 is a flowchart of the active learning method based on dual generative adversarial networks.
FIG. 2 is a structural diagram of the model training phase.
FIG. 3 is a structural diagram of image sampling and image generation.
FIG. 4 shows the sampling results of an application example of the invention on three data sets.
FIG. 5 compares the labeling cost of an application example of the invention with the baseline method on three data sets.
FIG. 6 compares the performance of the task model based on the invention with that of the task model based on the baseline method on three data sets.
Detailed description of the embodiments:
the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
FIG. 1 is a schematic flowchart of the implementation of the invention, FIG. 2 is a structural diagram of the model training phase, and FIG. 3 is a structural diagram of image sampling and image generation. As shown in FIG. 1, the method comprises the following steps:
Step 1: train a model with images from the labeled pool and the unlabeled pool, where training comprises two parts, representation learning and adversarial generation; the model comprises a generator G and two discriminators D_1 and D_2, which together form two generative adversarial networks, as shown in FIG. 2;
Step 2: use the converged model to sample images from the candidate pool X_C, then manually annotate the sampled images and generate new images from them; the candidate pool X_C is initialized with the unlabeled pool X_U; the sampling is carried out by D_1 and the image generation by G;
Step 3: transfer the sampled images from the candidate pool to the labeled pool, add the generated images to the candidate pool, and retrain the sampling model D_1 on the updated candidate and labeled pools; finally, train the task model, a generic image classification model, on the updated labeled pool.
Step 1 comprises the following steps:
Step 1.1: perform representation learning with labeled-pool and unlabeled-pool images, training the first half G_1 of the generator G together with the discriminator D_1. The purpose of G_1 is to map the labeled-pool images x_L and the unlabeled-pool images x_U into the same feature space and extract their feature matrices; the extracted feature matrices are fed to D_1, and G_1 tries to make D_1 predict that every feature matrix comes from the labeled pool. The purpose of D_1 is to distinguish whether an input feature matrix comes from x_L, outputting the probability that it does. The objective function of the adversarial training between G_1 and D_1 is:

$$\min_{G_1}\max_{D_1}\ \mathbb{E}_{x_L}\left[\log D_1(G_1(x_L))\right]+\mathbb{E}_{x_U}\left[\log\left(1-D_1(G_1(x_U))\right)\right]$$

where x_L and x_U denote labeled and unlabeled images respectively. At this stage only the parameters of G_1 and D_1 are updated. The purpose of representation learning is to give D_1 the ability to select the most informative unlabeled data.
Step 1.2: perform adversarial generation with labeled-pool and unlabeled-pool images, training the second half G_2 of the generator G together with the discriminator D_2. The goal of G is to generate near-real images, trying to make D_2 predict that every input image is real; the goal of D_2 is to distinguish real images from generated ones. Specifically, the adversarial generation training proceeds as follows:
Step 1.2.1: G takes a real sample as input and outputs a generated, reconstructed sample. To guarantee a difference between the generated image and the original, the invention introduces a convolution kernel of size 1 x 1 at the head of G_2, as shown in FIG. 2. The weights of this kernel are random values in [0.95, 1.05] and do not participate in parameter updates, ensuring that the features of the generated image are not identical to those of the original.
Step 1.2.2: D_2 takes a real or generated sample as input and outputs the probability that the input is real. The purpose of D_2 is to guide G toward generating near-real images: D_2 takes a real or generated image as input and outputs the probability that it is real, which guides the training of G. The invention introduces the Wasserstein distance into the original objective function; the overall objective is:

$$\min_{G}\max_{D_2}\ \mathbb{E}_{x_r}\left[f(x_r)\right]-\mathbb{E}_{x_g}\left[f(x_g)\right]$$

where x_r denotes a real image sampled from the full sample pools, G(x_r), i.e. x_g, is the generated image, and f(x) is the discriminator function, which must satisfy a Lipschitz constraint. The invention uses the matrix spectral norm to make D_2 satisfy the Lipschitz constraint globally. The spectral norm is defined as:

$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert \delta\rVert_2}$$

where σ(W) denotes the spectral norm of the weight matrix, x denotes the layer's input vector, and δ denotes a change in x. In the adversarial generation stage, the invention fixes the parameters of G_1 and updates only the parameters of G_2 and D_2.
Step 2 comprises the following steps:
Step 2.1: use the converged G_1 and D_1 to sample from X_C the sample set x_s carrying the most information, and manually annotate the unlabeled images in x_s. Manual annotation means adding category labels to x_s by hand.
Step 2.2: use the converged G to reconstruct the sampled set x_s, obtaining the generated images x_g, and give x_g the same labels as x_s.
Step 3 comprises the following steps:
Step 3.1: transfer the sampled images x_s from the candidate pool to the labeled pool, add the generated images x_g to the candidate pool, and retrain the sampling model D_1 on the updated candidate and labeled pools, so that D_1 tracks changes in the labeled pool and the candidate pool in real time.
Step 3.2: train the task model on the updated labeled pool.
Step 3.3: repeat steps 2 and 3 until the performance of the task model meets the expected standard.
As shown in FIG. 4, in an application example of the invention on the CIFAR10 (a), CIFAR100 (b), and self-ImageNet (c) data sets, the images marked with red boxes in the sampling results are generated images. On all three data sets, generated images account for more than 10% of the sampled images.
As shown in FIG. 5, the labeling cost of an application example of the invention is compared with the baseline method on the CIFAR10 (a), CIFAR100 (b), and self-ImageNet (c) data sets.
As shown in FIG. 6, the performance of the task model based on the invention is compared with that of the task model based on the baseline method on the CIFAR10 (a), CIFAR100 (b), and self-ImageNet (c) data sets.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
While the invention has been described with reference to specific embodiments and procedures, those skilled in the art will understand that the invention is not limited thereto, and that various changes and substitutions may be made without departing from its spirit. The scope of the invention is limited only by the appended claims.
The embodiments described herein with reference to the accompanying drawings are exemplary only and should not be taken as limiting the invention.

Claims (4)

1. An active learning method based on dual generative adversarial networks, characterized by comprising the following steps:
Step 1: train a model with images from the labeled pool and the unlabeled pool, where training comprises two parts, representation learning and adversarial generation;
Step 2: use the converged model to sample images from the candidate pool, then manually annotate the sampled images and generate new images from them;
Step 3: transfer the sampled images from the candidate pool to the labeled pool, add the generated images to the candidate pool, retrain the sampling model on the updated candidate and labeled pools, and finally train the task model on the updated labeled pool.
2. The active learning method based on dual generative adversarial networks of claim 1, wherein step 1 comprises the following steps:
Step 1.1: perform representation learning with labeled-pool and unlabeled-pool images, training the first half G_1 of the generator G together with the discriminator D_1. The purpose of G_1 is to map the labeled-pool images x_L and the unlabeled-pool images x_U into the same feature space and extract their feature matrices; the extracted feature matrices are fed to D_1, and G_1 tries to make D_1 predict that every feature matrix comes from the labeled pool. The purpose of D_1 is to distinguish whether an input feature matrix comes from x_L, outputting the probability that it does. The objective function of the adversarial training between G_1 and D_1 is:

$$\min_{G_1}\max_{D_1}\ \mathbb{E}_{x_L}\left[\log D_1(G_1(x_L))\right]+\mathbb{E}_{x_U}\left[\log\left(1-D_1(G_1(x_U))\right)\right]$$

where x_L and x_U denote labeled and unlabeled images respectively. At this stage only the parameters of G_1 and D_1 are updated.
Step 1.2: perform adversarial generation with labeled-pool and unlabeled-pool images, training the second half G_2 of the generator G together with the discriminator D_2. The goal of G is to generate near-real images, trying to make D_2 predict that every input image is real; the goal of D_2 is to distinguish real images from generated ones. Specifically, the adversarial generation training proceeds as follows:
Step 1.2.1: G takes a real sample as input and outputs a generated, reconstructed sample. To guarantee a difference between the generated image and the original, the invention introduces a convolution kernel of size 1 x 1 at the head of G_2. The weights of this kernel are random values in [0.95, 1.05] and do not participate in parameter updates, ensuring that the features of the generated image are not identical to those of the original.
Step 1.2.2: D_2 takes a real or generated sample as input and outputs the probability that the input is real. The purpose of D_2 is to guide G toward generating near-real images: D_2 takes a real or generated image as input and outputs the probability that it is real, which guides the training of G. The invention introduces the Wasserstein distance into the original objective function; the overall objective is:

$$\min_{G}\max_{D_2}\ \mathbb{E}_{x_r}\left[f(x_r)\right]-\mathbb{E}_{x_g}\left[f(x_g)\right]$$

where x_r denotes a real image sampled from the full sample pools, G(x_r), i.e. x_g, is the generated image, and f(x) is the discriminator function, which must satisfy a Lipschitz constraint. The invention uses the matrix spectral norm to make D_2 satisfy the Lipschitz constraint globally. The spectral norm is defined as:

$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert \delta\rVert_2}$$

where σ(W) denotes the spectral norm of the weight matrix, x denotes the layer's input vector, and δ denotes a change in x. In the adversarial generation stage, the invention fixes the parameters of G_1 and updates only the parameters of G_2 and D_2.
3. The active learning method based on dual generative adversarial networks of claim 1, wherein step 2 comprises the following steps:
Step 2.1: use the converged G_1 and D_1 to sample from X_C the sample set x_s carrying the most information, and manually annotate the unlabeled images in x_s. Manual annotation means adding category labels to x_s by hand.
Step 2.2: use the converged G to reconstruct the sampled set x_s, obtaining the generated images x_g, and give x_g the same labels as x_s.
4. The active learning method based on dual generative adversarial networks of claim 1, wherein step 3 comprises the following steps:
Step 3.1: transfer the sampled images x_s from the candidate pool to the labeled pool, add the generated images x_g to the candidate pool, and retrain the sampling model D_1 on the updated candidate and labeled pools, so that D_1 tracks changes in the labeled pool and the candidate pool in real time.
Step 3.2: train the task model on the updated labeled pool.
Step 3.3: repeat steps 2 and 3 until the performance of the task model meets the expected standard.
CN202010779759.XA 2020-08-05 2020-08-05 Active learning method based on dual-generation countermeasure network Active CN111950619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010779759.XA CN111950619B (en) 2020-08-05 2020-08-05 Active learning method based on dual-generation countermeasure network

Publications (2)

Publication Number Publication Date
CN111950619A true CN111950619A (en) 2020-11-17
CN111950619B CN111950619B (en) 2022-09-09

Family

ID=73338012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010779759.XA Active CN111950619B (en) 2020-08-05 2020-08-05 Active learning method based on dual-generation countermeasure network

Country Status (1)

Country Link
CN (1) CN111950619B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257195A (en) * 2018-02-23 2018-07-06 深圳市唯特视科技有限公司 A kind of facial expression synthetic method that generation confrontation network is compared based on geometry
CN108921123A (en) * 2018-07-17 2018-11-30 重庆科技学院 A kind of face identification method based on double data enhancing
CN109544442A (en) * 2018-11-12 2019-03-29 南京邮电大学 The image local Style Transfer method of production confrontation network based on dual confrontation
CN110599411A (en) * 2019-08-08 2019-12-20 中国地质大学(武汉) Image restoration method and system based on condition generation countermeasure network
CN110930418A (en) * 2019-11-27 2020-03-27 江西理工大学 Retina blood vessel segmentation method fusing W-net and conditional generation confrontation network
CN111028146A (en) * 2019-11-06 2020-04-17 武汉理工大学 Image super-resolution method for generating countermeasure network based on double discriminators
US20200134876A1 (en) * 2018-10-30 2020-04-30 International Business Machines Corporation Generating simulated body parts for images
CN111476294A (en) * 2020-04-07 2020-07-31 南昌航空大学 Zero sample image identification method and system based on generation countermeasure network
CN111881716A (en) * 2020-06-05 2020-11-03 东北林业大学 Pedestrian re-identification method based on multi-view-angle generation countermeasure network


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BO YU et al.: "Combining neural networks and semantic feature space for email classification", 《KNOWLEDGE-BASED SYSTEMS》, 31 July 2009 (2009-07-31), pages 376 - 381, XP026148948, DOI: 10.1016/j.knosys.2009.02.009 *
KANGLIN LIU et al.: "Lipschitz constrained GANs via boundedness and continuity", 《NEURAL COMPUTING AND APPLICATIONS (2020)》, 24 May 2020 (2020-05-24), pages 18271 - 18283 *
L. ZHANG et al.: "GAN2C: Information Completion GAN with Dual Consistency Constraints", 《2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)》, 15 October 2018 (2018-10-15), pages 1 - 8 *
ZHANG Yangyi et al.: "GAN image super-resolution reconstruction with improved residual blocks and adversarial loss", 《JOURNAL OF HARBIN INSTITUTE OF TECHNOLOGY》, vol. 51, no. 11, 9 August 2019 (2019-08-09), pages 128 - 137 *
JIA Yufeng et al.: "Self-attention generative adversarial network under conditional constraints", 《JOURNAL OF XIDIAN UNIVERSITY》, vol. 46, no. 06, 25 September 2019 (2019-09-25), pages 163 - 170 *
HE Gongbo: "Facial expression generation based on multi-domain mapping generative adversarial networks", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》, no. 12, 15 December 2019 (2019-12-15), pages 138 - 379 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420866A (en) * 2021-06-23 2021-09-21 新疆大学 Score prediction method based on dual generative adversarial networks
CN113420866B (en) * 2021-06-23 2022-10-11 新疆大学 Score prediction method based on dual generative adversarial networks
CN114627390A (en) * 2022-05-12 2022-06-14 北京数慧时空信息技术有限公司 Remote sensing sample labeling method based on improved active learning
WO2023216725A1 (en) * 2022-05-12 2023-11-16 北京数慧时空信息技术有限公司 Remote sensing sample labeling method based on improved active learning

Also Published As

Publication number Publication date
CN111950619B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN113971209B (en) Unsupervised cross-modal retrieval method based on attention-mechanism enhancement
CN111950619B (en) Active learning method based on dual-generation countermeasure network
CN114493014B (en) Multivariate time series prediction method, system, computer product and storage medium
CN113963165B (en) Few-shot image classification method and system based on self-supervised learning
CN112434628B (en) Few-shot image classification method based on active learning and collaborative representation
CN116030302A (en) Long-tail image recognition method based on representation data augmentation and loss rebalancing
CN116258978A (en) Object detection method for weakly annotated remote sensing images in nature reserves
Kong et al. 3lpr: A three-stage label propagation and reassignment framework for class-imbalanced semi-supervised learning
Yao et al. ModeRNN: Harnessing spatiotemporal mode collapse in unsupervised predictive learning
CN114329124A (en) Semi-supervised few-shot classification method based on gradient re-optimization
CN117036862B (en) Image generation method based on a Gaussian mixture variational autoencoder
CN113836319A (en) Knowledge graph completion method and system fusing entity neighbors
Zhang et al. Improving the generalization performance of deep networks by dual pattern learning with adversarial adaptation
Ji et al. Text-to-image generation via semi-supervised training
CN116681921A (en) Target labeling method and system based on multi-feature loss function fusion
CN114821184B (en) Long-tail image classification method and system based on balanced complementary entropy
CN116310621A (en) Few-shot image recognition method based on feature library construction
CN113378942B (en) Few-shot image classification method based on multi-head feature collaboration
CN115936171A (en) Renewable energy output prediction method
Xue et al. Fast and unsupervised neural architecture evolution for visual representation learning
CN109919200B (en) Image classification method based on tensor decomposition and domain adaptation
Xie et al. Adapt then Generalize: A Simple Two-Stage Framework for Semi-Supervised Domain Generalization
Zhang et al. Dynamic Nonlinear Mixup with Distance-based Sample Selection
CN116361476B (en) Knowledge graph negative sample synthesis method based on interpolation
Wei et al. An innovative multi-factor prediction algorithm construction and importance analysis model based on GBDT-LightGBM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant