CN111950619B - Active learning method based on a dual generative adversarial network - Google Patents

Active learning method based on a dual generative adversarial network

Info

Publication number
CN111950619B
CN111950619B (application CN202010779759.XA)
Authority
CN
China
Prior art keywords
pool
image
training
model
sampling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010779759.XA
Other languages
Chinese (zh)
Other versions
CN111950619A (en)
Inventor
郭继峰
庞志奇
李�禾
李星
费禹潇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202010779759.XA priority Critical patent/CN111950619B/en
Publication of CN111950619A publication Critical patent/CN111950619A/en
Application granted granted Critical
Publication of CN111950619B publication Critical patent/CN111950619B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an active learning method based on a dual generative adversarial network. The method comprises the following steps. Step 1: train a model with labeled-pool and unlabeled-pool images, the training consisting of two parts, representation learning and generative adversarial training. Step 2: sample images from the candidate pool with the converged model, then manually annotate the sampled images and generate new images from them. Step 3: move the sampled images from the candidate pool to the labeled pool, add the generated images to the candidate pool, and retrain the sampling model on the updated candidate pool and labeled pool. The invention introduces a generative adversarial mechanism into the pool-based approach, giving the model generative capability and forming two generative adversarial network pairs within the model. The invention also introduces the concept of "synchronous update", so that the sampling model is updated in step with the sampling process, ensuring that each sampling round selects the most informative samples available at the current stage.

Description

Active learning method based on a dual generative adversarial network
Technical field:
the invention relates to the field of active learning, and in particular to an active learning method based on a dual generative adversarial network.
Background art:
Classification tasks based on deep learning typically require large-scale labeled samples for training, yet in practice the cost of labeling can be prohibitively high, and labels may be impossible to obtain at scale. To remedy this drawback, researchers proposed active learning. The purpose of active learning is to select or generate, from an unlabeled data set, the samples most beneficial to model training, then manually label the selected samples and add them to the training set, so that the task model reaches higher performance at lower labeling cost. Practice shows that, for image classification tasks, active learning can effectively reduce the labeling cost of samples while preserving model performance.
Currently, mainstream active learning algorithms fall roughly into two categories: pool-based methods and synthesis-based methods. The idea of a pool-based method is to use a fixed sampling strategy to select the most informative samples from a sample pool. Depending on the sampling strategy, pool-based methods can be subdivided into uncertainty-based and representation-based approaches. Uncertainty-based methods are numerous; for example, uncertainty can be estimated by a probabilistic model in a Bayesian framework, such as a Gaussian process or a Bayesian neural network. Uncertainty heuristics from classical non-Bayesian active learning, such as distance to the decision boundary and conditional entropy, are also widely studied. Representation-based methods instead select samples by increasing the diversity of a given batch.
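The conditional-entropy heuristic mentioned above can be made concrete with a short sketch (illustrative code, not part of the patent; the function name and toy probabilities are invented):

```python
import numpy as np

def entropy_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k samples whose predicted class distribution has the
    highest conditional entropy (a classic uncertainty heuristic)."""
    eps = 1e-12                                   # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]          # most uncertain first

# toy softmax outputs for 4 unlabeled samples over 3 classes
p = np.array([
    [0.98, 0.01, 0.01],   # confident prediction -> low entropy
    [0.34, 0.33, 0.33],   # near-uniform -> highest entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
print(entropy_sampling(p, 2))  # -> [1 3]
```

A batch-mode sampler would typically combine such a score with a diversity term, which is what the representation-based methods described above add.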
Although pool-based active learning has greatly reduced the labeling cost of samples compared with conventional methods, it has a common problem: samples drawn from the unlabeled pool are annotated, sent to the labeled pool, and no longer participate in subsequent sampling. Because the number of samples in the unlabeled pool is finite and the algorithm samples by information content, the information carried per remaining sample in the unlabeled pool necessarily decreases as the number of sampling rounds grows, which slows the rate of performance improvement of the task model.
Synthesis-based methods assist model training by actively synthesizing information-rich samples. Of pioneering significance is GAAL. Unlike pool-based methods, GAAL aims to generate new samples useful to the model rather than to select the most informative sample already in the pool; in the ideal case, a GAAL-generated sample carries more information than any existing sample. However, because the acquisition function of GAAL must be easy to compute and optimize, the method has certain limitations in active learning applications.
Summary of the invention:
the invention aims to overcome the defects of conventional pool-based active learning methods and provides an active learning method based on a dual generative adversarial network, addressing the excessively high labeling cost of data sets in deep-learning-based image classification tasks.
An active learning method based on a dual generative adversarial network is characterized by comprising the following steps:
step 1: training a model with labeled-pool images and unlabeled-pool images, the training comprising two parts, representation learning and generative adversarial training; the model comprises a generator G and two discriminators D1 and D2, which form two generative adversarial network pairs;
step 2: sampling the images in the candidate pool X_C with the converged model, and then manually annotating the sampled images and performing image generation; the candidate pool X_C is initialized with the unlabeled pool X_U, the sampling is performed by D1, and the image generation is performed by G;
step 3: transferring the sampled images from the candidate pool to the labeled pool, adding the generated images to the candidate pool, and training the sampling model D1 on the updated candidate pool and labeled pool; finally, the task model, a generic image classification model, is trained on the updated labeled pool.
Step 1 comprises the following steps:
step 1.1: performing representation learning with the labeled-pool and unlabeled-pool images, training the first half G1 of the generator G together with the discriminator D1. The purpose of G1 is to map the images x_L in the labeled pool and the images x_U in the unlabeled pool into the same feature space and to extract their feature matrices; the extracted feature matrices are input to D1, in an attempt to make D1 predict that all feature matrices come from the labeled pool. The purpose of D1 is to judge whether an input feature matrix comes from x_L and to output the probability that it does. The objective function of the adversarial training of G1 and D1 is:

$$\min_{G_1}\max_{D_1}\; \mathbb{E}_{x_L}\big[\log D_1(G_1(x_L))\big]+\mathbb{E}_{x_U}\big[\log\big(1-D_1(G_1(x_U))\big)\big]$$

where x_L and x_U denote a labeled image and an unlabeled image, respectively. At this stage, only the parameters of G1 and D1 are updated. The purpose of the representation learning is to give D1 the ability to select the most informative samples.
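Numerically, the adversarial objective of this representation-learning stage can be sketched as two complementary losses. This is a hedged, VAAL-style reading of the patent's description; the function names and toy scores are assumptions, not the patent's own code:

```python
import numpy as np

def d1_loss(scores_labeled: np.ndarray, scores_unlabeled: np.ndarray) -> float:
    """D1's loss: reward high scores on labeled-pool features and low
    scores on unlabeled-pool features (score = P(feature is from x_L))."""
    return float(-(np.log(scores_labeled).mean()
                   + np.log(1.0 - scores_unlabeled).mean()))

def g1_loss(scores_unlabeled: np.ndarray) -> float:
    """G1's loss: push D1 to score unlabeled features as if labeled."""
    return float(-np.log(scores_unlabeled).mean())

# toy D1 outputs: well-separated scores give the discriminator a small loss
print(d1_loss(np.array([0.9, 0.8]), np.array([0.2, 0.1])))
print(g1_loss(np.array([0.2, 0.1])))
```

Alternating gradient steps on these two losses is the standard way such a min-max objective is optimized in practice.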
step 1.2: performing generative adversarial training with the labeled-pool and unlabeled-pool images, training the second half G2 of the generator G together with the discriminator D2. The goal of G is to generate near-real images, in an attempt to make D2 predict that all input images are real; the goal of D2 is to distinguish real images from generated images. Specifically, the generative adversarial training process is as follows:
step 1.2.1: G takes a real sample as input and outputs the generated, reconstructed sample. To guarantee a difference between the generated image and the original image, the invention introduces a convolution kernel of size 1 × 1 at the head of G2. The weights of this kernel are random values in [0.95, 1.05] and do not participate in the parameter updating process, ensuring that the features of the generated image and the original image are not exactly the same.
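The frozen 1 × 1 convolution of step 1.2.1 can be sketched as follows, under the simplifying assumption that it acts as a per-channel (depthwise) scaling; the function name and array shapes are illustrative, not taken from the patent:

```python
import numpy as np

def fixed_conv1x1(features: np.ndarray, rng=np.random.default_rng(0)) -> np.ndarray:
    """Apply a 1x1 convolution whose per-channel weights are drawn from
    [0.95, 1.05] and excluded from training, so the reconstruction is a
    slight, non-learnable perturbation of the input features."""
    c = features.shape[0]                          # channels-first (C, H, W)
    w = rng.uniform(0.95, 1.05, size=(c, 1, 1))    # frozen random weights
    return features * w                            # depthwise 1x1 conv = per-channel scale

x = np.ones((3, 8, 8))
y = fixed_conv1x1(x)
print(np.abs(y - x).max())  # prints a value of at most 0.05
```

Because the weights never update, gradients still flow through G2 unchanged while the output is expected to differ slightly from the input, which is the stated purpose of the kernel.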
Step 1.2.2: d 2 And taking the real sample or the generated sample as an input, and outputting the probability that the input sample is the real sample. Design D of the present invention 2 The purpose of (a) is to guide G to generate near-real images, namely: d 2 And taking the real image or the generated image as an input, and then outputting the probability of the real image to guide the training process of G. The invention introduces Wasserstein distance into the original objective function, and the whole objective function is as follows:
Figure BDA0002619775850000032
wherein x is r Representing the real image sampled from all the sample pools, G (x) r ) I.e. x g And f (x) is the discriminator function that needs to satisfy the Lipschitz constraint. The invention uses the matrix spectrum norm to make D 2 The Lipschitz constraint is satisfied on a global scale. Wherein the physical meaning of the spectral norm is defined as:
$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W(x+\delta)-Wx\rVert_2}{\lVert\delta\rVert_2}=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert\delta\rVert_2}$$

where σ(W) denotes the spectral norm of the weight matrix, x denotes the input vector of the layer, and δ denotes a change in x. In the generative adversarial stage, the invention fixes the parameters of G1 and updates only the parameters of G2 and D2.
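The spectral norm used above is the largest singular value of the weight matrix; it can be estimated by power iteration, as in this illustrative sketch (not the patent's own code):

```python
import numpy as np

def spectral_norm(W: np.ndarray, n_iter: int = 50) -> float:
    """Estimate the largest singular value of W by power iteration,
    the quantity used in spectral normalization to bound a layer's
    Lipschitz constant."""
    u = np.random.default_rng(1).normal(size=W.shape[1])
    for _ in range(n_iter):
        v = W @ u
        v = v / np.linalg.norm(v)    # left singular direction estimate
        u = W.T @ v
        u = u / np.linalg.norm(u)    # right singular direction estimate
    return float(v @ W @ u)

W = np.array([[3.0, 0.0], [0.0, 1.0]])
print(spectral_norm(W))  # ≈ 3.0, the largest singular value of W
```

Dividing each weight matrix by its spectral norm yields a layer whose Lipschitz constant is at most 1, which is how a discriminator such as D2 can be made to satisfy the constraint globally.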
Step 2 comprises the following steps:
step 2.1: using the combination of the converged G1 and D1 to sample from X_C the most informative sample set x_s, and manually annotating the unlabeled images in x_s. The manual annotation adds category labels to x_s by hand.
step 2.2: using the converged G and the sampled x_s to reconstruct x_s, obtaining the generated images x_g, and giving x_g the same labels as x_s.
Step 3 comprises the following steps:
step 3.1: transferring the sampled images x_s from the candidate pool to the labeled pool and adding the generated images x_g to the candidate pool, then performing update training of the sampling model D1 on the updated candidate pool and labeled pool, so that D1 tracks the real-time changes of the labeled pool and the candidate pool.
step 3.2: training the task model on the updated labeled pool.
step 3.3: repeating the processes of step 2 and step 3 until the performance of the task model meets the expected standard.
The beneficial effects of the invention are as follows. Current deep-learning-based image classification requires large-scale labeled samples, whose labeling cost in practice can be prohibitively high, and which may even be impossible to obtain at scale. The invention designs an active learning method combining the pool-based and synthesis-based approaches: it introduces a generative adversarial mechanism into the pool-based method, gives the model generative capability, and forms within the model two generative adversarial network pairs, used for image sampling and image generation respectively. By adding reconstructed samples to the candidate pool, the invention maintains the number of samples in that pool, so the candidate pool can continuously supply information-rich samples to the task model and the samples are fully exploited. The invention also introduces the concept of "synchronous update", so that the sampling model is updated in step with the sampling process, ensuring that each sampling round selects the most informative samples available at the current stage.
Description of the drawings:
FIG. 1 is a flowchart of the active learning method based on a dual generative adversarial network.
FIG. 2 is a block diagram of the model training phase.
FIG. 3 is a block diagram of image sampling and image generation.
Fig. 4 is a graph of the sampling results of an example of the application of the invention on three data sets.
FIG. 5 is a graph comparing the labeling costs of an example application of the present invention with the benchmark method over three data sets.
FIG. 6 is a graph comparing the performance of the present invention-based task model with that of the baseline-based task model over three data sets.
Detailed description of the embodiments:
the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of the specific flow of the invention, fig. 2 is a structural diagram of the model training phase, and fig. 3 is a structural diagram of image sampling and image generation. As shown in fig. 1, the method comprises the following steps:
step 1: training a model with labeled-pool images and unlabeled-pool images, the training comprising representation learning and generative adversarial training; the model comprises a generator G and two discriminators D1 and D2, which form two generative adversarial network pairs, as shown in fig. 2;
step 2: sampling the images in the candidate pool X_C with the converged model, and then manually annotating the sampled images and performing image generation; the candidate pool X_C is initialized with the unlabeled pool X_U, the sampling is performed by D1, and the image generation is performed by G;
step 3: transferring the sampled images from the candidate pool to the labeled pool, adding the generated images to the candidate pool, and training the sampling model D1 on the updated candidate pool and labeled pool; finally, the task model, a generic image classification model, is trained on the updated labeled pool.
Step 1 comprises the following steps:
step 1.1: performing representation learning with the labeled-pool and unlabeled-pool images, training the first half G1 of the generator G together with the discriminator D1. The purpose of G1 is to map the images x_L in the labeled pool and the images x_U in the unlabeled pool into the same feature space and to extract their feature matrices; the extracted feature matrices are input to D1, in an attempt to make D1 predict that all feature matrices come from the labeled pool. The purpose of D1 is to judge whether an input feature matrix comes from x_L and to output the probability that it does. The objective function of the adversarial training of G1 and D1 is:

$$\min_{G_1}\max_{D_1}\; \mathbb{E}_{x_L}\big[\log D_1(G_1(x_L))\big]+\mathbb{E}_{x_U}\big[\log\big(1-D_1(G_1(x_U))\big)\big]$$

where x_L and x_U denote a labeled image and an unlabeled image, respectively. At this stage, only the parameters of G1 and D1 are updated. The purpose of the representation learning is to give D1 the ability to select the most informative samples.
Step 1.2: generating a countermeasure using the labeled pool image and the unlabeled pool image for the second half G of the generator G 2 And a discriminator D 2 Training is performed, where the goal of G is to generate near-realistic images in an attempt to let D stand 2 Predicting that all input images are real images; d 2 The object of (1) is to distinguish between real images and generated images. Specifically, the generation confrontation training process is as follows:
step 1.2.1: and G, taking the real sample as an input, and outputting the generated reconstructed sample. To ensure the difference between the generated image and the original image, the invention is implemented in G 2 The header introduces a convolution kernel of size 1 x 1 as shown in fig. 2. Wherein the weights of the convolution kernels are [0.95,1.05 ]]And the random value does not participate in the parameter updating process, so as to ensure that the characteristics of the generated image and the original image are not completely the same.
Step 1.2.2: d 2 And taking the real sample or the generated sample as an input, and outputting the probability that the input sample is the real sample. Design D of the present invention 2 The purpose of (a) is to guide G to generate near-real images, namely: d 2 And taking the real image or the generated image as an input, and then outputting the probability of the real image to guide the training process of G. The invention introduces Wasserstein distance into the original objective function, and the whole objective function is as follows:
Figure BDA0002619775850000062
wherein x is r Representing the real image sampled from all the sample pools, G (x) r ) I.e. x g And f (x) is the discriminator function that needs to satisfy the Lipschitz constraint. The invention uses the matrix spectrum norm to make D 2 The Lipschitz constraint is satisfied on a global scale. Wherein the physical meaning of the spectral norm is defined as:
$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W(x+\delta)-Wx\rVert_2}{\lVert\delta\rVert_2}=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert\delta\rVert_2}$$

where σ(W) denotes the spectral norm of the weight matrix, x denotes the input vector of the layer, and δ denotes a change in x. In the generative adversarial stage, the invention fixes the parameters of G1 and updates only the parameters of G2 and D2.
Step 2 comprises the following steps:
step 2.1: using the combination of the converged G1 and D1 to sample from X_C the most informative sample set x_s, and manually annotating the unlabeled images in x_s. The manual annotation adds category labels to x_s by hand.
step 2.2: using the converged G and the sampled x_s to reconstruct x_s, obtaining the generated images x_g, and giving x_g the same labels as x_s.
Step 3 comprises the following steps:
step 3.1: transferring the sampled images x_s from the candidate pool to the labeled pool and adding the generated images x_g to the candidate pool, then performing update training of the sampling model D1 on the updated candidate pool and labeled pool, so that D1 tracks the real-time changes of the labeled pool and the candidate pool.
step 3.2: training the task model on the updated labeled pool.
step 3.3: repeating the processes of step 2 and step 3 until the performance of the task model meets the expected standard.
As shown in FIG. 4, in an application example of the invention, the images marked with red boxes are generated images; the figure shows the sampling results on the CIFAR10 (a), CIFAR100 (b) and self-ImageNet (c) data sets. On all three data sets, generated images account for more than 10% of the sampled images.
As shown in FIG. 5, the labeling cost of an application example of the invention is compared with that of the benchmark methods on the CIFAR10 (a), CIFAR100 (b) and self-ImageNet (c) data sets.
As shown in FIG. 6, the performance of the task model trained with the invention is compared with that of the task model trained with the baseline methods on the CIFAR10 (a), CIFAR100 (b) and self-ImageNet (c) data sets.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
While the invention has been described with reference to specific embodiments and processes, the scope of the invention is not limited thereto. The description is illustrative only, and it will be understood by those skilled in the art that various changes and modifications may be made to the embodiments without departing from the spirit of the invention. The scope of the invention is limited only by the appended claims.
The embodiments of the invention described herein are exemplary only and should not be taken as limiting the invention, which is described by reference to the accompanying drawings.

Claims (1)

1. An active learning method based on a dual generative adversarial network, characterized by comprising the following steps:
step 1: training a model with labeled-pool images and unlabeled-pool images, the model training comprising two parts, representation learning and generative adversarial training;
step 2: sampling the images in the candidate pool with the converged model, and then manually annotating the sampled images and performing image generation;
step 3: transferring the sampled images from the candidate pool to the labeled pool, adding the generated images to the candidate pool, training the sampling model on the updated candidate pool and labeled pool, and finally training the task model on the updated labeled pool;
the step 1 comprises the following steps:
step 1.1: performing characterization learning by using the labeled pool image and the unlabeled pool image to generate the first half part G of the generator G 1 And a discriminator D 1 Training is carried out; g 1 The purpose of (2) is to label the images x in the pool L And image x in unlabeled pool U Mapping to the same feature space, extracting the feature matrix of the image, and inputting the extracted feature matrix into D 1 Attempt to let D 1 Predicting that all feature matrices are from the mark pool; and D 1 To distinguish whether the input feature matrix comes from x L And output the feature matrix from x L The probability of (d); g 1 And D 1 The objective function for the confrontational training is:
Figure FDA0003779068490000011
wherein x L And x U Respectively representing a marked image and an unmarked image; at this stage, only for G 1 And D 1 Updating the parameters;
step 1.2: performing generative adversarial training with the labeled-pool and unlabeled-pool images, training the second half G2 of the generator G together with the discriminator D2; the goal of G is to generate near-real images in an attempt to make D2 predict that all input images are real; the goal of D2 is to distinguish real images from generated images; specifically, the generative adversarial training process is as follows:
step 1.2.1: G takes a real sample as input and outputs the generated, reconstructed sample; to guarantee a difference between the generated image and the original image, a convolution kernel of size 1 × 1 is introduced at the head of G2; the weights of this kernel are random values in [0.95, 1.05] and do not participate in the parameter updating process, ensuring that the features of the generated image and the original image are not exactly the same;
step 1.2.2: D2 takes a real sample or a generated sample as input and outputs the probability that the input sample is real; D2 is designed to guide G toward generating near-real images, namely: D2 takes a real image or a generated image as input and then outputs the probability that the image is real, so as to guide the training process of G; the Wasserstein distance is introduced into the original objective function, and the overall objective function is:

$$\min_{G}\max_{D_2}\; \mathbb{E}_{x_r}\big[f(x_r)\big]-\mathbb{E}_{x_g}\big[f(x_g)\big]$$

where x_r denotes a real image sampled from all sample pools, G(x_r) is x_g, and f(x) is the discriminator function, which must satisfy the Lipschitz constraint; the matrix spectral norm is used to make D2 satisfy the Lipschitz constraint globally, where the physical meaning of the spectral norm is defined as:

$$\sigma(W)=\max_{\delta\neq 0}\frac{\lVert W(x+\delta)-Wx\rVert_2}{\lVert\delta\rVert_2}=\max_{\delta\neq 0}\frac{\lVert W\delta\rVert_2}{\lVert\delta\rVert_2}$$

where σ(W) denotes the spectral norm of the weight matrix, x denotes the input vector, and δ denotes a change in x; in the generative adversarial stage, the parameters of G1 are fixed and only the parameters of G2 and D2 are updated;
the step 2 comprises the following steps:
step 2.1: using converged G 1 And D 1 In combination of X C The sampling in the middle obtains a sample set x with the most abundant information quantity s To x s Manually annotating the medium and non-label images; the manual annotation is x s Manually adding category labels;
step 2.2: x obtained by using converged G and sampling s To x s Reconstructing to obtain the generated image x g And giving the generated image x g And x s The same label;
the step 3 comprises the following steps:
step 3.1: sampling the obtained image x s Transferring from the candidate pool to the mark pool, and generating image x g Adding the sampling model D into a candidate pool, and pairing the sampling model D according to the updated candidate pool and the updated mark pool 1 Performing update training to D 1 The real-time change of the marking pool and the to-be-selected pool can be monitored;
step 3.2: training the task model according to the updated mark pool;
step 3.3: and repeating the step 2 and the step 3 until the performance of the task model meets the expected standard.
CN202010779759.XA 2020-08-05 2020-08-05 Active learning method based on a dual generative adversarial network Active CN111950619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010779759.XA CN111950619B (en) 2020-08-05 2020-08-05 Active learning method based on a dual generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010779759.XA CN111950619B (en) 2020-08-05 2020-08-05 Active learning method based on a dual generative adversarial network

Publications (2)

Publication Number Publication Date
CN111950619A (2020-11-17)
CN111950619B (2022-09-09)

Family

ID=73338012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010779759.XA Active CN111950619B (en) 2020-08-05 2020-08-05 Active learning method based on a dual generative adversarial network

Country Status (1)

Country Link
CN (1) CN111950619B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420866B * 2021-06-23 2022-10-11 Xinjiang University Score prediction method based on dual generative adversarial networks
CN114627390B (en) * 2022-05-12 2022-08-16 北京数慧时空信息技术有限公司 Improved active learning remote sensing sample marking method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257195A * 2018-02-23 2018-07-06 深圳市唯特视科技有限公司 Facial expression synthesis method based on geometry-contrastive generative adversarial network
CN108921123A * 2018-07-17 2018-11-30 Chongqing University of Science and Technology Face recognition method based on dual data augmentation
CN110599411A * 2019-08-08 2019-12-20 China University of Geosciences (Wuhan) Image restoration method and system based on conditional generative adversarial network
CN110930418A * 2019-11-27 2020-03-27 Jiangxi University of Science and Technology Retinal vessel segmentation method fusing W-net and conditional generative adversarial network
CN111028146A * 2019-11-06 2020-04-17 Wuhan University of Technology Image super-resolution method based on dual-discriminator generative adversarial network
CN111881716A * 2020-06-05 2020-11-03 Northeast Forestry University Pedestrian re-identification method based on multi-view generative adversarial network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11080895B2 (en) * 2018-10-30 2021-08-03 International Business Machines Corporation Generating simulated body parts for images
CN109544442B * 2018-11-12 2023-05-23 Nanjing University of Posts and Telecommunications Image local style transfer method based on dual-adversarial generative adversarial network
CN111476294B * 2020-04-07 2022-03-22 Nanchang Hangkong University Zero-shot image recognition method and system based on generative adversarial network


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Bo Yu et al. "Combining neural networks and semantic feature space for email classification." Knowledge-Based Systems, 2009, pp. 376-381. *
Kanglin Liu et al. "Lipschitz constrained GANs via boundedness and continuity." Neural Computing and Applications, 2020, pp. 18271-18283. *
L. Zhang et al. "GAN2C: Information Completion GAN with Dual Consistency Constraints." 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-8. *
Jia Yufeng et al. "Self-attention generative adversarial networks under conditional constraints." Journal of Xidian University, 2019, vol. 46, no. 6, pp. 163-170. *
He Gongbo. "Facial expression generation based on multi-domain mapping generative adversarial networks." China Masters' Theses Full-text Database, Information Science and Technology, 2019, no. 12, pp. I138-379. *

Also Published As

Publication number Publication date
CN111950619A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN112987664B (en) Flow shop scheduling method based on deep reinforcement learning
CN111950619B (en) Active learning method based on a dual generative adversarial network
CN113971209B (en) Unsupervised cross-modal retrieval method based on attention mechanism enhancement
CN112434628B (en) Small sample image classification method based on active learning and collaborative representation
Zheng et al. A multi-task transfer learning method with dictionary learning
CN114299362A (en) Small sample image classification method based on k-means clustering
CN113656700A (en) Hash retrieval method based on multi-similarity consistent matrix decomposition
CN116452862A (en) Image classification method based on domain generalization learning
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
Wang et al. Deep Unified Cross-Modality Hashing by Pairwise Data Alignment.
Deng et al. A deep neural network combined with context features for remote sensing scene classification
CN110795934A (en) Sentence analysis model training method and device and sentence analysis method and device
Xu et al. Robust remote sensing scene classification by adversarial self-supervised learning
CN114329124A (en) Semi-supervised small sample classification method based on gradient re-optimization
CN108388918B (en) Data feature selection method with structure retention characteristics
CN113222072A (en) Lung X-ray image classification method based on K-means clustering and GAN
CN116993043A (en) Power equipment fault tracing method and device
CN116681921A (en) Target labeling method and system based on multi-feature loss function fusion
CN113378942B (en) Small sample image classification method based on multi-head feature cooperation
CN114168782B (en) Deep hash image retrieval method based on triplet network
CN114049567B (en) Adaptive soft label generation method and application in hyperspectral image classification
CN109919200B (en) Image classification method based on tensor decomposition and domain adaptation
Xue et al. Fast and unsupervised neural architecture evolution for visual representation learning
Xie et al. Adapt then Generalize: A Simple Two-Stage Framework for Semi-Supervised Domain Generalization
Wang et al. Improve conditional adversarial domain adaptation using self‐training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant