CN111292384B - Cross-domain diversity image generation method and system based on generative adversarial network - Google Patents

Cross-domain diversity image generation method and system based on generative adversarial network

Info

Publication number
CN111292384B
CN111292384B (application CN202010048320.XA)
Authority
CN
China
Prior art keywords
domain
target
image
code
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010048320.XA
Other languages
Chinese (zh)
Other versions
CN111292384A (en)
Inventor
王志
王豪
惠维
刘新慧
王娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN202010048320.XA
Publication of CN111292384A
Application granted
Publication of CN111292384B
Active legal status
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cross-domain diversity image generation method and system based on a generative adversarial network. A reference data set serves as the source domain and an acquired target data set serves as the target domain; the source domain and the target domain are encoded to obtain feature vectors; the feature vectors are classified by a classifier to obtain class labels; and the feature vectors, the class labels, and the domain code vectors corresponding to the source and target domains are combined with a style code to obtain the target picture. The method and system can generate artificially controllable images in various styles on demand, can express styles influenced by multiple factors through different numbers and structures of discrete and continuous variables, can mass-produce diverse images for needs such as deep neural network training, and can generate images across multiple domains and multiple categories simultaneously for different tasks.

Description

Cross-domain diversity image generation method and system based on generative adversarial network
Technical Field
The invention relates to the technical field of image processing, in particular to a cross-domain diversity image generation method based on a generative adversarial network.
Background
In today's rapidly developing information age, artificial intelligence technology is widespread in human life, for example face recognition and electronic license plate recognition in the image field. New technologies are inseparable from new algorithmic ideas, such as the Generative Adversarial Network (GAN) in the image field.
Since the GAN idea was proposed by Ian Goodfellow in 2014, various improvements to and combinations of its structure have been built on it to achieve different purposes. For example, some variants can generate fake pictures in various styles, but they remain within the style domain of the original picture and do not truly generate new images that cross image style domains; other GAN-based approaches can use existing images to generate new images that cross style domains, but the generated images have a single style, diversity is severely limited, and the style of the generated image cannot be effectively controlled.
However, current practical application scenarios are complex and changeable. Training a large deep neural network, for example, usually requires a large-scale data set, especially one suited to the specific task. Although many large data sets exist, they usually satisfy only their original purposes; when a new task or a specific need arises, often only a small number of data samples are available in practice, and a data set large enough to train a neural network in that specific field is missing. Manually collecting data sets is time-consuming, labor-intensive and costly. Sometimes, for special needs such as a specially designated game scene, special designs must be made to the existing neural network and training samples; in such cases the training sample data is difficult to collect or may not even exist in real life, so it becomes necessary to "create" the absent image data samples as a training set.
Disclosure of Invention
Aiming at the prior-art problem that the picture data set required for training a neural network is large or difficult to obtain, the invention provides a cross-domain diversity image generation method based on a generative adversarial network, which satisfies the cross-domain and diversity characteristics of the generated images simultaneously and solves the problem of obtaining data sets for neural networks.
The invention is realized by the following technical scheme:
the cross-domain diversity image generation method based on the generative countermeasure network comprises the following steps:
1) taking the reference data set as a source domain and the acquired target data set as a target domain;
2) encoding the source domain and the target domain to obtain feature vectors;
3) classifying the feature vectors obtained in the step 2) through a classifier to obtain class labels;
4) combining the feature vectors, the class labels, and the domain code vectors corresponding to the source domain and the target domain with the style code to obtain the target picture.
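For concreteness, a minimal sketch of how steps 1) to 4) might be wired together is given below, using PyTorch. The module architectures, the feature length N, and the domain-, style- and class-code dimensions are illustrative assumptions, not values fixed by the invention.

```python
import torch
import torch.nn as nn

N, D_DOM, D_STY, N_CLS = 128, 2, 4, 10   # feature, domain, style, class dims

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),      # 32x32 -> 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),     # 16x16 -> 8x8
    nn.Flatten(), nn.Linear(64 * 8 * 8, N))
classifier = nn.Sequential(nn.Linear(N, N_CLS), nn.Softmax(dim=1))
generator = nn.Sequential(
    nn.Linear(N + N_CLS + D_DOM + D_STY, 64 * 8 * 8),
    nn.Unflatten(1, (64, 8, 8)),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 8x8  -> 16x16
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())    # 16x16 -> 32x32

x = torch.randn(16, 3, 32, 32)                    # step 1): source + target batch
z = encoder(x)                                    # step 2): feature vectors Z
label = nn.functional.one_hot(classifier(z).argmax(1), N_CLS).float()  # step 3)
domain = nn.functional.one_hot(torch.randint(0, D_DOM, (16,)), D_DOM).float()
style = torch.rand(16, D_STY) * 4 - 2             # style code S ~ U(-2, 2)
fake = generator(torch.cat([z, label, domain, style], dim=1))          # step 4)
print(fake.shape)                                 # torch.Size([16, 3, 32, 32])
```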
Preferably, in step 1), the reference data set and the target data set are divided into a plurality of source domains and a plurality of target domains according to the type of the electric meter.
Preferably, in step 2) the source-domain and target-domain images are cropped to a uniform size and then encoded.
Preferably, in step 2) the source-domain and target-domain images cropped to a uniform size are input into a neural network, and the feature vectors are obtained through the convolution operations of an encoder.
Preferably, a classifier is adopted in step 3) to classify the feature vectors;
the classifier is composed of a full connection layer structure, and is connected with a softmax layer for output.
Preferably, the style code in step 4) is a combination of a plurality of discrete variables or a plurality of continuous variables.
Preferably, in step 4) the image generation process is supervised by a discriminator to ensure the quality of the generation.
The invention also provides a system for the cross-domain diversity image generation method based on the generative adversarial network, comprising an encoder, a classifier, a generator and a discriminator;
the encoder is used for encoding the images of the source domain and the target domain to obtain the characteristic vectors corresponding to the source domain and the target domain;
the classifier is used for classifying the feature vectors and generating class labels;
a generator for generating images of different domains, different categories and different styles;
a discriminator for supervising the quality of the generated image of the generator;
the loss function is as follows:
L = λ0*Lcc + λ1*Lc + λ2*Lh + λ3*Ld
where λ0 is the parameter of the cycle-consistency loss term Lcc, λ1 is the parameter of the classifier loss term Lc, λ2 is the parameter of the mutual-information loss term Lh, and λ3 is the parameter of the discriminator loss term Ld.
Preferably, the generator comprises a domain code α, a hidden code β and a pseudo label θ;
the domain code alpha is used for controlling the information of a target domain to which the generated picture belongs;
the hidden code beta is used for controlling the style characteristics of the generated image;
the pseudo label theta controls the type information of the generated image.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention provides a cross-domain diversity image generation method based on a generation type countermeasure network, which is characterized in that a source domain and a target domain are simultaneously input to an encoder, the encoder obtains a characteristic vector through a series of operations such as convolution and the like, the characteristic vector obtains a classified result through a classifier to be used as a label, at the moment, the label, the characteristic vector, a domain code alpha of the target domain and a domain code of the source domain are simultaneously sent to a generator G, and the generator generates a desired new image through deconvolution operation; the method utilizes the concept of GAN to carry out migration, purposefully expands the data set, thereby enhancing the network performance, achieving the purpose of expanding the data set, simultaneously realizing free switching of a plurality of domains, generating artificially controllable images of various styles according to the requirements, meeting the style influenced by various factors through different quantity structure collocation of discrete and continuous variables, producing the images with diversity in large batch for the requirements of deep neural network training and the like, and simultaneously realizing the generation of various domains, various categories and a plurality of styles for different tasks.
Drawings
FIG. 1 is a cross-domain diversity image generation system based on a generative adversarial network according to the present invention;
FIG. 2 illustrates the cycle-consistency loss between an image input to the encoder and the corresponding generated image in the present invention;
FIG. 3 is a diagram of supervising the generator of the present invention to produce results with diversity;
FIG. 4 is an MNIST dataset-style image generated by the present invention;
FIG. 5 is an SVHN dataset-style image generated by the present invention;
FIG. 6 is an image of an SVHN dataset generated by the present invention;
FIG. 7 shows new electric meter digital images of different categories generated in various styles using the MNIST dataset and SVHN dataset as source domains.
Detailed Description
The present invention will now be described in further detail with reference to the attached drawings, which are illustrative, but not limiting, of the present invention.
Referring to FIG. 1, a cross-domain diversity image generation method based on a generative adversarial network includes the following steps:
1) The acquired reference data set is taken as the source domain s, and the acquired target data are taken as target domains t, divided analogously into a first target domain, a second target domain, a third target domain, and so on;
the source domain is an existing reference data set, such as an MNIST data set, and is divided into ten types of 0 to 9, and each picture is a single-channel black-and-white image (with a label) with the size of 28 × 28; for another example, SVHN dataset, again of 0 to 9 classes, each picture is a three channel color image of 32 x 32 size.
Taking electric meter digital images as an example, the target domain also has classes 0 to 9, with three-channel color images that can be scaled to a uniform size or adjusted according to the original size. Considering the different electric meter types, the digital meter images are correspondingly divided into several classes, and the style characteristics of the digital images corresponding to different meter types are inconsistent.
The source domain can be the MNIST data set or the SVHN data set, or both can be used at the same time, divided into source domain s1 and source domain s2; the target domains are digital images of different types of electric meters, forming multiple target domains, namely target domain t1 and target domain t2.
2) The source domain and each target domain are fed simultaneously as inputs to the encoder E for encoding.
The source domain and the target domain each correspond to a domain code vector, and each of the multiple target domains corresponds to its own target-domain code vector.
The source domain is input together with category labels 0 to 9; the target domain need not be.
For convenience of operation, the input target domain image is uniformly cropped and scaled to the same size.
3) The source domain and the target domains are sent into the encoder E, and a feature vector Z is obtained through the convolution operations of the neural network;
the encoder E performs a convolution operation on the input image to obtain Feature maps (Feature maps) of various sizes, and finally obtains a tensor of 1 × N, that is, a tensor of 1 × N.
4) The encoded feature vector Z is classified by the classifier C, and the obtained class labels are stored;
and the classifier C is composed of a full connection layer structure and is connected with the output of a softmax layer to obtain and store the class label of the target domain.
The loss function of the classifier is the sum of the cross entropy between the samples and labels of the source domain and the cross entropy between the samples and labels of the target domain; at the same time, the classifier preserves the invariant features common to the source domain and the target domain.
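The classifier loss described above might be sketched as follows, again assuming PyTorch. Taking the argmax of the classifier's own target-domain predictions as the pseudo labels is one plausible reading and is stated here as an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Linear(128, 10)     # FC layer; softmax is folded into the loss

z_src = torch.randn(8, 128)         # source-domain feature vectors
y_src = torch.randint(0, 10, (8,))  # source labels (digits 0-9)
z_tgt = torch.randn(8, 128)         # target-domain feature vectors

logits_src = classifier(z_src)
logits_tgt = classifier(z_tgt)
y_pseudo = logits_tgt.argmax(dim=1)              # pseudo labels for the target

# L_C = cross entropy over source samples + cross entropy over target samples
loss_c = F.cross_entropy(logits_src, y_src) + F.cross_entropy(logits_tgt, y_pseudo)
loss_c.backward()
```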
5) The class labels obtained in steps 2) to 4), the feature vectors Z, and the domain code vectors corresponding to the source domain and the target domain are input, combined with the style code S, into the generator G; by combining different class labels, feature vectors Z, domain code vectors and style codes S, the neural network operations of the generator yield the pictures generated by the different combinations.
The style code can be a combination of several discrete or several continuous variables, and images of different styles are generated according to the specific combination, which is adjusted as required. For example, if a virtual effect of a special scene is needed, a non-existent scene can be generated that differs more from reality; if the output is to be used as extended training samples, an effect close to real images must be pursued.
The style code S is used to control the style of the target-domain image, i.e., the generated image, such as the stroke thickness of the digits, the tilt angle, and the illumination intensity. S generally consists of several continuous variables and several discrete variables, set according to the style characteristics of the specific images. To illustrate the practical use of the style code S with the figures: in FIG. 4 we use the MNIST data set, where S consists of a 4-dimensional continuous uniform distribution over [-2, 2]. Each dimension in turn is set non-zero, with [-2, 2] divided into 20 equal parts; each row in the figure contains 20 digits corresponding to the different division values, and from left to right the tilt angle, the illumination intensity and the stroke thickness of the digits change in turn. Of course, the MNIST data set need not be set to 4 dimensions; different dimensions produce different style effects, and the setting can be chosen as required. FIG. 5 explains further with the SVHN data set, again using a uniform distribution over [-2, 2], here 16-dimensional. Analogously to the MNIST case, each dimension in turn is non-zero (all dimensions can be set to 0 at the start for a comparison experiment), and each row varies from left to right in tilt angle, illumination intensity and stroke thickness. FIG. 6 shows the result of an experiment with a 16-dimensional standard normal distribution; each row shows richer style variation from left to right.
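A sketch of how such a style-code grid for the figures might be constructed: each row holds all dimensions of S at 0 except one, which is swept over 20 equal divisions of [-2, 2]. The generator call at the end is a hypothetical placeholder.

```python
import torch

STYLE_DIM, STEPS = 4, 20                  # 4-dim style code, 20 equal divisions
sweep = torch.linspace(-2.0, 2.0, STEPS)  # the 20 values spanning -2 .. 2

rows = []
for dim in range(STYLE_DIM):              # one figure row per style dimension
    s = torch.zeros(STEPS, STYLE_DIM)     # all other dimensions held at 0
    s[:, dim] = sweep                     # vary only this dimension
    rows.append(s)
style_codes = torch.stack(rows)           # shape (4, 20, 4)

# Each generated row then varies a single style factor (tilt angle,
# illumination intensity, stroke thickness) from left to right, e.g.:
#   images = generator(z, labels, domain_code, style_codes[row])
print(style_codes.shape)                  # torch.Size([4, 20, 4])
```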
In step 5), the discriminator D supervises the generation process of the generator, ensuring the quality and diversity of the generated pictures.
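A minimal discriminator sketch, assuming PyTorch. Giving the network an auxiliary class head beside the real/fake head is one common way to supervise both realism and category; it is an assumption about the architecture, not a detail taken from the patent.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Shared conv trunk with a real/fake head and an auxiliary class head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten())
        self.adv_head = nn.Linear(128 * 8 * 8, 1)          # real vs. fake score
        self.cls_head = nn.Linear(128 * 8 * 8, n_classes)  # predicted category

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.cls_head(h)

real_score, class_logits = Discriminator()(torch.randn(1, 3, 32, 32))
```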
FIG. 4 shows MNIST-dataset-style images generated by the method of the present invention. There are 10 rows from top to bottom, each corresponding to a different digit category; the classifier in the network framework assigns different category information to the newly generated image data, so different digit categories can be generated, namely the 10 categories 0 to 9 from top to bottom. Within each row, from left to right, the digits change from thick to thin and their tilt angle varies; the digit style is controlled by the style code S of the network framework.
FIG. 5 shows SVHN-dataset-style images generated by the method of the present invention; there are 10 rows from top to bottom, each representing a different digit category, and each row shows a change from dark to light from left to right.
FIG. 6 shows images of the SVHN data set generated by the method of the present invention. Besides the 10 rows from top to bottom representing different digit categories, each row, i.e. the same digit category, shows more varied style effects.
FIG. 7 shows the result of the method of the present invention using the MNIST and SVHN data sets as source domains and multiple types of electric meter digits as target domains, generating new digital meter images of different categories in multiple styles.
Existing GAN-based variants cannot meet the data-set expansion requirement, or can meet only part of it rather than all of it simultaneously, so this demand from real production and life has not yet been effectively solved; what reality increasingly needs is an end-to-end direct generation approach.
The invention provides a cross-domain diversity image generation method based on a generative adversarial network. It uses the GAN idea to migrate through existing data sets so as to expand them; beyond cross-domain migration, it realizes free switching among multiple domains and controls the style of the generated images through a style code, thereby purposefully expanding the data set and enhancing network performance, for example improving recognition accuracy in object detection.
Referring to FIGS. 1 to 3, the present invention further provides a cross-domain diversity image generation system based on the generative adversarial network, comprising an encoder E (Encoder), a classifier C (Classifier), a generator G (Generator) and a discriminator D (Discriminator).
The encoder is used for encoding the source-domain and target-domain images input into the neural network to obtain the feature vectors corresponding to the source domain and the target domain.
The classifier is used for classifying the feature vectors, storing the class information of the target domain, and generating class labels for the target domain to control the generated class of the image at a later stage;
the generator is used for generating required images with different styles in different domains and different categories;
and the discriminator is used for monitoring the quality of the generated pictures of the generator and promoting the generation of ideal images.
In order to better control the diversity, domain information and category label information of the generated pictures, the generator comprises the following parts:
a domain code α (source domain vector and target domain vector) for controlling information of a target domain to which the generated picture belongs;
the hidden code beta is used for controlling the style characteristics of the generated image, such as the thickness, the inclination angle and the illumination of the digital image;
a pseudo label θ, the class label of the target domain generated by the classifier, which controls the class information of the generated image, for example the digits 0 to 9;
the hidden code specifically comprises the following components:
discrete coding d: for example, 20 classes each with probability 0.05, or 10 classes each with probability 0.1; alternatively, to generate a certain class more readily, its class probability can be set larger, and so on;
continuous coding c: for example, a uniform distribution U(-1, 1) over [-1, 1], a Gaussian distribution N(0, 1), and so forth.
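Sampling the hidden code as just described, with a discrete code d from a uniform categorical distribution and a continuous code c from U(-1, 1) or N(0, 1), might look like the following sketch; the batch size and code dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

BATCH, N_DISCRETE, N_CONT = 16, 20, 4     # illustrative sizes

# discrete coding d: 20 classes, each drawn with probability 0.05
d_idx = torch.randint(0, N_DISCRETE, (BATCH,))
d = F.one_hot(d_idx, N_DISCRETE).float()

# continuous coding c: uniform U(-1, 1) or standard normal N(0, 1)
c_uniform = torch.rand(BATCH, N_CONT) * 2 - 1
c_normal = torch.randn(BATCH, N_CONT)

hidden_code = torch.cat([d, c_uniform], dim=1)  # hidden code beta fed to G
print(hidden_code.shape)                        # torch.Size([16, 24])
```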
Loss function:
L = λ0*Lcc + λ1*Lc + λ2*Lh + λ3*Ld
Explanation of the loss function:
The parameter λ0 weights the cycle-consistency loss term Lcc, controlling its contribution;
The cycle-consistency loss Lcc evaluates the difference between the generated image and the corresponding input image;
The parameter λ1 weights the classifier loss term Lc, controlling its contribution;
The classifier loss Lc estimates the pseudo labels of the target domain from the labels of the source domain;
The parameter λ2 weights the mutual-information loss term Lh, controlling its contribution;
The mutual-information loss Lh controls how the input discrete hidden code d and continuous hidden code c determine the style of the output image;
The parameter λ3 weights the discriminator loss term Ld, controlling its contribution;
The discriminator loss Ld supervises the realism of the generated image and the category to which it belongs.
All parameter items are specifically set and adjusted according to specific data sets and requirements.
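A sketch of how the four weighted terms might be combined, assuming PyTorch. The stand-in computations for the individual terms (L1 for cycle consistency, cross entropy for the classifier and discriminator, a Gaussian negative log-likelihood surrogate for the mutual-information term) are common instantiations chosen for illustration, not formulas taken from the patent.

```python
import torch
import torch.nn.functional as F

lam = [10.0, 1.0, 1.0, 1.0]          # lambda_0 .. lambda_3, tuned per data set

def total_loss(x, x_cycled, logits, labels, q_c, c, d_out, real):
    l_cc = F.l1_loss(x_cycled, x)                           # cycle-consistency Lcc
    l_c = F.cross_entropy(logits, labels)                   # classifier loss Lc
    l_h = F.gaussian_nll_loss(q_c, c, torch.ones_like(c))   # mutual-information Lh
    l_d = F.binary_cross_entropy_with_logits(d_out, real)   # discriminator Ld
    return lam[0] * l_cc + lam[1] * l_c + lam[2] * l_h + lam[3] * l_d

# Example call with dummy tensors of plausible shapes:
x = torch.randn(4, 3, 32, 32)
loss = total_loss(x, x + 0.1 * torch.randn_like(x),
                  torch.randn(4, 10), torch.randint(0, 10, (4,)),
                  torch.randn(4, 4), torch.randn(4, 4),
                  torch.randn(4, 1), torch.ones(4, 1))
print(loss.item())
```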
Since the generator and the discriminator can be combined in different ways to generate images of multiple types and different combinations belonging to different domains, multiple generators and discriminators are provided here accordingly.
The invention has the beneficial effects that:
the cross-domain diversity image generation method based on the generation type countermeasure network can purposefully expand the data set, thereby enhancing the network performance, for example, the identification accuracy can be improved in the target detection process. The invention utilizes the thought of GAN to carry out migration through the existing related data set so as to achieve the purpose of expanding the data set, but not only needs cross-domain migration, but also can realize free switching of a plurality of domains, and also can realize diversified styles and artificial control of generated images so as to meet the practical requirements of people. However, various existing GAN-based variants cannot meet our needs, or only can meet a part of the requirements, but cannot simultaneously and completely meet the needs, so that the problem of demands in real production life cannot be effectively solved, and more end-to-end direct generation modes are needed in reality.
The cross-domain diversity image generation system based on the generative adversarial network can generate manually controllable images in various styles on demand, can express styles influenced by multiple factors through different numbers and structural combinations of discrete and continuous variables, can mass-produce diverse images for needs such as deep neural network training, and can simultaneously generate multiple domains and multiple categories for different tasks.
The above contents merely illustrate the technical idea of the present invention and do not limit its protection scope; any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (6)

1. A cross-domain diversity image generation method based on a generative adversarial network, characterized by comprising the following steps:
1) taking the reference data set as a source domain and the acquired target data set as a target domain;
2) encoding the source domain and the target domain to obtain feature vectors;
3) classifying the feature vectors obtained in the step 2) through a classifier to obtain class labels;
cropping the source-domain and target-domain images to a uniform size, inputting the images cropped to the uniform size into a neural network, and obtaining the feature vectors through the convolution operations of an encoder;
4) inputting the category labels, the feature vectors Z, the domain code vectors corresponding to the source domain and the target domain into a generator G in combination with the style code S, and obtaining pictures generated by different combinations by combining different category labels, feature vectors Z, domain code vectors and style codes S and utilizing the operation of a neural network of the generator;
the style code is a combination of a plurality of discrete variables or a plurality of continuous variables.
2. The method as claimed in claim 1, wherein in step 1), the reference data set and the target data set are divided into a plurality of source domains and a plurality of target domains according to the type of electric meter.
3. The cross-domain diversity image generation method based on the generative adversarial network as claimed in claim 1, wherein a classifier is adopted in step 3) to classify the feature vectors;
the classifier is composed of a full connection layer structure, and is connected with a softmax layer for output.
4. The method as claimed in claim 1, wherein in step 4) a discriminator is used to supervise the image generation process to ensure the quality of the generated images.
5. A system for the cross-domain diversity image generation method based on the generative adversarial network as claimed in any one of claims 1 to 4, comprising an encoder, a classifier, a generator and a discriminator;
the encoder is used for encoding the images of the source domain and the target domain to obtain the characteristic vectors corresponding to the source domain and the target domain;
the classifier is used for classifying the feature vectors and generating class labels;
the generator is used for combining different class labels, feature vectors Z, domain code vectors corresponding to the source domain and the target domain, and style codes S, and obtaining the pictures generated by the different combinations through the neural network operations of the generator;
a discriminator for supervising the quality of the generated image of the generator;
the loss function is as follows:
L = λ0*Lcc + λ1*Lc + λ2*Lh + λ3*Ld
where λ0 is the parameter of the cycle-consistency loss term Lcc, λ1 is the parameter of the classifier loss term Lc, λ2 is the parameter of the mutual-information loss term Lh, and λ3 is the parameter of the discriminator loss term Ld.
6. The system of the cross-domain diversity image generation method based on the generative adversarial network as claimed in claim 5, wherein the generator comprises a domain code α, a hidden code β and a pseudo label θ;
the domain code alpha is used for controlling the information of a target domain to which the generated picture belongs;
the hidden code beta is used for controlling the style characteristics of the generated image;
the pseudo label theta controls the type information of the generated image.
CN202010048320.XA 2020-01-16 2020-01-16 Cross-domain diversity image generation method and system based on generative adversarial network Active CN111292384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010048320.XA CN111292384B (en) 2020-01-16 2020-01-16 Cross-domain diversity image generation method and system based on generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010048320.XA CN111292384B (en) 2020-01-16 2020-01-16 Cross-domain diversity image generation method and system based on generative adversarial network

Publications (2)

Publication Number Publication Date
CN111292384A CN111292384A (en) 2020-06-16
CN111292384B true CN111292384B (en) 2022-05-20

Family

ID=71021225

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010048320.XA Active CN111292384B (en) 2020-01-16 2020-01-16 Cross-domain diversity image generation method and system based on generative adversarial network

Country Status (1)

Country Link
CN (1) CN111292384B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898635A (en) * 2020-06-24 2020-11-06 华为技术有限公司 Neural network training method, data acquisition method and device
CN112184846A (en) * 2020-09-16 2021-01-05 上海眼控科技股份有限公司 Image generation method and device, computer equipment and readable storage medium
CN112733946B (en) * 2021-01-14 2023-09-19 北京市商汤科技开发有限公司 Training sample generation method and device, electronic equipment and storage medium
CN113111947B (en) * 2021-04-16 2024-04-09 北京沃东天骏信息技术有限公司 Image processing method, apparatus and computer readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10621760B2 (en) * 2018-06-15 2020-04-14 Adobe Inc. Synthesizing new font glyphs from partial observations
CN110210486B (en) * 2019-05-15 2021-01-01 西安电子科技大学 Sketch annotation information-based generation countermeasure transfer learning method
CN110335193B (en) * 2019-06-14 2022-09-20 大连理工大学 Target domain oriented unsupervised image conversion method based on generation countermeasure network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584337A (en) * 2018-11-09 2019-04-05 暨南大学 A kind of image generating method generating confrontation network based on condition capsule
CN109753992A (en) * 2018-12-10 2019-05-14 南京师范大学 The unsupervised domain for generating confrontation network based on condition adapts to image classification method
CN110111236A (en) * 2019-04-19 2019-08-09 大连理工大学 The method for generating image based on the multiple target sketch that gradual confrontation generates network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Scene Retrieval from Multiple Resolution Generated Images Based on Text-to-Image GAN; Rintaro Yanagi et al.; IEEE Xplore; 2019-05-01; pp. 1-5 *
Image style transfer based on CycleGAN; Peng Peng (彭鹏); China Masters' Theses Full-text Database, Information Science and Technology; 2020-01-15; vol. 2020, no. 1; pp. I138-1836 *

Also Published As

Publication number Publication date
CN111292384A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
CN111292384B (en) Cross-domain diversity image generation method and system based on generative adversarial network
Hall et al. Online convex optimization in dynamic environments
CN109598268B (en) RGB-D (Red Green blue-D) significant target detection method based on single-stream deep network
Santra et al. Learning a patch quality comparator for single image dehazing
Bhanu et al. Adaptive image segmentation using a genetic algorithm
Lin et al. Automatic facial feature extraction by genetic algorithms
EP3737083A1 (en) Learning-based sampling for image matting
CN111444878A (en) Video classification method and device and computer readable storage medium
TWI226193B (en) Image segmentation method, image segmentation apparatus, image processing method, and image processing apparatus
Chen et al. An empirical investigation of representation learning for imitation
CN112651998A (en) Human body tracking algorithm based on attention mechanism and double-current multi-domain convolutional neural network
Bhanu et al. Self-Optimizing Image Segmentation System Using a Genetic Algorithm.
CN112633234A (en) Method, device, equipment and medium for training and applying face glasses-removing model
CN114581992A (en) Human face expression synthesis method and system based on pre-training StyleGAN
KR20200094938A (en) Data imbalance solution method using Generative adversarial network
CN114627269A (en) Virtual reality security protection monitoring platform based on degree of depth learning target detection
Tangsakul et al. Single image haze removal using deep cellular automata learning
CN109492601A (en) Face comparison method and device, computer-readable medium and electronic equipment
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
Lu et al. A video prediction method based on optical flow estimation and pixel generation
Shariff et al. Artificial (or) fake human face generator using generative adversarial network (gan) machine learning model
Costa et al. Genetic adaptation of segmentation parameters
Hall et al. Online optimization in dynamic environments
Park et al. AN EFFECTIVE COLOR QUANTIZATION METHOD USING COLOR IMPORTANCE-BASED SELF-ORGANIZING MAPS.
Xiu et al. Dynamic-scale graph convolutional network for semantic segmentation of 3d point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant