CN110310344A - An image generation method and system based on a virtual conditional generative adversarial network - Google Patents
An image generation method and system based on a virtual conditional generative adversarial network
- Publication number
- CN110310344A (application CN201910425497.4A)
- Authority
- CN
- China
- Prior art keywords
- vector
- adversarial network
- conditional
- virtual
- generator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
Abstract
The present invention provides an image generation method and system based on a virtual conditional generative adversarial network (vcGAN). The method comprises: constructing the generator of the virtual conditional GAN model; constructing the discriminator of the virtual conditional GAN model; constructing the loss function of the virtual conditional GAN model; and training and testing the constructed virtual conditional GAN. An analog-to-digital conversion function realizes the jump from analog noise to digital noise, an amplified and offset one-hot code then guides the decoder to generate image samples of the corresponding condition (class), and virtual conditional GAN models with different network configurations are trained on multiple datasets. Comparing the quality of the generated distributions against baseline models demonstrates both the improved performance of the model and its robustness to the choice of hyperparameters.
Description
Technical field
The present invention relates to the field of deep learning and neural network technology, and in particular to an image generation method and system based on a virtual conditional generative adversarial network.
Background
A generative adversarial network (Generative Adversarial Nets, GAN) is an unsupervised generative model based on adversarial training and deep neural networks, proposed by Goodfellow in 2014. It consists of a generator and a discriminator and can generate an unlimited number of samples that follow the distribution of the training set. Since it was proposed, it has rapidly become one of the research hotspots of deep learning and artificial intelligence, and is widely used in fields such as image generation, image style transfer, image super-resolution, video generation, domain adaptation, and semi-supervised learning.
The original GAN cannot control the content of the generated samples; samples are generated completely at random. Current conditional GANs can specify some attributes of a sample at generation time, such as its class, so as to generate target samples of a specific type. However, the original conditional GAN requires labeled data to train the model — it is a form of supervised learning — and class labels are often hard or expensive to obtain. Therefore, a method that can learn and generate class-imbalanced image samples unsupervisedly, with the advantages of small model size, fast training, and high quality of the generated distribution, is of significant research value.
Summary of the invention
The purpose of the present invention is to overcome the above drawbacks of the prior art by providing an image generation method and system based on a virtual conditional generative adversarial network.
To achieve the above purpose, the present invention provides the following technical solutions:
An image generation method based on a virtual conditional generative adversarial network, comprising:
obtaining an input multivariate Gaussian noise vector Z;
constructing the generator of the virtual conditional GAN model, the generator splitting the noise vector Z into a first noise vector Z' and a second noise vector Z";
mapping the second noise vector Z" to an N-dimensional one-hot vector C, and amplifying and offsetting the N-dimensional one-hot vector C according to preset amplification and offset conditions to obtain an amplified and offset one-hot vector C';
concatenating the first noise vector Z' with the one-hot vector C' and feeding the result into the decoder of the generator, so that the generator outputs a fake sample set.
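The split–digitize–amplify–concatenate pipeline above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the analog-to-digital step is reduced to equal class probabilities read off the standard normal CDF of one Z" coordinate, and the amplification factor A and offset b are placeholder values (the patent derives them from the hyperparameter δ via formulas not reproduced in this text).

```python
import math

def _phi(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def generator_front_end(Z, N, A=6.0, b=0.5):
    """Split Z into Z' (first N components) and Z'' (the rest), digitize
    Z'' to an amplified and offset one-hot vector C', and return the
    concatenation [Z', C'] that would be fed to the decoder.

    A and b are placeholders for the patent's amplification factor and
    offset, which it derives from the hyperparameter delta."""
    z1, z2 = Z[:N], Z[N:]
    # Simplified analog-to-digital step: with equal class probabilities,
    # the standard normal CDF of one Z'' coordinate selects a state k.
    k = min(int(_phi(z2[0]) * N), N - 1)
    # Amplify and offset the one-hot vector: C' = A*C + b (element-wise).
    c_prime = [A * (1.0 if i == k else 0.0) + b for i in range(N)]
    return z1 + c_prime

x = generator_front_end([0.1, -0.2, 0.3, 1.5, -0.7], N=3)
```

Here a 5-dimensional Z yields a 6-dimensional decoder input: the three Z' components followed by a 3-state amplified and offset one-hot code.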
Preferably, the method further comprises:
obtaining a real sample set of images;
constructing the discriminator of the virtual conditional GAN model;
feeding the real sample set and the fake sample set into the discriminator for network training, to obtain image samples.
Preferably, the method further comprises:
constructing the loss function of the virtual conditional GAN model, the expression of the loss function being:
where p_j denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector Z" to each discrete value of the digital noise, E denotes the distribution of Z', R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
Preferably, mapping the second noise vector Z" to a one-hot vector C comprises:
constructing an analog-to-digital conversion function that maps the input multivariate Gaussian noise vector with a continuous probability distribution to an N-dimensional one-hot vector C with a discrete number of states;
the analog-to-digital conversion function being:
where the k-th component of the vector C is 1, the other components are 0, and Φ⁻¹ denotes the inverse function (of the standard normal cumulative distribution Φ).
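The equation image for the analog-to-digital conversion function is not reproduced in this text. Reading Φ⁻¹ as the inverse of the standard normal CDF, one plausible realization is the quantile rule below: the CDF turns a standard normal coordinate into a uniform value on (0, 1), and the interval of cumulative probability it falls into picks the hot component. The learnable probabilities p_i of the patent are passed in here as plain numbers — a sketch, not the patented formula.

```python
import math

def normal_cdf(x):
    """Phi(x), the CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def analog_to_digital(z, probs):
    """Map one standard normal coordinate z to a one-hot vector c such
    that, over many draws, component k is hot with probability probs[k].
    probs stands in for the learnable parameters p_i of the patent."""
    u = normal_cdf(z)              # uniform on (0, 1) when z ~ N(0, 1)
    c = [0] * len(probs)
    cumulative = 0.0
    for k, p in enumerate(probs):
        cumulative += p
        if u <= cumulative:        # z fell into the k-th quantile interval
            c[k] = 1
            return c
    c[-1] = 1                      # guard against floating-point rounding
    return c
```

For example, `analog_to_digital(0.0, [0.25, 0.25, 0.25, 0.25])` lights the second component, since Φ(0) = 0.5 lands at the upper boundary of the second quarter-interval.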
Preferably, amplifying and offsetting the one-hot vector C according to preset amplification and offset conditions comprises:
determining the amplification factor and the offset according to the calculation formulas for one-hot amplification and offset;
the calculation formula for one-hot amplification being:
the calculation formula for the offset being:
where b denotes the offset, A denotes the amplification factor, N and L are the dimensions of z' and z" respectively, δ is a hyperparameter, and h and V are intermediate parameters.
Preferably, feeding the real sample set and the fake sample set into the discriminator for network training comprises:
training and testing the virtual conditional GAN model on multiple datasets, the datasets comprising MNIST and Fashion-MNIST;
assessing the generation quality on the datasets by the index FID, the index FID for evaluating the quality of the generated distribution being:
where m_R is the mean of the real data, m_F is the mean of the generated data, Tr denotes the sum of the elements on the diagonal of a matrix (the trace), C_R is the covariance of the real data, and C_F is the covariance of the generated data.
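The FID formula image is likewise missing from this text. For reference, the standard Fréchet Inception Distance between Gaussian summaries (m_R, C_R) and (m_F, C_F) is ‖m_R − m_F‖² + Tr(C_R + C_F − 2(C_R·C_F)^(1/2)); the sketch below shows the scalar special case, where the trace term collapses to ordinary square roots.

```python
import math

def fid_1d(m_r, c_r, m_f, c_f):
    """FID between two 1-D Gaussian summaries (mean m, variance c):
    (m_r - m_f)^2 + c_r + c_f - 2*sqrt(c_r * c_f)."""
    return (m_r - m_f) ** 2 + c_r + c_f - 2.0 * math.sqrt(c_r * c_f)
```

Identical statistics give FID 0, and the value grows as either the means or the variances of the real and generated distributions drift apart.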
Preferably, the decoder is a single neural network composed of several cascaded fractionally-strided (transposed) convolutional layers; the feature-map size doubles and the depth halves at each layer; the activation function is ReLU, with Tanh in the last layer; and every layer of the network except the last uses batch normalization.
Preferably, the discriminator is composed of a convolutional neural network activated by the Leaky ReLU activation function, uses strided convolutions for downsampling, and every layer of the network except the first uses batch normalization.
The present invention also provides an image generation system based on a virtual conditional generative adversarial network, comprising:
a first input unit for obtaining an input noise vector Z;
a generation model unit for constructing the generator of the virtual conditional GAN model, the generator splitting the noise vector Z into a first noise vector Z' and a second noise vector Z";
the generation model unit also being used to map the second noise vector Z" to a one-hot vector C and to amplify and offset the one-hot vector C according to preset amplification and offset conditions, obtaining an amplified and offset one-hot vector C';
the generation model unit also concatenating the first noise vector Z' with the one-hot vector C' and feeding the result into the decoder of the generator, so that the generator outputs a fake sample set.
Preferably, the system further comprises:
a second input unit for obtaining a real sample set of images;
a discrimination model unit for constructing the discriminator of the virtual conditional GAN model;
the discrimination model unit also feeding the real sample set and the fake sample set into the discriminator for network training, to obtain image samples.
Preferably, the system further comprises:
a function construction unit for constructing the loss function of the virtual conditional GAN model, the expression of the loss function being:
where p_j denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector Z" to each discrete value of the digital noise, E denotes the distribution of Z', R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
The present invention provides an image generation method and system based on a virtual conditional generative adversarial network: an analog-to-digital conversion function realizes the jump from analog noise to digital noise, an amplified and offset one-hot code then guides the decoder to generate image samples of the corresponding condition (class), and virtual conditional GAN models with different network configurations are trained on multiple datasets. Comparing the quality of the generated distributions against baseline models demonstrates both the improved performance of the model and its robustness to the choice of hyperparameters.
Brief description of the drawings
To illustrate the specific embodiments of the present invention more clearly, the drawings required in the embodiments are briefly introduced below.
Fig. 1 is a flowchart of an image generation method based on a virtual conditional generative adversarial network provided by the present invention;
Fig. 2 is the overall structure of the virtual conditional generative adversarial network in the method of the present invention;
Fig. 3 is the generator structure of the virtual conditional generative adversarial network in the method of the present invention;
Fig. 4 shows test results of the method of the present invention on the MNIST dataset;
Fig. 5 shows test results of the method of the present invention on the Fashion-MNIST dataset;
Fig. 6 shows samples generated by the method of the present invention trained on MNIST and Fashion-MNIST when δ = 2;
Fig. 7 shows samples generated by the method of the present invention trained on MNIST and Fashion-MNIST when δ = None.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solutions of the present invention, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Current generative adversarial networks suffer from imbalanced generated classes and low quality of the generated distribution. The present invention provides an image generation method and system based on a virtual conditional generative adversarial network: an analog-to-digital conversion function realizes the jump from analog noise to digital noise, an amplified and offset one-hot code then guides the decoder to generate image samples of the corresponding condition (class), and virtual conditional GAN models with different network configurations are trained on multiple datasets. Comparing the quality of the generated distributions against baseline models demonstrates both the improved performance of the model and its robustness to the choice of hyperparameters.
As shown in Fig. 1, an image generation method based on a virtual conditional generative adversarial network comprises:
Step 1: obtaining an input multivariate Gaussian noise vector Z;
Step 2: constructing the generator of the virtual conditional GAN model, the generator splitting the noise vector Z into a first noise vector z' and a second noise vector z";
Step 3: mapping the second noise vector z" to an N-dimensional one-hot vector C, and amplifying and offsetting the N-dimensional one-hot vector C according to preset amplification and offset conditions to obtain an amplified and offset one-hot vector C';
Step 4: concatenating the first noise vector z' with the one-hot vector C' and feeding the result into the decoder of the generator, so that the generator outputs a fake sample set.
Further, the method comprises:
Step 5: obtaining a real sample set of images;
Step 6: constructing the discriminator of the virtual conditional GAN model;
Step 7: feeding the real sample set and the fake sample set into the discriminator for network training, to obtain image samples.
Specifically, a traditional conditional GAN needs data with condition (class) labels to train the network, whereas this method can automatically cluster unlabeled data using the noise-characteristic jump and the one-hot amplification and offset, and generate data with condition labels; that is, the condition labels are not contained in the data but are produced by the virtual-condition technique. As shown in Fig. 2, the generative adversarial network produces a fake sample set through the generator G, and the discriminator D discriminates between the real sample set X_real and the fake sample set X_fake to obtain a judgment. A well-trained discriminator can accurately determine whether an input image comes from the real sample set or the fake sample set: if the input is a real sample, the network output is close to 1; if the input is a fake sample, the output is close to 0.
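The patent's own loss formula is not reproduced in this text, but the real-toward-1 / fake-toward-0 behavior just described is exactly the standard GAN discriminator objective, which can be sketched as follows (a stand-in for the patent's virtual conditional loss, not its actual formula):

```python
import math

def discriminator_loss(d_real, d_fake):
    """Average negative log-likelihood pushing discriminator outputs in
    (0, 1) toward 1 on real samples and toward 0 on fake samples."""
    eps = 1e-12                    # numerical guard for log(0)
    total = sum(math.log(dr + eps) + math.log(1.0 - df + eps)
                for dr, df in zip(d_real, d_fake))
    return -total / len(d_real)
```

A maximally confused discriminator (0.5 on everything) pays 2·ln 2 per pair, while a perfect one pays essentially zero.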
In practical applications, the generation method of the generative adversarial network comprises:
S1, constructing the generator of the virtual conditional GAN model;
S2, constructing the discriminator of the virtual conditional GAN model;
S3, constructing the loss function of the virtual conditional GAN model;
S4, training and testing the constructed virtual conditional GAN.
Further, constructing the generator of the virtual conditional GAN model in step S1 proceeds as follows, as shown in Fig. 3:
S11, splitting the M-dimensional Gaussian noise vector z input to the generator into two parts, random vectors z' and z", whose dimensions are N and L = M − N respectively;
where N denotes the number of conditions (classes).
S12, mapping the multivariate Gaussian noise vector z" to an N-dimensional one-hot vector c;
S13, amplifying and offsetting the N-dimensional one-hot vector c;
because the amplitude and energy of the one-hot code are too small relative to the Gaussian noise, its semantic information is not significant; if the concatenation of the multivariate Gaussian noise and the one-hot code were fed directly into the decoder, the decoder would likely ignore the one-hot part and generate images using the Gaussian noise alone;
S14, concatenating z' with the amplified and offset one-hot vector c' and feeding the result into the decoder to generate samples.
Further, in step S12 the multivariate Gaussian noise vector z" is mapped to an N-dimensional one-hot vector c by constructing an analog-to-digital conversion function. This function maps the input multivariate Gaussian noise vector z" with a continuous probability distribution (which can be regarded as analog noise) to an N-dimensional one-hot vector c with a discrete number of states (which can be regarded as digital noise). The specific mapping relation is:
where p_i denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector z" to each discrete value of the digital noise, and is a learnable parameter.
Further, in step S13 the N-dimensional one-hot vector c is amplified and offset. The calculation formulas for one-hot amplification and offset in the virtual conditional GAN are as follows; by solving the system of equations, an approximate solution with high accuracy can be obtained:
where b denotes the offset, A denotes the amplification factor, N and L are the dimensions of z' and z" respectively, δ is a hyperparameter, and h and V are intermediate parameters.
Further, in step S14, z' and the amplified and offset one-hot vector c' are concatenated and fed into the decoder to generate samples. The decoder is a single neural network composed of several cascaded fractionally-strided (transposed) convolutional layers; the feature-map size doubles and the depth halves at each layer; the activation function is ReLU (Tanh in the last layer); and every layer of the network except the last uses batch normalization to stabilize training and alleviate the mode-loss (mode collapse) problem.
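As a sketch of this layer schedule (the starting feature-map size and depth below are illustrative assumptions, not taken from the patent), the doubling/halving rule with per-layer activation and batch-normalization flags can be tabulated as:

```python
def decoder_layer_plan(layers, first_size=4, first_depth=256):
    """List (feature_map_size, depth, activation, batch_norm) per layer:
    the feature-map size doubles and the depth halves at every layer;
    ReLU everywhere except Tanh in the last layer; batch normalization
    on every layer but the last."""
    plan = []
    size, depth = first_size, first_depth
    for i in range(layers):
        size, depth = size * 2, depth // 2
        last = i == layers - 1
        plan.append((size, depth, "tanh" if last else "relu", not last))
    return plan
```

With three layers and a 4×4×256 input, the plan walks through 8×8×128 and 16×16×64 to a 32×32×32 Tanh output without batch normalization.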
Further, in step S2 the discriminator of the virtual conditional GAN model is constructed. The discriminator is composed of a convolutional neural network activated by Leaky ReLU, uses strided convolutions for downsampling, and every layer of the network except the first uses batch normalization to stabilize training and alleviate the mode-loss problem.
Further, in step S3 the loss function of the virtual conditional GAN model is constructed, where the expression of the loss function is:
where p_j denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector z" to each discrete value of the digital noise, E denotes the distribution of Z', R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
Further, in step S4 the constructed virtual conditional GAN is trained and tested: the virtual conditional GAN model is trained and tested on the MNIST and Fashion-MNIST datasets. By training models with different one-hot amplification levels, the effect of feeding the one-hot component (digital noise) to the generator on the quality of the generated distribution is tested.
The batch size for training the whole network is set to 64. The training process uses the Adam (Adaptive Moment Estimation) optimizer; the initial learning rate is set to 0.0001, the learning-rate decay steps are set to 1000, and the decay rate is 0.96. The entire experiment is carried out under the TensorFlow deep learning framework; the experimental environment is the Ubuntu 18.04 operating system; the network is trained on an NVIDIA GTX 1080Ti GPU with 11 GB of video memory, and training is accelerated with CUDA.
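Under the stated settings (initial rate 0.0001, decay steps 1000, decay rate 0.96), the learning-rate schedule can be sketched as follows; the staircase form of TensorFlow's exponential decay is an assumption, since the text does not say whether the decay is stepped or continuous.

```python
def learning_rate(step, initial=0.0001, decay_steps=1000, decay_rate=0.96):
    """Staircase exponential decay: multiply the initial learning rate
    by the decay rate once every decay_steps training steps."""
    return initial * decay_rate ** (step // decay_steps)
```

So the rate stays at 0.0001 for the first 1000 steps, drops to 0.000096 at step 1000, and keeps shrinking by 4% every 1000 steps thereafter.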
Each experiment runs 200k iterations in total (per iteration, the generator is trained once and the discriminator 5 times); every 10k iterations, 50k samples are drawn and saved. The saved samples are used to compute the FID, the index for evaluating the quality of the generated distribution, so as to assess the generation quality on the two public datasets; following the evaluation procedure described above, the algorithm uses FID as the quality index of the generated distribution and is compared with other algorithms.
On each dataset, different hyperparameters δ ∈ {None, 0.2, 0.5, 1, 1.5, 2, 3, 4} were tried, where None means the one-hot code is concatenated to the Gaussian noise without amplification or offset. Each model with the same hyperparameter configuration was randomly initialized and trained 5 times to reduce statistical error. The curves of the FID of the generated distribution versus the number of iterations for each model trained on MNIST and Fashion-MNIST are shown in Fig. 4 and Fig. 5 respectively; the shaded region of each curve indicates the range of the average FID ± one standard deviation over the repeated experiments.
When δ ≥ 0.5, vcGAN learns faster than the baseline model WGAN and converges to a lower FID value, meaning that the distribution generated by vcGAN is more faithful to the true distribution. On the MNIST/Fashion-MNIST datasets, vcGAN needs only 60k/80k training iterations to reach the FID that WGAN attains after 200k training iterations, equivalent to improving the training speed by a factor of 1.5/2.3. When the one-hot code is not amplified, or the amplification is too small (δ = 0.2), the FID of vcGAN shows no significant improvement over WGAN-GP, which shows that the one-hot code must be amplified to a sufficient amplitude before the generation quality on multi-modal data can improve. When δ = 0.5, 1, 1.5, 2, 3, or 4, the FID curves of vcGAN differ little, showing that vcGAN is robust to the hyperparameter δ; in general, δ is taken to be around 2. Fig. 6 and Fig. 7 show samples generated by vcGAN with δ = 2 and δ = None respectively; each column in the figures corresponds to one generative path of vcGAN, and each row corresponds to the same analog noise vector. It can be seen that when δ = 2, vcGAN unsupervisedly learns the different modes/classes in the training set, whereas when the one-hot code is neither amplified nor offset (δ = None), the generative paths do not separate the modes and the renderer ignores the digital-noise component.
As can be seen, the present invention provides an image generation method based on a virtual conditional generative adversarial network: an analog-to-digital conversion function realizes the jump from analog noise to digital noise, an amplified and offset one-hot code then guides the decoder to generate image samples of the corresponding condition (class), and virtual conditional GAN models with different network configurations are trained on multiple datasets. Comparing the quality of the generated distributions against baseline models demonstrates both the improved performance of the model and its robustness to the choice of hyperparameters.
Correspondingly, the present invention also provides an image generation system based on a virtual conditional generative adversarial network, comprising: a first input unit for obtaining an input noise vector Z; and a generation model unit for constructing the generator of the virtual conditional GAN model, the generator splitting the noise vector Z into a first noise vector Z' and a second noise vector Z". The generation model unit also maps the second noise vector Z" to a one-hot vector C and amplifies and offsets the one-hot vector C according to preset amplification and offset conditions, obtaining an amplified and offset one-hot vector C'. The generation model unit then concatenates the first noise vector Z' with the one-hot vector C' and feeds the result into the decoder of the generator, so that the generator outputs a fake sample set.
Further, the system comprises: a second input unit for obtaining a real sample set of images; and a discrimination model unit for constructing the discriminator of the virtual conditional GAN model. The discrimination model unit also feeds the real sample set and the fake sample set into the discriminator for network training, to obtain image samples.
Further, the system comprises: a function construction unit for constructing the loss function of the virtual conditional GAN model, the expression of the loss function being:
where p_j denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector Z" to each discrete value of the digital noise, E denotes the distribution of Z', R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
As can be seen, the present invention provides an image generation system based on a virtual conditional generative adversarial network: an analog-to-digital conversion function realizes the jump from analog noise to digital noise, an amplified and offset one-hot code then guides the decoder to generate image samples of the corresponding condition (class), and virtual conditional GAN models with different network configurations are trained on multiple datasets. Comparing the quality of the generated distributions against baseline models demonstrates both the improved performance of the model and its robustness to the choice of hyperparameters.
The structure, features, and effects of the present invention have been described in detail above according to the embodiments shown in the drawings. The above is only a preferred embodiment of the present invention, and the scope of implementation of the present invention is not limited by the drawings; any change made according to the concept of the present invention, or modification into an equivalent embodiment with equivalent variations, shall fall within the scope of protection of the present invention, provided it does not go beyond the spirit covered by the description and the drawings.
Claims (11)
1. An image generation method based on a virtual conditional generative adversarial network, characterized by comprising:
obtaining an input multivariate Gaussian noise vector Z;
constructing the generator of the virtual conditional GAN model, the generator splitting the noise vector Z into a first noise vector Z' and a second noise vector Z";
mapping the second noise vector Z" to an N-dimensional one-hot vector C, and amplifying and offsetting the N-dimensional one-hot vector C according to preset amplification and offset conditions to obtain an amplified and offset one-hot vector C';
concatenating the first noise vector Z' with the one-hot vector C' and feeding the result into the decoder of the generator, so that the generator outputs a fake sample set.
2. The image generation method based on a virtual conditional generative adversarial network according to claim 1, characterized by further comprising:
obtaining a real sample set of images;
constructing the discriminator of the virtual conditional GAN model;
feeding the real sample set and the fake sample set into the discriminator for network training, to obtain image samples.
3. The image generation method based on a virtual conditional generative adversarial network according to claim 2, characterized by further comprising:
constructing the loss function of the virtual conditional GAN model, the expression of the loss function being:
where p_j denotes the probability that the analog-to-digital conversion function converts the multivariate Gaussian noise vector Z" to each discrete value of the digital noise, E denotes the distribution of Z', R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
4. The image generation method based on a virtual conditional generative adversarial network according to claim 3, characterized in that mapping the second noise vector Z" to a one-hot vector C comprises:
constructing an analog-to-digital conversion function that maps the input multivariate Gaussian noise vector with a continuous probability distribution to an N-dimensional one-hot vector C with a discrete number of states;
the analog-to-digital conversion function being:
where the k-th component of the vector C is 1, the other components are 0, and Φ⁻¹ denotes the inverse function.
5. The image generation method based on a virtual conditional generative adversarial network according to claim 4, characterized in that amplifying and offsetting the one-hot vector C according to preset amplification and offset conditions comprises:
determining the amplification factor and the offset according to the calculation formulas for one-hot amplification and offset;
the calculation formula for one-hot amplification being:
the calculation formula for the offset being:
where b denotes the offset, A denotes the amplification factor, N and L are the dimensions of z' and z" respectively, δ is a hyperparameter, and h and V are intermediate parameters.
6. The image generation method based on a virtual conditional generative adversarial network according to claim 3, characterized in that feeding the real sample set and the fake sample set into the discriminator for network training comprises:
training and testing the virtual conditional GAN model on multiple datasets, the datasets comprising MNIST and Fashion-MNIST;
assessing the generation quality on the datasets by the index FID for evaluating the quality of the generated distribution, the index FID being:
where m_R is the mean of the real data, m_F is the mean of the generated data, Tr denotes the sum of the elements on the diagonal of a matrix, C_R is the covariance of the real data, and C_F is the covariance of the generated data.
7. The image generating method based on a virtual conditional generative adversarial network according to claim 1, wherein the decoder is a single neural network composed of several cascaded fractional-strided convolutional layers; at each layer the feature map size doubles and the depth halves; the activation function is ReLU, with Tanh in the last layer; and every layer of the network except the last uses batch normalization.
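The shape progression the claim describes can be sketched as a small calculation: each fractional-strided (transposed) convolution doubles the spatial size and halves the depth, with the last layer mapping to the output channels under Tanh. The starting dimensions and layer count below are hypothetical, chosen only to illustrate the rule.

```python
def decoder_shapes(start_depth, start_size, n_layers, out_channels=3):
    """Feature-map shapes through cascaded fractional-strided conv layers.

    Each layer doubles the spatial size and halves the depth, per the
    claim (ReLU activations, batch norm on all but the last layer, Tanh
    output). Starting dimensions here are hypothetical.
    """
    shapes = [(start_depth, start_size, start_size)]
    depth, size = start_depth, start_size
    for i in range(n_layers):
        size *= 2
        depth = out_channels if i == n_layers - 1 else depth // 2
        shapes.append((depth, size, size))
    return shapes
```

For example, starting from a 256-channel 4×4 map, four layers yield 128×8×8, 64×16×16, 32×32×32, and finally a 3×64×64 image.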
8. The image generating method based on a virtual conditional generative adversarial network according to claim 2, wherein the discriminator is composed of convolutional neural networks activated by the Leaky ReLU activation function and performs downsampling by strided convolution; every layer of the network except the first uses batch normalization.
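The two ingredients of this claim, the Leaky ReLU activation and stride-2 downsampling, can be sketched in the same style; the channel progression and layer count are hypothetical illustrations of the pattern, not dimensions from the patent.

```python
def leaky_relu(x, alpha=0.2):
    """Leaky ReLU activation used throughout the discriminator."""
    return x if x > 0 else alpha * x

def discriminator_shapes(in_size, in_channels, base_depth, n_layers):
    """Spatial sizes through strided-convolution downsampling layers.

    Each stride-2 convolution halves the feature map; per the claim,
    batch norm is applied on every layer except the first. Dimensions
    are hypothetical.
    """
    shapes = [(in_channels, in_size, in_size)]
    depth, size = base_depth, in_size
    for _ in range(n_layers):
        size //= 2
        shapes.append((depth, size, size))
        depth *= 2
    return shapes
```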
9. An image generation system based on a virtual conditional generative adversarial network, comprising:
a first input unit for obtaining an input noise vector Z;
a generative model unit for constructing the generator of the virtual conditional generative adversarial network model, the generator splitting the noise vector Z into a first noise vector Z′ and a second noise vector Z″;
the generative model unit being further configured to map the second noise vector Z″ to a one-hot vector C and to amplify and offset the one-hot vector C according to preset amplification and offset conditions, so as to obtain the amplified and offset one-hot vector C′;
the generative model unit being further configured to concatenate the first noise vector Z′ with the one-hot vector C′ and input the result into the decoder of the generator, so that the generator outputs a fake sample set.
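The generator pipeline of this claim can be sketched end to end: split Z, convert Z″ to a one-hot vector, amplify and offset it, and concatenate with Z′ as the decoder input. The split point (one scalar for Z″), the equiprobable-bin conversion, and the amplification/offset values are all assumptions; the decoder itself (a convolutional network in the patent) is omitted.

```python
import math

def generator_forward(z, n_states, amplification=2.0, offset=0.5):
    """End-to-end sketch of the claimed generator pipeline (decoder omitted).

    All concrete choices (split point, binning, A, b) are hypothetical.
    """
    z_prime, z_second = z[:-1], z[-1]                  # split Z -> Z', Z''
    phi = 0.5 * (1.0 + math.erf(z_second / math.sqrt(2.0)))
    k = min(n_states - 1, int(n_states * phi))         # ADC: pick hot index
    c = [0.0] * n_states
    c[k] = 1.0                                          # one-hot vector C
    c_prime = [amplification * x + offset for x in c]   # amplify + offset
    return z_prime + c_prime                            # decoder input
```

For example, `generator_forward([0.1, -0.2, 0.0], 4)` keeps `[0.1, -0.2]` as Z′ and appends `[0.5, 0.5, 2.5, 0.5]` as C′.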
10. The image generation system based on a virtual conditional generative adversarial network according to claim 9, further comprising:
a second input unit for obtaining a true sample set of images;
a discrimination model unit for constructing the discriminator of the virtual conditional generative adversarial network model;
the discrimination model unit being further configured to input the true sample set and the fake sample set into the discriminator for network training, so as to obtain image samples.
11. The image generation system based on a virtual conditional generative adversarial network according to claim 10, further comprising:
a function setting unit for constructing the loss function of the virtual conditional generative adversarial network model, the loss function being expressed as:
wherein p_j denotes the probability of each discrete value of the digital noise obtained by applying the analog-to-digital conversion function to the multivariate Gaussian noise vector Z″, E denotes the expectation with respect to the distribution of Z′, R denotes the generator, D denotes the discriminator, θ_R denotes the parameters of the generator, and G denotes the generative adversarial network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425497.4A CN110310344A (en) | 2019-05-21 | 2019-05-21 | A kind of image generating method and system generating confrontation network based on Virtual Conditional |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910425497.4A CN110310344A (en) | 2019-05-21 | 2019-05-21 | A kind of image generating method and system generating confrontation network based on Virtual Conditional |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110310344A true CN110310344A (en) | 2019-10-08 |
Family
ID=68075539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910425497.4A Pending CN110310344A (en) | 2019-05-21 | 2019-05-21 | A kind of image generating method and system generating confrontation network based on Virtual Conditional |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110310344A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728119A (en) * | 2019-12-17 | 2020-01-24 | 广东博智林机器人有限公司 | Poster generation method and device |
CN110941829A (en) * | 2019-11-27 | 2020-03-31 | 北京电子科技学院 | Large-scale hardware Trojan horse library generation system and method based on generation countermeasure network |
CN111724299A (en) * | 2020-05-21 | 2020-09-29 | 同济大学 | Super-realistic painting image style migration method based on deep learning |
CN112614053A (en) * | 2020-12-25 | 2021-04-06 | 哈尔滨市科佳通用机电股份有限公司 | Method and system for generating multiple images based on single image of antagonistic neural network |
CN112966830A (en) * | 2021-03-09 | 2021-06-15 | 中南大学 | Generating a countermeasure network based on conditions of a condition distribution |
- 2019-05-21: CN CN201910425497.4A patent/CN110310344A/en, status: Pending
Non-Patent Citations (2)
Title |
---|
HAIFENG SHI ET AL.: "Virtual Conditional Generative Adversarial Networks", arXiv * |
MARTIN HEUSEL ET AL.: "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", arXiv * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110941829A (en) * | 2019-11-27 | 2020-03-31 | 北京电子科技学院 | Large-scale hardware Trojan horse library generation system and method based on generation countermeasure network |
CN110941829B (en) * | 2019-11-27 | 2023-03-10 | 北京电子科技学院 | Large-scale hardware Trojan horse library generation system and method based on generation countermeasure network |
CN110728119A (en) * | 2019-12-17 | 2020-01-24 | 广东博智林机器人有限公司 | Poster generation method and device |
CN111724299A (en) * | 2020-05-21 | 2020-09-29 | 同济大学 | Super-realistic painting image style migration method based on deep learning |
CN111724299B (en) * | 2020-05-21 | 2023-08-08 | 同济大学 | Deep learning-based super-reality sense painting image style migration method |
CN112614053A (en) * | 2020-12-25 | 2021-04-06 | 哈尔滨市科佳通用机电股份有限公司 | Method and system for generating multiple images based on single image of antagonistic neural network |
CN112966830A (en) * | 2021-03-09 | 2021-06-15 | 中南大学 | Generating a countermeasure network based on conditions of a condition distribution |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110310344A (en) | A kind of image generating method and system generating confrontation network based on Virtual Conditional | |
Ubbens et al. | The use of plant models in deep learning: an application to leaf counting in rosette plants | |
JP6727340B2 (en) | Information processing apparatus, method, and computer-readable storage medium | |
Espejo-Garcia et al. | Improving weeds identification with a repository of agricultural pre-trained deep neural networks | |
Demir et al. | EEG-GNN: Graph neural networks for classification of electroencephalogram (EEG) signals | |
CN110046249A (en) | Training method, classification method, system, equipment and the storage medium of capsule network | |
CN109815919A (en) | A kind of people counting method, network, system and electronic equipment | |
CN107705806A (en) | A kind of method for carrying out speech emotion recognition using spectrogram and deep convolutional neural networks | |
CN109344759A (en) | A kind of relatives' recognition methods based on angle loss neural network | |
CN106202329A (en) | Sample data process, data identification method and device, computer equipment | |
Karatas et al. | Supervised deep neural networks (DNNs) for pricing/calibration of vanilla/exotic options under various different processes | |
CN110443372A (en) | A kind of transfer learning method and system based on entropy minimization | |
CN108268890A (en) | A kind of hyperspectral image classification method | |
CN109447096A (en) | A kind of pan path prediction technique and device based on machine learning | |
CN114218457B (en) | False news detection method based on forwarding social media user characterization | |
CN109299246A (en) | A kind of file classification method and device | |
Zając et al. | Split batch normalization: Improving semi-supervised learning under domain shift | |
CN108537277A (en) | A kind of image classification knowledge method for distinguishing | |
CN111882042A (en) | Automatic searching method, system and medium for neural network architecture of liquid state machine | |
Ya-Guan et al. | EMSGD: An improved learning algorithm of neural networks with imbalanced data | |
CN107895170A (en) | A kind of Dropout regularization methods based on activation value sensitiveness | |
CN109697511A (en) | Data reasoning method, apparatus and computer equipment | |
CN116648700A (en) | Audio data transcription training learning algorithm identifies visible medical devices in image data | |
Wu et al. | Unconstrained facial expression recogniton based on cascade decision and Gabor filters | |
Lv et al. | YOLOV5-CBAM-C3TR: an optimized model based on transformer module and attention mechanism for apple leaf disease detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||