WO2021056843A1 - Neural network training method and device, and image generation method and device - Google Patents
- Publication number: WO2021056843A1
- Application: PCT/CN2019/124541 (CN2019124541W)
- Authority: WIPO (PCT)
- Prior art keywords: distribution, network, discriminant, loss, training
Classifications
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/23—Clustering techniques
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/08—Learning methods
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T9/002—Image coding using neural networks
- G06V10/762—Image or video recognition using clustering, e.g. of similar faces in social networks
- G06V10/7747—Generating sets of training patterns; Organisation of the process, e.g. bagging or boosting
- G06V10/776—Validation; Performance evaluation
- G06T2207/20076—Probabilistic image processing
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
Description
- the present disclosure relates to the field of computer technology, and in particular to a neural network training method and device, and an image generation method and device.
- in related technologies, a Generative Adversarial Network (GAN) is composed of two modules, namely a discriminator and a generator. Inspired by the zero-sum game, the two networks compete with each other to achieve the best generation effect.
- the discriminator learns to distinguish between real image data and simulated images generated by the generation network by rewarding true targets and penalizing false targets.
- the generator gradually reduces the penalty imposed by the discriminator on false targets, until the discriminator can no longer tell generated images apart from real images.
- the two play against each other and evolve together, finally achieving the effect of passing generated images off as real.
- in the related art, the discrimination network of the generative adversarial network outputs a single scalar to describe the authenticity of the input picture; this scalar is then used to calculate the network loss, and the generative adversarial network is trained on that loss.
- the present disclosure proposes a neural network training method and device, and an image generation method and device.
- a neural network training method including:
- a first random vector is input into a generating network to obtain a first generated image;
- the first generated image and a first real image are respectively input into a discriminant network to obtain a first discriminant distribution of the first generated image and a second discriminant distribution of the first real image, wherein the first discriminant distribution represents the probability distribution of the degree of realness of the first generated image, and the second discriminant distribution represents the probability distribution of the degree of realness of the first real image;
- a first network loss of the discriminant network is determined according to the first discriminant distribution, the second discriminant distribution, a preset first target distribution, and a preset second target distribution, wherein the first target distribution is the target probability distribution of the generated image, and the second target distribution is the target probability distribution of the real image;
- a second network loss of the generating network is determined according to the first discriminant distribution and the second discriminant distribution;
- according to the first network loss and the second network loss, the generating network and the discriminant network are adversarially trained.
- in this way, the discriminant network can output a discriminant distribution of the input image, describing the authenticity of the input image in the form of a probability distribution, i.e. the probability that the input image is real as judged from dimensions such as color, texture, proportion, and background. Considering the authenticity of the input image from many aspects reduces information loss, provides more comprehensive supervision information and a more accurate training direction for neural network training, improves training accuracy, and ultimately improves the quality of the generated images, so that the generating network can be adapted to generating high-definition images.
- the target probability distribution of the generated image and the target probability distribution of the real image are preset to guide the training process.
- during training, the real image and the generated image are guided to approach their respective target probability distributions, which increases the distinction between the real image and the generated image and enhances the ability of the discriminant network to distinguish between them, thereby improving the quality of the images generated by the generating network.
- in a possible implementation, determining the first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, the preset first target distribution, and the preset second target distribution includes: determining a first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution; determining a second distribution loss of the first real image according to the second discriminant distribution and the second target distribution; and determining the first network loss according to the first distribution loss and the second distribution loss.
- in this way, the target probability distribution of the generated image and the target probability distribution of the real image are preset to guide the training process, and the respective distribution losses are determined separately. During training, the real image and the generated image are guided to approach their respective target probability distributions, which increases the distinction between real images and generated images, provides more accurate guidance and a more accurate training direction for the discriminant network, and enhances its ability to distinguish between real images and generated images, thereby improving the quality of the images generated by the generating network.
- determining the first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution includes:
- mapping the first discriminant distribution onto the support set of the first target distribution to obtain a first mapping distribution; determining a first relative entropy of the first mapping distribution and the first target distribution; and determining the first distribution loss according to the first relative entropy.
- determining the second distribution loss of the first real image according to the second discriminant distribution and the second target distribution includes:
- mapping the second discriminant distribution onto the support set of the second target distribution to obtain a second mapping distribution; determining a second relative entropy of the second mapping distribution and the second target distribution; and determining the second distribution loss according to the second relative entropy.
- determining the first network loss according to the first distribution loss and the second distribution loss includes: performing weighted summation on the first distribution loss and the second distribution loss to obtain the first network loss.
- determining the second network loss of the generating network according to the first discriminant distribution and the second discriminant distribution includes:
- determining a third relative entropy of the first discriminant distribution and the second discriminant distribution; and determining the second network loss according to the third relative entropy.
- in this way, the generating network can be trained by reducing the difference between the first discriminant distribution and the second discriminant distribution, so that while the performance of the discriminant network improves, the performance of the generating network is promoted as well, producing more realistic generated images and making the generating network suitable for generating high-definition images.
- adversarial training of the generation network and the discriminant network includes:
- adjusting the network parameters of the discriminant network according to the first network loss; adjusting the network parameters of the generating network according to the second network loss; and obtaining the trained generating network and discriminant network in the case that the discriminant network and the generating network satisfy a training condition.
- in a possible implementation, adjusting the network parameters of the discriminant network according to the first network loss includes: inputting a second random vector into the generating network to obtain a second generated image; performing interpolation processing on a second real image according to the second generated image to obtain an interpolated image; inputting the interpolated image into the discriminant network to obtain a third discriminant distribution of the interpolated image; determining the gradient of the network parameters of the discriminant network according to the third discriminant distribution; determining a gradient penalty parameter according to the third discriminant distribution in the case where the gradient is greater than a gradient threshold; and adjusting the network parameters of the discriminant network according to the first network loss and the gradient penalty parameter.
- in this way, the gradient descent speed of the discriminant network during training can be limited, thereby limiting the training progress of the discriminant network and reducing the probability of its gradient vanishing, so as to continuously optimize the generating network, improve its performance, and make the images it generates more realistic and suitable for generating high-definition images.
- adversarial training of the generation network and the discriminant network includes:
- the first generated image corresponding to the first random vector input into the generating network in the at least one historical training period, the at least one third generated image, and at least one real image are respectively input into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image;
- in this way, the gradient descent speed of the discriminant network during training can be limited, thereby limiting the training progress of the discriminant network and reducing the probability of its gradient vanishing, so as to continuously optimize the generating network, improve its performance, and make the images it generates more realistic and suitable for generating high-definition images.
- determining the training progress parameters of the generation network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution includes:
- the ratio of the first difference value to the second difference value is determined as the training progress parameter of the generating network of the current training period.
- an image generation method including:
- a third random vector is input into the generating network obtained after training for processing to obtain a target image.
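A hypothetical usage sketch of this step, assuming a PyTorch-style trained generating network named trained_generator and a latent size of 128 (both names and sizes are assumptions for illustration):

```python
# Assumed usage of a trained generating network; trained_generator and the
# latent size 128 are placeholders, not names fixed by the disclosure.
import torch

z = torch.randn(1, 128)               # third random vector
target_image = trained_generator(z)   # forward pass through the trained generating network
```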
- a neural network training device including:
- a generating module, which is used to input a first random vector into a generating network to obtain a first generated image;
- a discrimination module, which is used to respectively input the first generated image and a first real image into a discriminant network to obtain a first discriminant distribution of the first generated image and a second discriminant distribution of the first real image, wherein the first discriminant distribution represents the probability distribution of the degree of realness of the first generated image, and the second discriminant distribution represents the probability distribution of the degree of realness of the first real image;
- a first determining module, which is configured to determine a first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, a preset first target distribution, and a preset second target distribution, wherein the first target distribution is the target probability distribution of the generated image, and the second target distribution is the target probability distribution of the real image;
- a second determining module configured to determine the second network loss of the generating network according to the first discriminant distribution and the second discriminant distribution
- a training module, which is used to adversarially train the generating network and the discriminant network according to the first network loss and the second network loss.
- the first determining module is further configured to:
- the first determining module is further configured to:
- mapping the first discriminant distribution to the support set of the first target distribution to obtain a first mapping distribution
- the first distribution loss is determined.
- the first determining module is further configured to:
- the second distribution loss is determined.
- the first determining module is further configured to:
- the second determining module is further configured to:
- the second network loss is determined.
- the training module is further configured to:
- the trained generating network and the discriminant network are obtained.
- the training module is further configured to:
- the training module is further configured to:
- the first generated image corresponding to the first random vector input into the generating network in the at least one historical training period, the at least one third generated image, and at least one real image are respectively input into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image;
- the training module is further configured to:
- the ratio of the first difference value to the second difference value is determined as the training progress parameter of the generating network of the current training period.
- an image generation device which includes:
- the obtaining module is configured to input the third random vector into the generating network obtained after training for processing to obtain a target image.
- an electronic device including:
- a processor; and a memory for storing processor-executable instructions;
- wherein the processor is configured to execute the above method.
- a computer-readable storage medium having computer program instructions stored thereon, and the computer program instructions implement the above method when executed by a processor.
- a computer program including computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above-mentioned method.
- Fig. 1 shows a flowchart of a neural network training method according to an embodiment of the present disclosure
- Fig. 2 shows a schematic diagram of the application of a neural network training method according to an embodiment of the present disclosure
- Fig. 3 shows a block diagram of a neural network training device according to an embodiment of the present disclosure
- Fig. 4 shows a block diagram of an electronic device according to an embodiment of the present disclosure
- Fig. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 1 shows a flowchart of a neural network training method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
- step S11: a first random vector is input into the generating network to obtain a first generated image;
- step S12: the first generated image and a first real image are respectively input into a discriminant network to obtain a first discriminant distribution of the first generated image and a second discriminant distribution of the first real image, wherein the first discriminant distribution represents the probability distribution of the degree of realness of the first generated image, and the second discriminant distribution represents the probability distribution of the degree of realness of the first real image;
- step S13: a first network loss of the discriminant network is determined according to the first discriminant distribution, the second discriminant distribution, a preset first target distribution, and a preset second target distribution, wherein the first target distribution is the target probability distribution of the generated image, and the second target distribution is the target probability distribution of the real image;
- step S14: a second network loss of the generating network is determined according to the first discriminant distribution and the second discriminant distribution;
- step S15: according to the first network loss and the second network loss, the generating network and the discriminant network are adversarially trained.
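To make steps S11 to S15 concrete, the following is a minimal sketch of one adversarial training iteration, assuming a PyTorch-style API; the generator and discriminator modules, the latent size, the number of distribution bins K, and the particular preset target distributions are illustrative assumptions, not details fixed by the disclosure. The discriminator is assumed to end in a K-way softmax so that its output is a discriminant distribution rather than a single scalar.

```python
import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-8):
    # relative entropy KL(p || q), averaged over the batch
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1).mean()

K, latent_dim, batch = 8, 128, 16
target_fake = F.softmax(torch.linspace(3., -3., K), dim=0)   # preset first target distribution (generated images)
target_real = F.softmax(torch.linspace(-3., 3., K), dim=0)   # preset second target distribution (real images)

def train_step(generator, discriminator, opt_g, opt_d, real_images):
    z = torch.randn(batch, latent_dim)                    # step S11: first random vector
    fake_images = generator(z)                            # first generated image

    # step S12: discriminant distributions (softmax outputs of shape (batch, K))
    d_fake = discriminator(fake_images)                   # first discriminant distribution
    d_real = discriminator(real_images)                   # second discriminant distribution

    # step S13: first network loss of the discriminant network
    loss_d = kl(d_fake, target_fake.expand_as(d_fake)) + kl(d_real, target_real.expand_as(d_real))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # step S14: second network loss of the generating network
    d_fake = discriminator(generator(z))                  # recomputed after the discriminator update
    d_real = discriminator(real_images).detach()
    loss_g = kl(d_fake, d_real)
    opt_g.zero_grad()
    loss_g.backward()                                     # step S15: alternating adversarial update
    opt_g.step()
```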
- in this way, the discriminant network can output a discriminant distribution of the input image, describing the authenticity of the input image in the form of a probability distribution, i.e. the probability that the input image is real as judged from dimensions such as color, texture, proportion, and background. Considering the authenticity of the input image from many aspects reduces information loss, provides more comprehensive supervision information and a more accurate training direction for neural network training, improves training accuracy, and ultimately improves the quality of the generated images, so that the generating network can be adapted to generating high-definition images.
- the target probability distribution of the generated image and the target probability distribution of the real image are preset to guide the training process.
- during training, the real image and the generated image are guided to approach their respective target probability distributions, which increases the distinction between the real image and the generated image and enhances the ability of the discriminant network to distinguish between them, thereby improving the quality of the images generated by the generating network.
- in a possible implementation, the neural network training method may be executed by a terminal device or other processing equipment, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
- Other processing devices can be servers or cloud servers.
- the neural network training method can be implemented by a processor calling computer-readable instructions stored in a memory.
- in a possible implementation, the neural network may be a generative adversarial network composed of a generating network and a discriminant network.
- the generation network may be a deep learning neural network such as a convolutional neural network, and the present disclosure does not limit the type and structure of the generation network.
- the discriminant network may be a deep learning neural network such as a convolutional neural network, and the present disclosure does not limit the type and structure of the discriminant network.
- the generation network can process the random vector to obtain the generated image.
- the random vector can be a vector with random numbers for each element, and can be obtained by random sampling and other methods.
- the first random vector may be obtained by random sampling or the like, and the generation network may perform processing such as convolution on the first random vector to obtain a first generated image corresponding to the first random vector.
- the first random vector is a randomly generated vector, therefore, the first generated image is a random image.
- in a possible implementation, the first real image may be any real image, for example, a real image captured by an image acquisition device (for example, a camera or a video camera).
- the first real image and the first generated image can be input into the discriminant network respectively, and the first discriminant distribution of the first generated image and the second discriminant distribution of the first real image are obtained respectively.
- the discriminant distribution can be a parameter in the form of a vector, for example, a probability distribution can be expressed in the form of a vector.
- the first discriminant distribution may indicate the degree of authenticity of the first generated image, that is, the probability that the first generated image is a real image may be described by the first discriminant distribution.
- the second discriminant distribution may indicate the degree of reality of the first real image, that is, the probability that the first real image is a real image can be described by the second discriminant distribution.
- the authenticity of the image is described in the form of a distribution (such as a multi-dimensional vector).
- the authenticity of the image can be considered from multiple aspects such as color, texture, proportion, background, etc., to reduce information loss and provide accurate training directions for training.
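As a toy illustration of the difference between the related-art scalar output and a discriminant distribution, the following Python snippet contrasts the two forms; the values and the bin count K=4 are assumptions made for the example.

```python
# Toy contrast between a single scalar realness score and a discriminant
# distribution (the values and the bin count K=4 are illustrative assumptions).
import torch
import torch.nn.functional as F

scalar_score = torch.tensor(0.73)                      # related-art discriminator: one number per image
logits = torch.tensor([0.2, 1.5, -0.3, 0.8])           # assumed K=4 raw outputs of the discriminant network
discriminant_distribution = F.softmax(logits, dim=0)   # probability distribution over realness bins
print(discriminant_distribution, discriminant_distribution.sum())  # the bins sum to 1
```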
- the target probability distribution of the real image (ie, the second target distribution) and the target probability distribution of the generated image (ie, the first target distribution) can be preset.
- in a possible implementation, the network loss corresponding to the generated image and the network loss corresponding to the real image can be determined according to the target probability distribution of the generated image and the target probability distribution of the real image, and these two losses can be used to adjust the parameters of the discriminant network so that the second discriminant distribution of the real image is close to the second target distribution and clearly different from the first target distribution, and the first discriminant distribution of the generated image is close to the first target distribution and clearly different from the second target distribution. This increases the degree of discrimination between real images and generated images, enhances the ability of the discriminant network to distinguish them, and thereby improves the quality of the images generated by the generating network.
- in an example, the anchor distribution of the generated image (i.e., the first target distribution) and the anchor distribution of the real image (i.e., the second target distribution) can be preset such that the vector representing the anchor distribution of the generated image differs significantly from the vector representing the anchor distribution of the real image.
- the network parameters of the discriminant network can be adjusted to reduce the difference between the first discriminant distribution and the anchor distribution of the generated image. In this process, the difference between the first discriminant distribution and the anchor distribution of the real image will increase.
- the difference between the second discriminant distribution and the anchor distribution of the real image is also reduced.
- the difference between the second discriminant distribution and the anchor distribution of the generated image will increase. That is, the anchor distributions are respectively preset for the real image and the generated image, so that the distribution difference between the real image and the generated image is increased, so as to improve the distinguishing ability of the discrimination network between the real image and the generated image.
- step S13 may include: determining the first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution; according to the second discriminant distribution and The second target distribution determines the second distribution loss of the first real image; the first network loss is determined according to the first distribution loss and the second distribution loss.
- the first target distribution is an accurate probability distribution, and the difference between the first target distribution and the first discriminant distribution can be determined, so as to determine the first distribution loss.
- the network loss corresponding to the first generated image may be determined according to the first discriminant distribution and the first target distribution.
- in a possible implementation, determining the first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution includes: mapping the first discriminant distribution onto the support set of the first target distribution to obtain a first mapping distribution; determining a first relative entropy of the first mapping distribution and the first target distribution; and determining the first distribution loss according to the first relative entropy.
- in a possible implementation, the support sets of the first discriminant distribution and the first target distribution may be different, that is, the distribution range of the first discriminant distribution differs from that of the first target distribution.
- the first discriminant distribution can be mapped to the support set of the first target distribution, or the first target distribution can be mapped to the support set of the first discriminant distribution.
- in an example, the first discriminant distribution can be projected by means of a linear transformation; for example, a projection matrix can be used to map the first discriminant distribution onto the support set of the first target distribution, that is, the vector of the first discriminant distribution can be linearly transformed, and the vector obtained after the transformation is the first mapping distribution on the support set of the first target distribution.
- in a possible implementation, the first relative entropy between the first mapping distribution and the first target distribution may be determined; the first relative entropy represents the difference between the two probability distributions on the same support set, i.e., the difference between the first mapping distribution and the first target distribution.
- the difference between the first mapping distribution and the first target distribution may also be determined by other methods such as JS divergence (Jensen-Shannon divergence) or Wasserstein distance.
- the first distribution loss (that is, the network loss corresponding to the generated image) may be determined according to the first relative entropy.
- in an example, the first relative entropy may be determined as the first distribution loss, or the first relative entropy may be further processed, for example weighted, taken as a logarithm, or exponentiated, to obtain the first distribution loss. The present disclosure does not limit the method for determining the first distribution loss.
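A minimal sketch of this first distribution loss, assuming a PyTorch-style API in which the linear mapping onto the target's support set is given by an assumed projection matrix P and the relative entropy itself is taken as the loss:

```python
# Project the first discriminant distribution onto the support set of the first
# target distribution with an assumed projection matrix P, then use the relative
# entropy to the target as the first distribution loss.
import torch

def first_distribution_loss(d_fake, target_fake, P, eps=1e-8):
    """d_fake: (batch, K_d) first discriminant distributions;
    target_fake: (K_t,) preset first target distribution;
    P: (K_d, K_t) projection matrix mapping onto the target's support set."""
    mapped = (d_fake @ P).clamp_min(0)
    mapped = mapped / mapped.sum(dim=-1, keepdim=True)           # first mapping distribution (renormalised)
    first_relative_entropy = (mapped * ((mapped + eps).log()
                                        - (target_fake + eps).log())).sum(dim=-1).mean()
    return first_relative_entropy                                 # here the relative entropy is the loss itself
```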
- the second target distribution is an accurate probability distribution, and the difference between the second target distribution and the second discriminant distribution can be determined, so as to determine the second distribution loss.
- the network loss corresponding to the first real image can be determined according to the second discriminant distribution and the second target distribution.
- in a possible implementation, determining the second distribution loss of the first real image according to the second discriminant distribution and the second target distribution includes: mapping the second discriminant distribution onto the support set of the second target distribution to obtain a second mapping distribution; determining a second relative entropy of the second mapping distribution and the second target distribution; and determining the second distribution loss according to the second relative entropy.
- in a possible implementation, the support sets of the second discriminant distribution and the second target distribution may be different, that is, the distribution range of the second discriminant distribution differs from that of the second target distribution.
- the second discriminant distribution can be mapped to the support set of the second target distribution, or the second target distribution can be mapped to the support set of the second discriminant distribution, or the second discriminant distribution and the second target distribution can be mapped to the same support set , So that the distribution range of the second discriminant distribution is the same as the distribution range of the second target distribution, and the difference between the two probability distributions can be compared in the same distribution range.
- in an example, the second discriminant distribution can be projected by means of a linear transformation; for example, a projection matrix can be used to map the second discriminant distribution onto the support set of the second target distribution, that is, the vector of the second discriminant distribution can be linearly transformed, and the vector obtained after the transformation is the second mapping distribution on the support set of the second target distribution.
- the second relative entropy of the second mapping distribution and the second target distribution may be determined, and the second relative entropy may represent the difference between the two probability distributions in the same support set (ie, the second mapping The difference between the distribution and the second target distribution).
- the calculation method of the second relative entropy is similar to the first relative entropy, and will not be repeated here.
- the difference between the second mapping distribution and the second target distribution can also be determined by other methods such as JS divergence (Jensen-Shannon divergence) or Wasserstein distance.
- in a possible implementation, the second distribution loss (that is, the network loss corresponding to the real image) may be determined according to the second relative entropy.
- in an example, the second relative entropy may be determined as the second distribution loss, or the second relative entropy may be further processed, for example weighted, taken as a logarithm, or exponentiated, to obtain the second distribution loss.
- the present disclosure does not limit the determination method of the second distribution loss.
- in a possible implementation, the first network loss may be determined according to the first distribution loss of the first generated image and the second distribution loss of the first real image.
- determining the first network loss according to the first distribution loss and the second distribution loss includes: performing a weighted sum process on the first distribution loss and the second distribution loss to obtain the The first network loss.
- the weights of the first distribution loss and the second distribution loss can be the same, that is, the first network loss can be obtained by directly summing the first distribution loss and the second distribution loss.
- the weights of the first distribution loss and the second distribution loss may be different, that is, the first distribution loss and the second distribution loss are respectively multiplied by their respective weights and then summed to obtain the first network loss.
- the weights of the first distribution loss and the second distribution loss may be preset, and the present disclosure does not limit the weights of the first distribution loss and the second distribution loss.
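A small sketch of the weighted summation, with assumed default weight values; equal weights reduce to the plain sum mentioned above:

```python
# Weighted summation of the two distribution losses into the first network loss
# (the default weights are assumptions; equal weights give a plain sum).
def first_network_loss(first_distribution_loss, second_distribution_loss,
                       w_generated=1.0, w_real=1.0):
    return w_generated * first_distribution_loss + w_real * second_distribution_loss
```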
- in this way, the target probability distribution of the generated image and the target probability distribution of the real image are preset to guide the training process, and the respective distribution losses are determined separately. During training, the real image and the generated image are guided to approach their respective target probability distributions, which increases the distinction between real images and generated images, provides more accurate guidance and a more accurate training direction for the discriminant network, and enhances its ability to distinguish between real images and generated images, thereby improving the quality of the images generated by the generating network.
- the second network loss of the generating network can also be determined.
- in a possible implementation, the discriminant network needs to discriminate whether the input image is a real image or a generated image. Therefore, during training the discriminant network enhances its ability to distinguish between real images and generated images, that is, it makes the discriminant distributions of the real image and the generated image approach their respective target probability distributions, thereby increasing the degree of discrimination between the real image and the generated image.
- the goal of the generation network is to make the generated image close to the real image, that is, to make the generated image realistic enough to make it difficult for the discrimination network to distinguish the generated image output by the generation network.
- when both the discriminant network and the generating network perform well, the discriminant network has a strong discriminative ability and can distinguish real images from low-fidelity generated images, while the images produced by the generating network are so realistic that it is difficult for the discriminant network to distinguish high-quality generated images.
- the performance improvement of the discrimination network can promote the performance improvement of the generation network, that is, the stronger the ability of the discrimination network to distinguish the real image and the generated image, the higher the fidelity of the image generated by the generation network.
- step S14 may include: determining a third relative entropy of the first discriminant distribution and the second discriminant distribution; and determining the second network loss according to the third relative entropy.
- in a possible implementation, the third relative entropy of the first discriminant distribution and the second discriminant distribution can be determined; the third relative entropy represents the difference between the two probability distributions on the same support set (i.e., the difference between the third mapping distribution and the fourth mapping distribution).
- the calculation method of the third relative entropy is similar to that of the first relative entropy, and will not be repeated here.
- the difference between the first discriminant distribution and the second discriminant distribution can also be determined by other methods such as JS divergence (Jensen-Shannon divergence) or Wasserstein distance, so as to determine the network loss of the generated network through the difference between the two .
- the second network loss may be determined according to the third relative entropy.
- the third relative entropy can be determined as the second network loss, or the third relative entropy can be calculated, for example, the third relative entropy can be weighted, logarithm, exponent, etc., to obtain the second network loss.
- the present disclosure does not limit the method for determining the loss of the second network.
- the support sets of the first discriminant distribution and the second discriminant distribution are different, that is, the distribution ranges of the first discriminant distribution and the second discriminant distribution may be different.
- the support set of the first discriminant distribution and the second discriminant distribution can be overlapped by linear transformation.
- in a possible implementation, the first discriminant distribution and the second discriminant distribution can be mapped onto a target support set, so that the two distributions have the same distribution range and the difference between the two probability distributions can be compared over that range.
- the target support set is the support set of the first discriminant distribution or the support set of the second discriminant distribution.
- in an example, the second discriminant distribution can be mapped onto the support set of the first discriminant distribution by a linear transformation, that is, the vector of the second discriminant distribution can be linearly transformed; the vector obtained after the transformation is the fourth mapping distribution on the support set of the first discriminant distribution, and the first discriminant distribution itself is used as the third mapping distribution.
- alternatively, the first discriminant distribution can be mapped onto the support set of the second discriminant distribution by a linear transformation, that is, the vector of the first discriminant distribution can be linearly transformed; the vector obtained after the transformation is the third mapping distribution on the support set of the second discriminant distribution, and the second discriminant distribution itself is used as the fourth mapping distribution.
- the target support set can also be other support sets.
- a support set can be preset, and both the first discriminant distribution and the second discriminant distribution can be mapped onto that support set to obtain the third mapping distribution and the fourth mapping distribution. Further, the third relative entropy of the third mapping distribution and the fourth mapping distribution can be calculated.
- the present disclosure does not limit the target support set.
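A minimal sketch of the second network loss under these choices, assuming PyTorch and assumed projection matrices P_fake and P_real that map the two discriminant distributions onto a common target support set:

```python
# Both discriminant distributions are mapped onto a common target support set via
# assumed projection matrices, and their relative entropy (the third relative
# entropy) is used as the generating network's loss.
import torch

def second_network_loss(d_fake, d_real, P_fake, P_real, eps=1e-8):
    def project(d, P):
        m = (d @ P).clamp_min(0)
        return m / m.sum(dim=-1, keepdim=True)
    third_mapping = project(d_fake, P_fake)             # first discriminant distribution on the target support set
    fourth_mapping = project(d_real, P_real).detach()   # real-image distribution treated as a fixed target here
    return (third_mapping * ((third_mapping + eps).log()
                             - (fourth_mapping + eps).log())).sum(dim=-1).mean()
```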
- in this way, the generating network can be trained by reducing the difference between the first discriminant distribution and the second discriminant distribution, so that while the performance of the discriminant network improves, the performance of the generating network is promoted as well, producing more realistic generated images and making the generating network suitable for generating high-definition images.
- in a possible implementation, the generating network and the discriminant network can be adversarially trained according to the first network loss of the discriminant network and the second network loss of the generating network. That is, through training, the performance of the generating network and the discriminant network is improved simultaneously: the discriminative ability of the discriminant network is improved, the ability of the generating network to generate images with higher fidelity is improved, and the generating network and the discriminant network reach a balanced state.
- in a possible implementation, step S15 may include: adjusting the network parameters of the discriminant network according to the first network loss; adjusting the network parameters of the generating network according to the second network loss; and, in the case that the discriminant network and the generating network satisfy a training condition, obtaining the trained generating network and discriminant network.
- in a possible implementation, the training progress of the discriminant network is usually ahead of that of the generating network. If the discriminant network progresses faster and finishes training in advance, it can no longer provide the generating network with gradients during back-propagation, so the parameters of the generating network cannot be updated and its performance cannot be improved. As a result, the quality of the images generated by the generating network is limited, the network is not suitable for generating high-definition images, and the fidelity is low.
- in a possible implementation, adjusting the network parameters of the discriminant network according to the first network loss includes: inputting a second random vector into the generating network to obtain a second generated image; performing interpolation processing on a second real image according to the second generated image to obtain an interpolated image; inputting the interpolated image into the discriminant network to obtain a third discriminant distribution of the interpolated image; determining the gradient of the network parameters of the discriminant network according to the third discriminant distribution; in the case where the gradient is greater than a gradient threshold, determining a gradient penalty parameter according to the third discriminant distribution; and adjusting the network parameters of the discriminant network according to the first network loss and the gradient penalty parameter.
- the second random vector may be obtained through random sampling or the like, and input into the generating network to obtain the second generated image, that is, to obtain an unreal image.
- the second generated image can also be obtained in other ways, for example, a non-real image can be directly generated randomly.
- in a possible implementation, the second generated image and the second real image may be subjected to interpolation processing to obtain an interpolated image, that is, the interpolated image is a composite of the real image and the non-real image, containing partly real image content and partly non-real image content.
- random nonlinear interpolation may be performed on the second real image and the second generated image to obtain the interpolated image.
- the present disclosure does not limit the method of obtaining the interpolated image.
- the interpolated image can be input to the discriminant network to obtain the third discriminant distribution of the interpolated image, that is, the discriminant network can perform discrimination processing on the composite image of the real image and the unreal image to obtain the third discriminant distributed.
- the third discriminant distribution can be used to determine the gradient of the network parameters of the discriminant network.
- in an example, a target probability distribution of the interpolated image can be preset (for example, a target probability distribution under which the probability that the interpolated image is a real image is 50%), and the relative entropy between the third discriminant distribution and this target probability distribution can be used to determine the gradient of the network parameters of the discriminant network.
- in an example, the relative entropy between the third discriminant distribution and the target probability distribution can be back-propagated, and the partial derivative of the relative entropy with respect to each network parameter of the discriminant network can be calculated to obtain the gradient of that network parameter.
- other types of differences such as the JS divergence of the third discriminant distribution and the target probability distribution can also be used to determine the parameter gradient of the discriminant network.
- the gradient penalty parameter can be determined according to the third discriminant distribution.
- in a possible implementation, the gradient threshold can be a threshold that limits the gradient. If the gradient is large, the gradient may descend faster during training (that is, the training step is larger and the network loss approaches its minimum faster); therefore, the gradient can be restricted by the gradient threshold.
- the gradient threshold may be set to 10, 20, etc., and the present disclosure does not limit the gradient threshold.
- the gradient penalty parameter is used to adjust the gradient of the network parameter that exceeds the gradient threshold, or limit the gradient descent speed, so that the gradient of the parameter is smoother and the gradient descent speed is slowed down.
- the gradient penalty parameter can be determined according to the expected value of the third discriminant distribution.
- the gradient penalty parameter can be a compensation parameter for gradient descent.
- in an example, the gradient penalty parameter can be used to adjust the multiplier of the partial derivative, or to change the direction of gradient descent, so as to limit the gradient, thereby reducing the gradient descent speed of the network parameters of the discriminant network and preventing the gradient of the discriminant network from dropping too fast, which would cause the discriminant network to converge prematurely (that is, to finish training too quickly).
- in an example, the third discriminant distribution is a probability distribution, the expected value of which can be calculated, and the gradient penalty parameter can be determined according to that expected value. For example, the expected value can be used as the multiplier of the partial derivative of the network parameters, that is, the expected value is determined as the gradient penalty parameter, and the gradient penalty parameter is used as the gradient multiplier.
- the present disclosure does not limit the determination method of the gradient penalty parameter.
- the network parameters of the discrimination network can be adjusted according to the first network loss and gradient penalty parameters. That is, in the process of backpropagating the loss of the first network to make the gradient drop, the gradient penalty parameter is added to adjust the network parameters of the discriminant network while preventing the gradient from dropping too fast, that is, preventing the discriminant network from being trained prematurely.
- in an example, the gradient penalty parameter can be used as the multiplier of the partial derivative, that is, the multiplier of the gradient, so as to slow down the gradient descent speed and prevent the discriminant network from completing training too early.
- the network parameters of the discriminant network can be adjusted according to the first network loss, that is, the first network loss is back-propagated and gradient descent reduces the first network loss.
- in an example, when adjusting the network parameters of the discriminant network, it is possible to check whether the gradient of the discriminant network is greater than or equal to the gradient threshold and to set the gradient penalty parameter when it is. It is also possible not to check the gradient and instead to control the training progress of the discriminant network in other ways (for example, by suspending the adjustment of the network parameters of the discriminant network and only adjusting the network parameters of the generating network).
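The following is a hedged sketch of the gradient penalty step, assuming PyTorch; the linear per-sample interpolation, the uniform distribution used as the "50% real" target, the gradient threshold value, and the use of the expected value as a multiplicative penalty are all assumptions made for illustration, since the disclosure leaves these choices open (for example, it also allows random nonlinear interpolation).

```python
# Hedged sketch of the gradient penalty step (assumed PyTorch API; interpolation
# scheme, "50% real" target, threshold, and penalty form are all assumptions).
import torch

def kl(p, q, eps=1e-8):
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1).mean()

def discriminator_gradient_penalty(generator, discriminator, real_images,
                                   latent_dim=128, grad_threshold=10.0):
    z = torch.randn(real_images.size(0), latent_dim)          # second random vector
    fake_images = generator(z).detach()                       # second generated image
    alpha = torch.rand(real_images.size(0), 1, 1, 1)          # per-sample mixing weight (linear interpolation)
    interp = (alpha * real_images + (1 - alpha) * fake_images).detach().requires_grad_(True)

    d_interp = discriminator(interp)                          # third discriminant distribution, shape (batch, K)
    K = d_interp.size(-1)
    target = torch.full((K,), 1.0 / K)                        # assumed target: uniform, i.e. "50% real" interpolation
    grads = torch.autograd.grad(kl(d_interp, target.expand_as(d_interp)),
                                interp, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(dim=1).mean()

    if grad_norm > grad_threshold:
        bins = torch.arange(K, dtype=torch.float32)
        penalty = (d_interp * bins).sum(dim=-1).mean()        # expected value of the third discriminant distribution
        return penalty * grad_norm                            # added to the first network loss before back-propagation
    return torch.zeros(())
```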
- in this way, the gradient descent speed of the discriminant network during training can be limited, thereby limiting the training progress of the discriminant network and reducing the probability of its gradient vanishing, so as to continuously optimize the generating network, improve its performance, and make the images it generates more realistic and suitable for generating high-definition images.
- the network parameters of the generation network can be adjusted according to the second network loss.
- in an example, the second network loss is back-propagated for gradient descent so that the second network loss is reduced, thereby improving the performance of the generating network.
- in a possible implementation, the discriminant network and the generating network can be adversarially trained.
- in an example, when the network parameters of the discriminant network are adjusted through the first network loss, the network parameters of the generating network remain unchanged; and when the network parameters of the generating network are adjusted through the second network loss, the network parameters of the discriminant network remain unchanged.
- the above training process can be performed iteratively until the discrimination network and the generation network meet the training conditions.
- the training conditions include the discrimination network and the generation network reaching a balanced state.
- for example, the network losses of the discriminant network and the generating network are less than or equal to a preset threshold, or converge to a preset interval.
- in an example, the training conditions include the following two conditions for reaching a balanced state: first, the network loss of the generating network is less than or equal to a preset threshold or converges to a preset interval; second, the probability, represented by the discriminant distribution output by the discriminant network, that the input image is a real image is maximized. At this point the discriminant network has a strong ability to distinguish real images from generated images, and the images produced by the generating network have higher quality and fidelity.
- the training progress of the discriminant network can also be controlled to reduce the probability of gradient disappearance of the discriminant network.
- in a possible implementation, step S15 may include: inputting the first random vector that was input into the generating network in at least one historical training period into the generating network of the current training period to obtain at least one third generated image; inputting the first generated image corresponding to that first random vector, the at least one third generated image, and at least one real image respectively into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image; determining a training progress parameter of the generating network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution; and, if the training progress parameter is less than or equal to a training progress threshold, stopping the adjustment of the network parameters of the discriminant network and only adjusting the network parameters of the generating network.
- in an example, a buffer area, for example an experience buffer, can be opened during the training process, in which the first random vectors of at least one (for example, M, where M is a positive integer) historical training period and the M first generated images generated by the generating network from those first random vectors can be stored; that is, in each historical training period a first generated image can be generated from a first random vector, and both can be stored.
- the first random vector of M historical training periods and the generated M first generated images can be stored in the buffer area.
- the first random vector and first generated image of the latest training cycle can be used to replace the first random vector and first generated image stored in the buffer area earliest.
- the first random vector input to the generating network in at least one historical training period may be input to the generating network of the current training period to obtain at least one third generated image.
- the m (m is less than or equal to M, and m is a positive integer) first random vectors are input to the generating network of the current training period, and m third generated images are obtained.
- the m third generated images may be separately subjected to the discrimination processing through the discriminant network of the current training period to obtain m fifth discriminant distributions.
- the first generated images of m historical training periods can be discriminated respectively through the discriminant network of the current training period to obtain m fourth discriminant distributions.
- in an example, m real images can be randomly sampled from a database, and the m real images can be respectively discriminated by the discriminant network of the current training period to obtain m sixth discriminant distributions.
- in an example, the training progress parameter of the generating network of the current training period can be determined according to the m fourth discriminant distributions, m fifth discriminant distributions, and m sixth discriminant distributions, that is, it can be determined whether the training progress of the discriminant network is significantly ahead of that of the generating network. If a significant lead is found, the training of the discriminant network is paused and the generating network is trained separately, so as to increase the training progress parameter of the generating network, speed up its progress, and reduce the difference in training progress between the two networks.
- in a possible implementation, determining the training progress parameter of the generating network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution includes: obtaining a first expected value of the at least one fourth discriminant distribution, a second expected value of the at least one fifth discriminant distribution, and a third expected value of the at least one sixth discriminant distribution; obtaining a first average value of the at least one first expected value, a second average value of the at least one second expected value, and a third average value of the at least one third expected value; determining a first difference between the third average value and the second average value, and a second difference between the second average value and the first average value; and determining the ratio of the first difference to the second difference as the training progress parameter of the generating network of the current training period.
- the expected values of the m fourth discriminant distributions can be calculated to obtain m first expected values, the expected values of the m fifth discriminant distributions can be calculated to obtain m second expected values, and the expected values of the m sixth discriminant distributions can be calculated to obtain m third expected values.
- the m first expected values can be averaged to obtain the first average value S_B, the m second expected values can be averaged to obtain the second average value S_G, and the m third expected values can be averaged to obtain the third average value S_R.
- the first difference between the third average value and the second average value (S_R - S_G) and the second difference between the second average value and the first average value (S_G - S_B) can then be determined.
- the ratio of the first difference to the second difference, (S_R - S_G)/(S_G - S_B), can be determined as the training progress parameter of the generation network for the current training period.
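- As an illustration of the computation just described, the sketch below assumes that each discriminant distribution is represented as a probability vector over a fixed support of realness scores; the function and parameter names are hypothetical, not taken from the disclosure.

```python
import torch


def expected_score(dist: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
    """Expected value of a discriminant distribution.
    dist:    (batch, K) probabilities over K realness bins
    support: (K,) score value of each bin
    """
    return (dist * support).sum(dim=-1)


def progress_parameter(d_old_fake, d_new_fake, d_real, support):
    """(S_R - S_G) / (S_G - S_B) as described in the text (illustrative only).
    d_old_fake: fourth discriminant distributions (historical first generated images)
    d_new_fake: fifth discriminant distributions (current generator, same random vectors)
    d_real:     sixth discriminant distributions (real images)
    """
    s_b = expected_score(d_old_fake, support).mean()  # first average value S_B
    s_g = expected_score(d_new_fake, support).mean()  # second average value S_G
    s_r = expected_score(d_real, support).mean()      # third average value S_R
    return (s_r - s_g) / (s_g - s_b)
```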
- alternatively, preset numbers of training iterations can be used to control the training schedule. For example, the generation network and the discriminant network can be trained together for 100 iterations, training of the discriminant network can then be suspended while the generation network is trained separately for 50 iterations, after which the two networks are again trained together for 100 iterations, and so on, until the generation network and the discriminant network meet the training conditions.
- a training progress threshold can be set.
- the training progress threshold is a threshold for judging the training progress of the generation network. If the training progress parameter is less than or equal to the training progress threshold, it indicates that the training progress of the discriminant network is significantly ahead of that of the generation network.
- the adjustment of the network parameters of the discrimination network can be suspended, and only the network parameters of the generation network can be adjusted.
- if the training progress parameter is greater than the training progress threshold, the network parameters of the discriminant network and the generation network can be adjusted at the same time. That is, the overall procedure is to pause the training of the discriminant network for at least one training period and train only the generation network (adjusting only the network parameters of the generation network according to the third network loss while keeping the network parameters of the discriminant network unchanged) until the training progress of the generation network approaches that of the discriminant network, and then to train the generation network and the discriminant network adversarially again.
- the training speed of the discriminant network can also be reduced, for example by extending the training period of the discriminant network or reducing its gradient descent speed; once the training progress parameter becomes greater than the training progress threshold, the training speed of the discriminant network can be restored.
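- A minimal sketch of this schedule, assuming PyTorch-style optimizers and zero-argument loss closures (all names illustrative), might look like the following:

```python
def adversarial_period(compute_d_loss, compute_g_loss,
                       d_optimizer, g_optimizer,
                       progress_param: float, progress_threshold: float) -> None:
    """One training period under the schedule described above (illustrative sketch).
    compute_d_loss / compute_g_loss are callables returning scalar loss tensors."""
    if progress_param > progress_threshold:
        # discriminant network is not far ahead: update it as usual
        d_optimizer.zero_grad()
        compute_d_loss().backward()
        d_optimizer.step()
    # the generation network is trained in either case; when the discriminant
    # network is far ahead, this amounts to pausing it and training G alone
    g_optimizer.zero_grad()
    compute_g_loss().backward()
    g_optimizer.step()
```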
- in this way, the gradient descent speed of the discriminant network during training can be limited, which limits the training progress of the discriminant network and reduces the probability of vanishing gradients in the discriminant network, so that the generation network can be continuously optimized, its performance improved, and the images it generates made more realistic and suitable for generating high-definition images.
- the generating network can be used to generate an image, and the generated image has a higher fidelity.
- the present disclosure also provides an image generation method that uses the generative adversarial network obtained through the above training to generate an image.
- an image generation method includes: obtaining a third random vector; and inputting the third random vector into the generation network obtained after training of the neural network training method described above for processing to obtain a target image.
- the third random vector can be obtained by random sampling or the like and then input into the trained generation network.
- the generation network can output target images with high fidelity.
- the target image may be a high-definition image, that is, the trained generation network may be suitable for generating a high-definition image with high fidelity.
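- For illustration only, a trained generation network could be used for inference roughly as follows; this is a usage sketch in PyTorch with an assumed latent dimension, and nothing in it is mandated by the disclosure.

```python
import torch


@torch.no_grad()
def generate_image(generator: torch.nn.Module, latent_dim: int,
                   device: str = "cpu") -> torch.Tensor:
    """Sample a third random vector and produce a target image with the trained
    generation network (the generator architecture itself is not specified here)."""
    generator.eval()
    z = torch.randn(1, latent_dim, device=device)  # third random vector
    return generator(z)                            # target image tensor
```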
- in this way, the discriminant network outputs a discriminant distribution for the input image, describing the authenticity of the input image in the form of a distribution. This considers the authenticity of the input image from multiple aspects, reduces information loss, provides more comprehensive supervision information and more accurate training directions for neural network training, improves training accuracy, and improves the quality of generated images, making the generation network suitable for generating high-definition images.
- furthermore, the target probability distribution of generated images and the target probability distribution of real images are preset to guide the training process, and the corresponding distribution losses are determined for each. The real images and the generated images are thereby guided to approach their respective target probability distributions, which increases the distinction between real images and generated images and enhances the ability of the discriminant network to tell them apart. The generation network is trained by reducing the difference between the first discriminant distribution and the second discriminant distribution, so that as the performance of the discriminant network improves, the performance of the generation network improves as well, producing generated images with higher fidelity and making the generation network suitable for generating high-definition images.
- in addition, the gradient descent speed of the discriminant network during training can be limited by checking whether the gradient of its network parameters is greater than or equal to the gradient threshold, or by checking the training progress of the discriminant network and the generation network. This limits the training progress of the discriminant network and reduces the probability of its gradient vanishing, so that the generation network can be continuously optimized, its performance improved, and the images it generates made more realistic and suitable for generating high-definition images.
- Fig. 2 shows an application schematic diagram of a neural network training method according to an embodiment of the present disclosure.
- a first random vector can be input to a generating network, and the generating network can output a first generated image.
- the discriminant network may perform discrimination processing on the first generated image and the first real image respectively, and obtain the first discriminant distribution of the first generated image and the second discriminant distribution of the first real image respectively.
- the anchor distribution of the generated image (that is, the first target distribution) and the anchor distribution of the real image (that is, the second target distribution) can be preset.
- the first distribution loss corresponding to the first generated image can be determined according to the first discriminant distribution and the first target distribution.
- the second distribution loss corresponding to the first real image can be determined.
- the first network loss of the discrimination network can be determined by the first distribution loss and the second distribution loss.
- the second network loss of the generation network can be determined from the first discriminant distribution and the second discriminant distribution. Further, the first network loss and the second network loss can be used to adversarially train the generation network and the discriminant network; that is, the network parameters of the discriminant network are adjusted through the first network loss, and the network parameters of the generation network are adjusted through the second network loss.
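- The loss construction summarized above can be sketched as follows. This sketch assumes every discriminant distribution and anchor (target) distribution is a discrete probability vector on a shared support, omits the mapping onto the anchor distribution's support set for brevity, and fixes a KL direction that the disclosure leaves unspecified; the function names and weights are assumptions.

```python
import torch


def kl(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Relative entropy KL(p || q) between discrete distributions along the last dim."""
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1).mean()


def discriminator_loss(d_fake, d_real, anchor_fake, anchor_real,
                       w_fake: float = 1.0, w_real: float = 1.0) -> torch.Tensor:
    """First network loss: weighted sum of two distribution losses, each a relative
    entropy between a discriminant distribution and its preset anchor distribution."""
    first_distribution_loss = kl(d_fake, anchor_fake)    # generated image vs. its anchor
    second_distribution_loss = kl(d_real, anchor_real)   # real image vs. its anchor
    return w_fake * first_distribution_loss + w_real * second_distribution_loss


def generator_loss(d_fake, d_real) -> torch.Tensor:
    """Second network loss: a relative entropy between the first and second discriminant
    distributions (the KL direction here is an assumption, not fixed by the text)."""
    return kl(d_real, d_fake)
```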
- during adversarial training, the training progress of the discriminant network is usually faster than that of the generation network; if the discriminant network gets too far ahead, the generation network cannot continue to be optimized.
- the training progress of the discriminant network can be controlled by detecting the gradient of the discriminant network.
- a real image and the generated image can be interpolated, the third discriminant distribution of the interpolated image can be determined by the discriminant network, and the gradient penalty parameter can then be determined according to the expected value of the third discriminant distribution.
- when the gradient of the discriminant network is greater than or equal to the preset gradient threshold, in order to prevent the gradient of the discriminant network from descending too fast and the discriminant network from being trained too quickly, a gradient penalty parameter can be added during the backpropagation of the first network loss to limit the gradient descent speed of the discriminant network.
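- A hedged sketch of such an interpolation-based gradient penalty is shown below. It follows the common gradient-penalty pattern and treats the expected value of the third discriminant distribution as the scalar to differentiate; the exact penalty form used in the disclosure may differ, and all names are illustrative.

```python
import torch


def gradient_penalty(discriminator, real_imgs, fake_imgs, support,
                     grad_threshold: float = 1.0) -> torch.Tensor:
    """Interpolate real and generated images and penalise the discriminator's gradient
    when it reaches the preset threshold (an illustrative sketch, not the disclosed code)."""
    alpha = torch.rand(real_imgs.size(0), 1, 1, 1, device=real_imgs.device)
    interp = (alpha * real_imgs + (1 - alpha) * fake_imgs).requires_grad_(True)

    third_dist = discriminator(interp)                    # third discriminant distribution
    expected = (third_dist * support).sum(dim=-1).sum()   # expected value, summed over the batch

    grads, = torch.autograd.grad(expected, interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)

    # only penalise when the gradient norm is at or above the preset gradient threshold
    penalty = torch.clamp(grad_norm - grad_threshold, min=0.0) ** 2
    return penalty.mean()
```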
- the training progress of the discriminant network and the generating network can also be checked.
- the M first random vectors input to the generating network in M historical training periods can be input into the generating network of the current training period to obtain M third generated images.
- according to the discriminant distributions obtained by the discriminant network of the current training period for these images, the training progress parameter of the generation network of the current training period is determined. If the training progress parameter is less than or equal to the training progress threshold, it indicates that the training progress of the discriminant network is significantly ahead of the generation network; the adjustment of the network parameters of the discriminant network can then be suspended, and only the network parameters of the generation network are adjusted.
- the generation network may be used to generate the target image, and the target image may be a high-definition image with relatively high fidelity.
- the neural network training method can enhance the stability of generative adversarial training and improve the quality and fidelity of the generated images. It can be applied to scenarios such as scene generation or synthesis in games, image style transfer or conversion, and image clustering.
- the present disclosure does not limit the usage scenarios of the neural network training method.
- Fig. 3 shows a block diagram of a neural network training device according to an embodiment of the present disclosure. As shown in Fig. 3, the device includes:
- the generating module 11 is configured to input the first random vector into the generating network to obtain the first generated image
- the discrimination module 12 is configured to input the first generated image and the first real image into a discrimination network respectively, and obtain the first discriminant distribution of the first generated image and the second discriminant distribution of the first real image, respectively.
- the first discriminant distribution represents the probability distribution of the real degree of the first generated image
- the second discriminant distribution represents the probability distribution of the real degree of the first real image
- the first determining module 13 is configured to determine the first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, the preset first target distribution, and the preset second target distribution, wherein, the first target distribution is the target probability distribution of the generated image, and the second target distribution is the target probability distribution of the real image;
- the second determining module 14 is configured to determine the second network loss of the generating network according to the first discriminant distribution and the second discriminant distribution;
- the training module 15 is configured to adversarially train the generation network and the discriminant network according to the first network loss and the second network loss.
- the first determining module is further configured to: determine a first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution; determine a second distribution loss of the first real image according to the second discriminant distribution and the second target distribution; and determine the first network loss according to the first distribution loss and the second distribution loss.
- the first determining module is further configured to: map the first discriminant distribution to the support set of the first target distribution to obtain a first mapping distribution; determine a first relative entropy between the first mapping distribution and the first target distribution; and determine the first distribution loss according to the first relative entropy.
- the first determining module is further configured to: map the second discriminant distribution to the support set of the second target distribution to obtain a second mapping distribution; determine a second relative entropy between the second mapping distribution and the second target distribution; and determine the second distribution loss according to the second relative entropy.
- the first determining module is further configured to: perform weighted summation of the first distribution loss and the second distribution loss to obtain the first network loss.
- the second determining module is further configured to: determine a third relative entropy between the first discriminant distribution and the second discriminant distribution; and determine the second network loss according to the third relative entropy.
- the training module is further configured to: adjust the network parameters of the discriminant network according to the first network loss; adjust the network parameters of the generation network according to the second network loss; and obtain the trained generation network and the trained discriminant network when the discriminant network and the generation network meet the training conditions.
- the training module is further configured to: input a second random vector into the generation network to obtain a second generated image; perform interpolation processing on a second real image according to the second generated image to obtain an interpolated image; input the interpolated image into the discriminant network to obtain a third discriminant distribution of the interpolated image; determine the gradient of the network parameters of the discriminant network according to the third discriminant distribution; determine a gradient penalty parameter according to the third discriminant distribution when the gradient is greater than or equal to the gradient threshold; and adjust the network parameters of the discriminant network according to the first network loss and the gradient penalty parameter.
- the training module is further configured to: input the first random vector that was input to the generation network in at least one historical training period into the generation network of the current training period to obtain at least one third generated image; input the first generated image corresponding to the first random vector input to the generation network in the at least one historical training period, the at least one third generated image, and at least one real image respectively into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image; determine the training progress parameter of the generation network of the current training period according to the fourth, fifth, and sixth discriminant distributions; and, when the training progress parameter is less than or equal to the training progress threshold, stop adjusting the network parameters of the discriminant network and adjust only the network parameters of the generation network.
- the training module is further configured to: obtain a first expected value of at least one fourth discriminant distribution, a second expected value of at least one fifth discriminant distribution, and a third expected value of at least one sixth discriminant distribution; obtain a first average value of the at least one first expected value, a second average value of the at least one second expected value, and a third average value of the at least one third expected value; determine a first difference between the third average value and the second average value and a second difference between the second average value and the first average value; and determine the ratio of the first difference to the second difference as the training progress parameter of the generation network of the current training period.
- the present disclosure also provides an image generation device that uses the generative adversarial network obtained through the above training to generate images.
- an image generation device includes: an acquisition module configured to obtain a third random vector; and an obtaining module configured to input the third random vector into the generation network obtained after training for processing to obtain a target image.
- the present disclosure also provides neural network training devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any neural network training method provided in the present disclosure.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- for specific implementation, refer to the description of the above method embodiments; for brevity, details are not repeated here.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a volatile computer-readable storage medium or a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 4 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and keypad of the electronic device 800. The sensor component 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- in an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components to perform the above methods.
- a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- the embodiments of the present disclosure also provide a computer program product, which includes computer-readable code; when the computer-readable code runs on a device, the processor in the device executes instructions for implementing the neural network training method provided in any of the above embodiments.
- the embodiments of the present disclosure also provide another computer program product for storing computer-readable instructions, which when executed, cause the computer to perform the operations of the image generation method provided by any of the foregoing embodiments.
- the above-mentioned computer program product can be specifically implemented by hardware, software, or a combination thereof.
- the computer program product is specifically embodied as a computer storage medium.
- in another optional embodiment, the computer program product is specifically embodied as a software product, such as a software development kit (SDK) or the like.
- Fig. 5 is a block diagram showing an electronic device 1900 according to an exemplary embodiment.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I/O) interface 1958 .
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove on which instructions are stored, and any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as a transitory signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device .
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- in some embodiments, an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can be customized by using the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce an apparatus that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause computers, programmable data processing apparatuses, and/or other devices to work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for realizing the specified logical function.
- in some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagram and/or flowchart, and the combination of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Claims (25)
- A neural network training method, comprising: inputting a first random vector into a generation network to obtain a first generated image; inputting the first generated image and a first real image respectively into a discriminant network to obtain a first discriminant distribution of the first generated image and a second discriminant distribution of the first real image respectively, wherein the first discriminant distribution represents a probability distribution of the degree of realness of the first generated image, and the second discriminant distribution represents a probability distribution of the degree of realness of the first real image; determining a first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, a preset first target distribution, and a preset second target distribution, wherein the first target distribution is a target probability distribution for generated images and the second target distribution is a target probability distribution for real images; determining a second network loss of the generation network according to the first discriminant distribution and the second discriminant distribution; and adversarially training the generation network and the discriminant network according to the first network loss and the second network loss.
- The method according to claim 1, wherein determining the first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, the preset first target distribution, and the preset second target distribution comprises: determining a first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution; determining a second distribution loss of the first real image according to the second discriminant distribution and the second target distribution; and determining the first network loss according to the first distribution loss and the second distribution loss.
- The method according to claim 2, wherein determining the first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution comprises: mapping the first discriminant distribution to the support set of the first target distribution to obtain a first mapping distribution; determining a first relative entropy between the first mapping distribution and the first target distribution; and determining the first distribution loss according to the first relative entropy.
- The method according to claim 2, wherein determining the second distribution loss of the first real image according to the second discriminant distribution and the second target distribution comprises: mapping the second discriminant distribution to the support set of the second target distribution to obtain a second mapping distribution; determining a second relative entropy between the second mapping distribution and the second target distribution; and determining the second distribution loss according to the second relative entropy.
- The method according to claim 2, wherein determining the first network loss according to the first distribution loss and the second distribution loss comprises: performing weighted summation of the first distribution loss and the second distribution loss to obtain the first network loss.
- The method according to any one of claims 1-5, wherein determining the second network loss of the generation network according to the first discriminant distribution and the second discriminant distribution comprises: determining a third relative entropy between the first discriminant distribution and the second discriminant distribution; and determining the second network loss according to the third relative entropy.
- The method according to any one of claims 1-6, wherein adversarially training the generation network and the discriminant network according to the first network loss and the second network loss comprises: adjusting network parameters of the discriminant network according to the first network loss; adjusting network parameters of the generation network according to the second network loss; and obtaining the trained generation network and the trained discriminant network when the discriminant network and the generation network meet a training condition.
- The method according to claim 7, wherein adjusting the network parameters of the discriminant network according to the first network loss comprises: inputting a second random vector into the generation network to obtain a second generated image; performing interpolation processing on a second real image according to the second generated image to obtain an interpolated image; inputting the interpolated image into the discriminant network to obtain a third discriminant distribution of the interpolated image; determining a gradient of the network parameters of the discriminant network according to the third discriminant distribution; determining a gradient penalty parameter according to the third discriminant distribution when the gradient is greater than or equal to a gradient threshold; and adjusting the network parameters of the discriminant network according to the first network loss and the gradient penalty parameter.
- The method according to any one of claims 1-8, wherein adversarially training the generation network and the discriminant network according to the first network loss and the second network loss comprises: inputting the first random vector that was input to the generation network in at least one historical training period into the generation network of the current training period to obtain at least one third generated image; inputting the first generated image corresponding to the first random vector input to the generation network in the at least one historical training period, the at least one third generated image, and at least one real image respectively into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image respectively; determining a training progress parameter of the generation network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution; and, when the training progress parameter is less than or equal to a training progress threshold, stopping adjusting the network parameters of the discriminant network and adjusting only the network parameters of the generation network.
- The method according to claim 9, wherein determining the training progress parameter of the generation network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution comprises: obtaining a first expected value of at least one fourth discriminant distribution, a second expected value of at least one fifth discriminant distribution, and a third expected value of at least one sixth discriminant distribution respectively; obtaining a first average value of the at least one first expected value, a second average value of the at least one second expected value, and a third average value of the at least one third expected value respectively; determining a first difference between the third average value and the second average value and a second difference between the second average value and the first average value; and determining the ratio of the first difference to the second difference as the training progress parameter of the generation network of the current training period.
- An image generation method, comprising: obtaining a third random vector; and inputting the third random vector into the generation network obtained after training according to the method of any one of claims 1-10 for processing to obtain a target image.
- A neural network training apparatus, comprising: a generating module configured to input a first random vector into a generation network to obtain a first generated image; a discrimination module configured to input the first generated image and a first real image respectively into a discriminant network to obtain a first discriminant distribution of the first generated image and a second discriminant distribution of the first real image respectively, wherein the first discriminant distribution represents a probability distribution of the degree of realness of the first generated image, and the second discriminant distribution represents a probability distribution of the degree of realness of the first real image; a first determining module configured to determine a first network loss of the discriminant network according to the first discriminant distribution, the second discriminant distribution, a preset first target distribution, and a preset second target distribution, wherein the first target distribution is a target probability distribution for generated images and the second target distribution is a target probability distribution for real images; a second determining module configured to determine a second network loss of the generation network according to the first discriminant distribution and the second discriminant distribution; and a training module configured to adversarially train the generation network and the discriminant network according to the first network loss and the second network loss.
- The apparatus according to claim 12, wherein the first determining module is further configured to: determine a first distribution loss of the first generated image according to the first discriminant distribution and the first target distribution; determine a second distribution loss of the first real image according to the second discriminant distribution and the second target distribution; and determine the first network loss according to the first distribution loss and the second distribution loss.
- The apparatus according to claim 13, wherein the first determining module is further configured to: map the first discriminant distribution to the support set of the first target distribution to obtain a first mapping distribution; determine a first relative entropy between the first mapping distribution and the first target distribution; and determine the first distribution loss according to the first relative entropy.
- The apparatus according to claim 13, wherein the first determining module is further configured to: map the second discriminant distribution to the support set of the second target distribution to obtain a second mapping distribution; determine a second relative entropy between the second mapping distribution and the second target distribution; and determine the second distribution loss according to the second relative entropy.
- The apparatus according to claim 13, wherein the first determining module is further configured to: perform weighted summation of the first distribution loss and the second distribution loss to obtain the first network loss.
- The apparatus according to any one of claims 12-16, wherein the second determining module is further configured to: determine a third relative entropy between the first discriminant distribution and the second discriminant distribution; and determine the second network loss according to the third relative entropy.
- The apparatus according to any one of claims 12-17, wherein the training module is further configured to: adjust network parameters of the discriminant network according to the first network loss; adjust network parameters of the generation network according to the second network loss; and obtain the trained generation network and the trained discriminant network when the discriminant network and the generation network meet a training condition.
- The apparatus according to claim 18, wherein the training module is further configured to: input a second random vector into the generation network to obtain a second generated image; perform interpolation processing on a second real image according to the second generated image to obtain an interpolated image; input the interpolated image into the discriminant network to obtain a third discriminant distribution of the interpolated image; determine a gradient of the network parameters of the discriminant network according to the third discriminant distribution; determine a gradient penalty parameter according to the third discriminant distribution when the gradient is greater than or equal to a gradient threshold; and adjust the network parameters of the discriminant network according to the first network loss and the gradient penalty parameter.
- The apparatus according to any one of claims 11-19, wherein the training module is further configured to: input the first random vector that was input to the generation network in at least one historical training period into the generation network of the current training period to obtain at least one third generated image; input the first generated image corresponding to the first random vector input to the generation network in the at least one historical training period, the at least one third generated image, and at least one real image respectively into the discriminant network of the current training period to obtain a fourth discriminant distribution of the at least one first generated image, a fifth discriminant distribution of the at least one third generated image, and a sixth discriminant distribution of the at least one real image respectively; determine a training progress parameter of the generation network of the current training period according to the fourth discriminant distribution, the fifth discriminant distribution, and the sixth discriminant distribution; and, when the training progress parameter is less than or equal to a training progress threshold, stop adjusting the network parameters of the discriminant network and adjust only the network parameters of the generation network.
- The apparatus according to claim 20, wherein the training module is further configured to: obtain a first expected value of at least one fourth discriminant distribution, a second expected value of at least one fifth discriminant distribution, and a third expected value of at least one sixth discriminant distribution respectively; obtain a first average value of the at least one first expected value, a second average value of the at least one second expected value, and a third average value of the at least one third expected value respectively; determine a first difference between the third average value and the second average value and a second difference between the second average value and the first average value; and determine the ratio of the first difference to the second difference as the training progress parameter of the generation network of the current training period.
- An image generation apparatus, comprising: an acquisition module configured to obtain a third random vector; and an obtaining module configured to input the third random vector into the generation network obtained after training by the apparatus according to any one of claims 12-21 for processing to obtain a target image.
- An electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the method according to any one of claims 1 to 11.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 11.
- A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1-11.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202103479VA SG11202103479VA (en) | 2019-09-27 | 2019-12-11 | Method and apparatus for neutral network training and method and apparatus for image generation |
KR1020217010144A KR20210055747A (ko) | 2019-09-27 | 2019-12-11 | 신경 네트워크 훈련 방법 및 장치, 이미지 생성 방법 및 장치 |
JP2021518079A JP7165818B2 (ja) | 2019-09-27 | 2019-12-11 | ニューラルネットワークのトレーニング方法及び装置並びに画像生成方法及び装置 |
US17/221,096 US20210224607A1 (en) | 2019-09-27 | 2021-04-02 | Method and apparatus for neutral network training, method and apparatus for image generation, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910927729.6 | 2019-09-27 | ||
CN201910927729.6A CN110634167B (zh) | 2019-09-27 | 2019-09-27 | 神经网络训练方法及装置和图像生成方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/221,096 Continuation US20210224607A1 (en) | 2019-09-27 | 2021-04-02 | Method and apparatus for neutral network training, method and apparatus for image generation, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021056843A1 true WO2021056843A1 (zh) | 2021-04-01 |
Family
ID=68973281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124541 WO2021056843A1 (zh) | 2019-09-27 | 2019-12-11 | 神经网络训练方法及装置和图像生成方法及装置 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20210224607A1 (zh) |
JP (1) | JP7165818B2 (zh) |
KR (1) | KR20210055747A (zh) |
CN (1) | CN110634167B (zh) |
SG (1) | SG11202103479VA (zh) |
TW (1) | TWI752405B (zh) |
WO (1) | WO2021056843A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114881884A (zh) * | 2022-05-24 | 2022-08-09 | 河南科技大学 | 一种基于生成对抗网络的红外目标样本增强方法 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2594070B (en) * | 2020-04-15 | 2023-02-08 | James Hoyle Benjamin | Signal processing system and method |
US11272097B2 (en) * | 2020-07-30 | 2022-03-08 | Steven Brian Demers | Aesthetic learning methods and apparatus for automating image capture device controls |
KR102354181B1 (ko) * | 2020-12-31 | 2022-01-21 | 주식회사 나인티나인 | 비쥬얼라이징 구현 가능한 건설 사업 정보 관리 시스템 및 이의 제어 방법 |
CN112990211B (zh) * | 2021-01-29 | 2023-07-11 | 华为技术有限公司 | 一种神经网络的训练方法、图像处理方法以及装置 |
TWI766690B (zh) * | 2021-05-18 | 2022-06-01 | 詮隼科技股份有限公司 | 封包產生方法及封包產生系統之設定方法 |
KR102636866B1 (ko) * | 2021-06-14 | 2024-02-14 | 아주대학교산학협력단 | 공간 분포를 이용한 휴먼 파싱 방법 및 장치 |
CN114501164A (zh) * | 2021-12-28 | 2022-05-13 | 海信视像科技股份有限公司 | 音视频数据的标注方法、装置及电子设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107293289A (zh) * | 2017-06-13 | 2017-10-24 | 南京医科大学 | 一种基于深度卷积生成对抗网络的语音生成方法 |
CN109377448A (zh) * | 2018-05-20 | 2019-02-22 | 北京工业大学 | 一种基于生成对抗网络的人脸图像修复方法 |
CN109377452A (zh) * | 2018-08-31 | 2019-02-22 | 西安电子科技大学 | 基于vae和生成式对抗网络的人脸图像修复方法 |
CN109919921A (zh) * | 2019-02-25 | 2019-06-21 | 天津大学 | 基于生成对抗网络的环境影响程度建模方法 |
US20190228268A1 (en) * | 2016-09-14 | 2019-07-25 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for cell image segmentation using multi-stage convolutional neural networks |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100996209B1 (ko) * | 2008-12-23 | 2010-11-24 | 중앙대학교 산학협력단 | 변화값 템플릿을 이용한 객체 모델링 방법 및 그 시스템 |
US8520958B2 (en) * | 2009-12-21 | 2013-08-27 | Stmicroelectronics International N.V. | Parallelization of variable length decoding |
JP6318211B2 (ja) * | 2016-10-03 | 2018-04-25 | 株式会社Preferred Networks | データ圧縮装置、データ再現装置、データ圧縮方法、データ再現方法及びデータ転送方法 |
EP3336800B1 (de) * | 2016-12-19 | 2019-08-28 | Siemens Healthcare GmbH | Bestimmen einer trainingsfunktion zum generieren von annotierten trainingsbildern |
US10665326B2 (en) * | 2017-07-25 | 2020-05-26 | Insilico Medicine Ip Limited | Deep proteome markers of human biological aging and methods of determining a biological aging clock |
CN108495110B (zh) * | 2018-01-19 | 2020-03-17 | 天津大学 | 一种基于生成式对抗网络的虚拟视点图像生成方法 |
CN108510435A (zh) * | 2018-03-28 | 2018-09-07 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN108615073B (zh) * | 2018-04-28 | 2020-11-03 | 京东数字科技控股有限公司 | 图像处理方法及装置、计算机可读存储介质、电子设备 |
CN108805833B (zh) * | 2018-05-29 | 2019-06-18 | 西安理工大学 | 基于条件对抗网络的字帖二值化背景噪声杂点去除方法 |
CN109933677A (zh) * | 2019-02-14 | 2019-06-25 | 厦门一品威客网络科技股份有限公司 | 图像生成方法和图像生成系统 |
CN109920016B (zh) * | 2019-03-18 | 2021-06-25 | 北京市商汤科技开发有限公司 | 图像生成方法及装置、电子设备和存储介质 |
-
2019
- 2019-09-27 CN CN201910927729.6A patent/CN110634167B/zh active Active
- 2019-12-11 SG SG11202103479VA patent/SG11202103479VA/en unknown
- 2019-12-11 KR KR1020217010144A patent/KR20210055747A/ko not_active Application Discontinuation
- 2019-12-11 JP JP2021518079A patent/JP7165818B2/ja active Active
- 2019-12-11 WO PCT/CN2019/124541 patent/WO2021056843A1/zh active Application Filing
-
2020
- 2020-01-14 TW TW109101220A patent/TWI752405B/zh not_active IP Right Cessation
-
2021
- 2021-04-02 US US17/221,096 patent/US20210224607A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190228268A1 (en) * | 2016-09-14 | 2019-07-25 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for cell image segmentation using multi-stage convolutional neural networks |
CN107293289A (zh) * | 2017-06-13 | 2017-10-24 | 南京医科大学 | 一种基于深度卷积生成对抗网络的语音生成方法 |
CN109377448A (zh) * | 2018-05-20 | 2019-02-22 | 北京工业大学 | 一种基于生成对抗网络的人脸图像修复方法 |
CN109377452A (zh) * | 2018-08-31 | 2019-02-22 | 西安电子科技大学 | 基于vae和生成式对抗网络的人脸图像修复方法 |
CN109919921A (zh) * | 2019-02-25 | 2019-06-21 | 天津大学 | 基于生成对抗网络的环境影响程度建模方法 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114881884A (zh) * | 2022-05-24 | 2022-08-09 | 河南科技大学 | 一种基于生成对抗网络的红外目标样本增强方法 |
CN114881884B (zh) * | 2022-05-24 | 2024-03-29 | 河南科技大学 | 一种基于生成对抗网络的红外目标样本增强方法 |
Also Published As
Publication number | Publication date |
---|---|
JP2022504071A (ja) | 2022-01-13 |
JP7165818B2 (ja) | 2022-11-04 |
CN110634167B (zh) | 2021-07-20 |
TW202113752A (zh) | 2021-04-01 |
KR20210055747A (ko) | 2021-05-17 |
CN110634167A (zh) | 2019-12-31 |
SG11202103479VA (en) | 2021-05-28 |
US20210224607A1 (en) | 2021-07-22 |
TWI752405B (zh) | 2022-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021056843A1 (zh) | 神经网络训练方法及装置和图像生成方法及装置 | |
TWI717923B (zh) | 面部識別方法及裝置、電子設備和儲存介質 | |
WO2020192252A1 (zh) | 图像生成方法及装置、电子设备和存储介质 | |
TWI747325B (zh) | 目標對象匹配方法及目標對象匹配裝置、電子設備和電腦可讀儲存媒介 | |
WO2021051650A1 (zh) | 人脸和人手关联检测方法及装置、电子设备和存储介质 | |
TWI736179B (zh) | 圖像處理方法、電子設備和電腦可讀儲存介質 | |
US20210012143A1 (en) | Key Point Detection Method and Apparatus, and Storage Medium | |
WO2021051949A1 (zh) | 一种图像处理方法及装置、电子设备和存储介质 | |
CN105335684B (zh) | 人脸检测方法及装置 | |
WO2021139120A1 (zh) | 网络训练方法及装置、图像生成方法及装置 | |
CN109165738B (zh) | 神经网络模型的优化方法及装置、电子设备和存储介质 | |
US11734804B2 (en) | Face image processing method and apparatus, electronic device, and storage medium | |
TW202105202A (zh) | 影片處理方法及裝置、電子設備、儲存媒體和電腦程式 | |
CN110909815A (zh) | 神经网络训练、图像处理方法、装置及电子设备 | |
TWI735112B (zh) | 圖像生成方法、電子設備和儲存介質 | |
TW202032425A (zh) | 圖像處理方法及裝置、電子設備和儲存介質 | |
EP3657497A1 (en) | Method and device for selecting target beam data from a plurality of beams | |
WO2021036013A1 (zh) | 检测器的配置方法及装置、电子设备和存储介质 | |
CN112598063A (zh) | 神经网络生成方法及装置、电子设备和存储介质 | |
CN109698794A (zh) | 一种拥塞控制方法、装置、电子设备及存储介质 | |
CN110135349A (zh) | 识别方法、装置、设备及存储介质 | |
CN109447258B (zh) | 神经网络模型的优化方法及装置、电子设备和存储介质 | |
WO2021082381A1 (zh) | 人脸识别方法及装置、电子设备和存储介质 | |
WO2020224448A1 (zh) | 交互方法及装置、音箱、电子设备和存储介质 | |
WO2016041315A1 (zh) | Pwm数据的处理方法及装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021518079 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217010144 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19947310 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19947310 Country of ref document: EP Kind code of ref document: A1 |