WO2020082572A1 - Training method for generative adversarial network, related device, and medium - Google Patents

Training method for generative adversarial network, related device, and medium

Info

Publication number
WO2020082572A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
probability
generator
data
training
Prior art date
Application number
PCT/CN2018/123519
Other languages
English (en)
French (fr)
Inventor
王少军
许开河
肖京
杨坤
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020082572A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a training method for a generative adversarial network, a related device, and a medium.
  • Generative adversarial networks (GAN) are among the most effective methods for training deep generative models and are widely used to generate images; the generated images have achieved good results in diversity and authenticity.
  • However, GANs have not been able to make breakthrough progress in the field of text sequences. The most fundamental problem is that text is discrete and cannot be varied continuously, whereas an image can be varied continuously, for example by increasing or decreasing some pixel values (0-255). Therefore, how to apply GANs to text to improve the text generation effect becomes the key.
  • Embodiments of the present application provide a training method for a generative adversarial network, a related device, and a medium, which help to solve the problem that generative adversarial networks cannot be applied to the text field due to the discreteness of text.
  • In a first aspect, an embodiment of the present application provides a training method for a generative adversarial network.
  • The generative adversarial network includes a generator and a discriminator. The method includes: selecting at least one first sample from a preset sample database, where the sample database includes at least one sample data and the at least one first sample are all real data;
  • training the discriminator with the at least one first sample;
  • obtaining a second sample generated by the generator, and using the discriminator to discriminate the second sample to obtain discrimination output information, where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, and the output information includes the first probability;
  • and training the generator according to the output information of the discriminator.
  • In a second aspect, an embodiment of the present application provides a training apparatus for a generative adversarial network, the apparatus including units for performing the method of the first aspect.
  • In a third aspect, an embodiment of the present application provides a network training device including a processor and a memory connected to each other, where the memory is used to store a computer program that supports the network training device in performing the above method,
  • the computer program includes program instructions, and the processor is configured to call the program instructions to perform the method of the first aspect.
  • Optionally, the network training device may further include a user interface and/or a communication interface.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program, where the computer program includes program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
  • Embodiments of the present application can directly obtain the explicit expression of the optimal discriminator in the generative adversarial network and then train the generator of the generative adversarial network according to the discrimination result of the discriminator, without using a neural network to approximate the discriminator in the generative adversarial network.
  • This avoids a series of problems of the traditional method, such as suboptimal solutions and training that does not converge easily, caused by the presence of a neural network discriminator; it solves the problem that the generative adversarial network cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
  • FIG. 1 is a schematic architectural diagram of a generative adversarial network provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of a training method for a generative adversarial network provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of another training method for a generative adversarial network provided by an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a training apparatus for a generative adversarial network provided by an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a network training device provided by an embodiment of the present application.
  • The technical solution of the present application can be applied to a network training device.
  • The network training device may include various terminals, servers, and other devices for training the generative adversarial network GAN.
  • The terminal involved in this application may be a mobile phone, a computer, a tablet, a personal computer, or the like, which is not limited in this application.
  • A GAN can be divided into two parts: a generator (abbreviated "G"; also called a generation model, generation network, or other names) for generating data such as images or text, and a discriminator (abbreviated "D"; also called a discriminant model, discriminant network, or other names) trained to judge the authenticity of data such as images or text. The generator is trained according to the feedback information (gradient) of the discriminator, so that it learns to generate data with the same distribution as the training data.
  • This application proposes a new method to train a generative adversarial network.
  • In the prior art, the generator and the discriminator are both neural network models, a minimax game is used to train the generator and the discriminator, and algorithms such as REINFORCE or Monte Carlo search are used to optimize the GAN.
  • In contrast, this application no longer uses two neural network models; instead, the discriminator is determined based on a preset explicit expression (that is, the discriminator is not a neural network model), and the GAN is trained with a pre-selected training data set. FIG. 1 is a schematic diagram of a GAN architecture provided by this application.
  • In this application, the generator may be a neural network model, while the discriminator is not a neural network model. The details are described below.
  • FIG. 2 is a schematic flowchart of a training method for a generative adversarial network provided by an embodiment of the present application. Specifically, the method of this embodiment can be applied to a GAN.
  • the GAN includes a generator and a discriminator.
  • the discriminator is determined according to a preset explicit expression, not a neural network model.
  • the training method of the generative adversarial network may include the following steps:
  • The sample database may include at least one sample data, the at least one sample data may be real data (real samples), and the at least one first sample are all real data.
  • That is, real data can be selected in advance as the training data of the GAN, so as to realize the training of the GAN.
  • It can be understood that a sample involved in this application may refer to a piece of data, such as words or a text (sequence).
  • To simplify the description, it is assumed that the at least one first sample constitutes a training data set, and the description below is given in terms of the training data set.
  • Specifically, before the GAN is trained, the samples used to train the GAN, i.e., the at least one first sample, may be determined.
  • Optionally, the at least one first sample may be selected from a preset sample database. For example, the at least one first sample may be randomly selected from the sample database; for another example, the at least one first sample may be selected from the sample database according to the feature information of the GAN.
  • Alternatively, in some embodiments, the at least one first sample may be generated by a real model; for example, a randomly initialized LSTM is used as the real model to generate the at least one first sample, i.e., to generate the real data distribution. For another example, the at least one first sample may be randomly generated, or generated based on the GAN's feature information, such as the need for sequences of different lengths, which are not enumerated one by one here.
  • This application takes selecting the first sample from the sample database as an example for illustration.
  • The discriminator may be determined according to a preset explicit expression that indicates the probability of discriminating an input sample as real data; it is not a neural network discriminator.
  • By training on the real data of the training data set, the discriminator can learn the data distribution of the training data set, i.e., the empirical data distribution, which helps the discriminator correctly predict which data are real, thereby realizing the training of the discriminator.
  • The discriminator can be used to identify, based on the (at least one) first sample and using the explicit expression, the probability that the second sample is real data, i.e., the first probability, and the output information includes the first probability. That is, the discriminator compares the input sample, such as the second sample, with the at least one first sample to identify the probability that the input sample is real data; in other words, the discriminator compares the input sample with the samples in the training data set and determines the first probability according to the comparison result.
  • Optionally, the first probability may be determined according to a second probability and a third probability.
  • The second probability may be the distribution probability of the second sample in the training data set, i.e., the probability that the sample input to the discriminator comes from the empirical data distribution.
  • The third probability may be the probability that the generator generates the second sample, i.e., the probability that the sample input to the discriminator was generated by the generator.
  • When the second sample is a first sample in the training data set, i.e., the sample input to the discriminator belongs to the training data set, the first probability may be determined according to the second probability and the third probability; when the second sample is not a first sample in the training data set, the first probability may be zero.
  • Further optionally, the ratio of the second probability to a target sum value may be used as the first probability, where the target sum value is the sum of the second probability and the third probability. That is, the first probability may be the ratio of the second probability to the target sum value.
  • For example, for any given generator G, the discriminator function has an explicit expression: $D^*_G(x) = f(p(x), p_g(x))$, that is, the discriminator $D^*_G(x)$ is determined as a function of $p(x)$ and $p_g(x)$.
  • $D^*_G(x)$ represents the probability that the discriminator judges (identifies) the sample x as real data, i.e., the above-mentioned first probability;
  • $p(x)$ represents the probability that the sample x comes from the empirical distribution of the data, i.e., the above-mentioned second probability;
  • $p_g(x)$ represents the probability that the generator generates x, i.e., the above-mentioned third probability.
  • Optionally, in some embodiments, the (optimal) discriminator $D^*_G(x)$ can be expressed as follows:

    $$D^*_G(x) = \frac{p(x)}{p(x) + p_g(x)}, \quad x \in C \qquad (\text{otherwise } D^*_G(x) = 0)$$

    where $p(x)$ is the empirical distribution of x over the training data set $C = \{x_1, \ldots, x_N\}$, i.e., the number of occurrences of x in C divided by N (and 0 for $x \notin C$).
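  • As a minimal, illustrative sketch of this closed-form discriminator (not the patent's reference implementation; the tokenized sample list and the generator-probability callable `p_g` are hypothetical stand-ins), the first probability can be computed as follows:

```python
from collections import Counter

def make_discriminator(train_set, p_g):
    """Build the closed-form discriminator D*_G from a training set C.

    train_set: list of hashable samples (e.g., token tuples); it defines
               the empirical distribution p(x) = count(x) / |C|.
    p_g:       callable returning the generator's probability of x.
    """
    counts = Counter(train_set)
    n = len(train_set)

    def d_star(x):
        if x not in counts:          # x outside C: discriminator outputs 0
            return 0.0
        p = counts[x] / n            # empirical probability p(x)
        return p / (p + p_g(x))      # first probability D*_G(x)

    return d_star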
  • For $p_g(x)$, it can be determined as follows: the generator computes the probability of occurrence of each word in the sample (sequence) x, and then determines $p_g(x)$ from these per-word probabilities, i.e., $p_g(x) = \prod_i p(x_i)$.
  • For example, a Long Short-Term Memory (LSTM) network can be used to compute $p_g(x)$.
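  • A toy sketch of this per-word factorization (the next-token model is a hypothetical placeholder for the LSTM; working in log space avoids underflow for long sequences):

```python
def sequence_log_prob(tokens, next_token_logprob, bos="<s>"):
    """Compute log p_g(x) = sum_i log p(x_i | x_<i).

    next_token_logprob(history, token) is a hypothetical callable that
    returns the model's log-probability of `token` given `history`
    (e.g., backed by an LSTM language model); p_g(x) = exp(result).
    """
    logp, history = 0.0, [bos]
    for tok in tokens:
        logp += next_token_logprob(history, tok)
        history.append(tok)
    return logp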
  • When training the generator in the GAN, the above explicit expression can be substituted into the value function of the GAN, i.e., the first probability is substituted into the value function, so as to train the generator. This allows the parameters of the generator to be optimized directly according to the unique characteristics of the empirical data distribution of the training data set.
  • Optionally, the generator may be a neural network generator, i.e., the generator may be a neural network model.
  • For example, the value function can be:

    $$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{x \sim p_g(x)}[\log(1 - D(x))] \qquad (1)$$

    where $\mathrm{JSD}(p\,\|\,q)$ denotes the Jensen-Shannon divergence between two distributions p and q; substituting the optimal discriminator $D^*_G$ into (1) reduces the generator's objective to $-\log 4 + 2\,\mathrm{JSD}(p(x)\,\|\,p_g(x))$.
  • The generator can thus be trained based on the output (gradient) of the discriminator: by substituting $D^*_G(x)$ into the value function, the objective function optimized for the generator becomes the optimization of a divergence.
  • For example, the process of generator training can be regarded as the following optimization task, i.e., the objective function in the training process is:

    $$\theta^* = \arg\min_\theta \mathrm{JSD}(p(x)\,\|\,p_\theta(x))$$

    where θ denotes the parameters of the generator.
  • That is, any sample generated by generator G that differs from the training data set C can be regarded as a forgery and discarded, because the corresponding value of D is 0 and the corresponding value-function term $\log(1 - D^*_G(x))$ is 0. This differs from existing methods that use a neural network as the discriminator, in which the generator generates arbitrary samples and the discriminator gives a confidence value of real or fake.
  • The optimal discriminator $D^*_G(x)$ ignores any sample that differs from the samples in the training data set C and considers only samples identical to those in C; the generator assigns a probability to each sample and evaluates each sample's impact on the value function. This means the generator only needs samples identical to those in the training data set C to maximize the value function, as shown in equation (1). Therefore, by directly optimizing the JS divergence between the distribution of the generator model and the empirical distribution p(x) from the training data set, the minimax optimization process of the generator and discriminator can be implicitly replaced, so as to train the GAN.
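  • A self-contained sketch of this direct JSD optimization, assuming a toy finite sample space where the generator is simply a softmax over logits θ (PyTorch is used for autograd; this illustrates the objective, not the patent's code):

```python
import torch

# Hypothetical toy setup: 5 possible samples; the first three form the
# training data set C, with empirical probabilities p(x) = count / N.
counts = torch.tensor([3.0, 2.0, 1.0, 0.0, 0.0])
p = counts / counts.sum()
theta = torch.zeros(5, requires_grad=True)       # generator parameters
opt = torch.optim.SGD([theta], lr=0.5)

for step in range(200):
    p_theta = torch.softmax(theta, dim=0)        # generator distribution
    m = 0.5 * (p + p_theta)
    in_c = p > 0                                 # only samples in C matter
    kl_pm = (p[in_c] * torch.log(p[in_c] / m[in_c])).sum()
    kl_qm = (p_theta * torch.log(p_theta / m)).sum()
    jsd = 0.5 * (kl_pm + kl_qm)                  # JSD(p || p_theta)
    opt.zero_grad()
    jsd.backward()
    opt.step()
```

  • As the JSD decreases, p_theta concentrates on the samples of C in proportion to their empirical probabilities, which is exactly the behavior described above.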
  • Further optionally, when training the generator, the derivative of $\mathrm{JSD}(p(x)\,\|\,p_G(x))$ can be taken, i.e., the gradient corresponding to $\mathrm{JSD}(p(x)\,\|\,p_G(x))$ is computed so as to adjust the parameters of the neural network generator. For example, suppose a neural network with parameters θ is used as generator G; the probability of a sample x under the generator is written $p_\theta(x)$, and the optimal (best) discriminator is written $D^*_\theta(x) = p(x)/(p(x) + p_\theta(x))$. According to equation (1) above, for a given sample $x \in C$, there is an efficient algorithm to compute the gradient, as follows:

    $$\nabla_\theta\, \mathrm{JSD}(p\,\|\,p_\theta) = \frac{1}{2} \sum_{x} p_\theta(x)\, \nabla_\theta \log p_\theta(x)\, \log \frac{2\,p_\theta(x)}{p(x) + p_\theta(x)}$$

    so the gradient of the JSD is a modification of the log-likelihood gradient, and stochastic gradient descent (SGD) can be used to optimize the JSD between the generator's output distribution and the empirical data distribution of the training data set.
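  • Under the same toy setup as the sketch above (its variables are assumed here), this gradient can be checked against autograd using the softmax Jacobian $\mathrm{diag}(p_\theta) - p_\theta p_\theta^\top$:

```python
# Manual gradient of JSD(p || p_theta) with respect to theta:
# grad = J^T g, where g(x) = 0.5 * log(2 p_theta(x) / (p(x) + p_theta(x)))
# and J is the softmax Jacobian diag(p_theta) - p_theta p_theta^T.
with torch.no_grad():
    p_theta = torch.softmax(theta, dim=0)
    g = 0.5 * torch.log(2 * p_theta / (p + p_theta))
    jac = torch.diag(p_theta) - torch.outer(p_theta, p_theta)
    manual_grad = jac @ g

# Fresh autograd pass at the same theta for comparison:
p_theta = torch.softmax(theta, dim=0)
m = 0.5 * (p + p_theta)
jsd = 0.5 * ((p[p > 0] * torch.log(p[p > 0] / m[p > 0])).sum()
             + (p_theta * torch.log(p_theta / m)).sum())
theta.grad = None
jsd.backward()
assert torch.allclose(manual_grad, theta.grad, atol=1e-6)
```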
  • The generator can thus be trained based on the gradient obtained from the discriminator's output information. It can be seen that this application does not need to use a neural network to approximate the discriminator in the GAN model; instead, it directly obtains the explicit expression of the optimal discriminator, substitutes it into the value function, and optimizes the parameters of the generator directly according to the unique characteristics of the empirical data distribution. Because only the samples in the training data set need to be considered, this application is easy to process: it needs neither two neural networks nor algorithms such as REINFORCE or Monte Carlo search to optimize the GAN. It thus avoids a series of problems of the traditional method, such as suboptimal solutions and training that does not converge easily, caused by the presence of a neural network discriminator, and solves the problem that the GAN cannot be applied to the text field due to the discreteness of text.
  • A generative adversarial network without a neural network discriminator can automatically generate text data.
  • The quality of the generated data is superior, in diversity and fluency, to existing text generation tools such as SeqGAN, RankGAN, and LeakGAN.
  • In this embodiment, the network training device can use a pre-selected training data set and a discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, and then train the generator according to the discrimination result information. The explicit expression of the optimal discriminator in the GAN can thus be obtained directly and the generator of the GAN can be trained according to the output of the discriminator, without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
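  • Tying the steps together, a high-level orchestration sketch (the `generator` object with `prob`, `sample`, and `update` methods is hypothetical, as are the helper functions from the sketches above):

```python
def train_gan(sample_db, generator, steps, target_type="text"):
    """Outline of the flow: select C (201), build the closed-form
    discriminator (202), score generated samples (203), update the
    generator from the discriminator's output (204)."""
    C = [data for data, dtype in sample_db if dtype == target_type]
    d_star = make_discriminator(C, generator.prob)
    for _ in range(steps):
        x = generator.sample()           # second sample
        first_prob = d_star(x)           # discriminator output
        generator.update(x, first_prob)  # e.g., a JSD gradient step
    return generator
```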
  • FIG. 3 is a schematic flowchart of another training method of a generative adversarial network provided by an embodiment of the present application.
  • the training method of the generative adversarial network may include the following steps:
  • The feature information can be used to characterize the GAN (generator) to be trained.
  • For example, the feature information may include the type of data the generator is to generate, application scenario information of the GAN, probability information of the lengths of the text (sequences) generated by the generator, ratio information of the lengths of the text generated by the generator, and so on, which are not enumerated one by one here.
  • The sample database may include various sample data, each of which is real data derived from a real data source.
  • The determined training data set includes the at least one first sample, i.e., real data selected from the sample database.
  • Optionally, each sample data in the sample database may carry its own type information (such as text or image) and label information (such as a domain label, a scenario keyword label, a text length label, etc.); and/or, optionally, the sample database may also be divided into sub-databases by text length, with a length label set for the text data of each sub-database, the lengths of the text data of each sub-database falling within the same length interval, and so on, which are not enumerated one by one here.
  • The network training device may determine the type of data the generator is to generate, i.e., the target type, select at least one first sample from the sample database according to the target type, and then generate the training data set from the selected first samples.
  • The sample database may include sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type. That is, when the training data set is selected, sample data of the same type as the data the GAN's generator is to generate can be selected, so as to realize personalized training of the GAN and improve training flexibility and reliability.
  • For example, if the generator is used to generate text sequences, text data can be selected as samples and a training data set including the selected text data can be determined.
  • For another example, if the generator is used to generate images, image data may be selected as samples and a training data set including the selected image data may be determined.
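  • A minimal sketch of this type-based selection (the `(data, dtype)` layout of the sample database is a hypothetical stand-in):

```python
def select_by_type(sample_db, target_type, k=None):
    """Select first samples whose declared type matches the target type
    of data the generator is to generate."""
    matches = [data for data, dtype in sample_db if dtype == target_type]
    return matches if k is None else matches[:k]
```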
  • The network training device may obtain the application scenario information of the GAN and determine the corresponding label according to the application scenario information: for example, determine from a preset label set, according to the keyword of the application scenario information, the label corresponding to that keyword, i.e., the target label; then select at least one first sample from the sample database according to the target label, and generate the training data set including the at least one first sample.
  • The label set may include multiple labels and the keyword corresponding to each label, and the keyword corresponding to the target label includes the keyword of the application scenario information; the sample database may include sample data corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
  • That is, different GAN scenarios can also be used to select the sample data under the label corresponding to that scenario field, so as to realize personalized training of the GAN and improve training flexibility and reliability.
  • For example, for a GAN used in a bank intelligent robot, banking-related data, such as the text data under a banking label, can be selected as samples, and a training data set including the text data under the banking label can be determined.
  • For another example, for a GAN used to automatically generate political news, politics-related data, such as the text data under a politics label, can be selected from the sample database as samples to determine a training data set including the text data under the politics label, and so on, which are not enumerated one by one here.
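  • A sketch of the keyword-to-label lookup and label-based selection (the `label_keywords` mapping and the `(data, label)` database layout are hypothetical):

```python
def select_by_scenario(sample_db, label_keywords, scenario_keywords):
    """Find labels whose keyword sets cover the scenario's keywords,
    then select the sample data filed under those labels."""
    targets = {label for label, kws in label_keywords.items()
               if set(scenario_keywords) & set(kws)}
    return [data for data, label in sample_db if label in targets]
```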
  • The first sample may be a text sequence, so that the network training device may separately determine the probability that the generator generates text sequences of each length, and determine, according to these probabilities, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length. At least one first sample can then be selected from the sample database according to the proportions of the text sequences of each length to be selected, and the training data set including the at least one first sample can be generated, as shown in the sketch after this list.
  • The sample database may include sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected (i.e., the sample data corresponding to each length occupies the same proportion of the training data set as the text sequences of that length to be selected). That is, when the training data set is selected, it can also be selected in combination with sentence length, which improves the reliability and pertinence of training.
  • Optionally, the probability may be set by staff based on experience, or determined through big data analysis.
  • For example, the network training device can determine the corresponding proportion according to the set probability of text sequences of each length, and then select the corresponding proportion of sample data from the sample database according to the proportion for each length (e.g., from the sample data under each length label, or from the sub-database corresponding to each length). For another example, the network training device can obtain the application scenario information of the GAN, select historical data of that application scenario according to the application scenario information, and then predict from the selected historical data the probability that the GAN's generator generates sentences of different lengths (e.g., separately determine the probability of sentences of each length in the historical data within a preset time period, and use the determined probabilities as the probabilities that the generator generates text sequences of each length), and select the corresponding proportion of sample data as the training data set according to the probability, which can further improve the flexibility and reliability of GAN training.
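  • A sketch of this length-proportional selection (the per-length pools and probabilities are hypothetical inputs):

```python
import random

def select_by_length(samples_by_length, length_probs, total):
    """Draw a training set whose per-length proportions match the
    probabilities with which the generator emits each length.

    samples_by_length: dict mapping length -> list of text sequences
    length_probs:      dict mapping length -> probability (sums to 1)
    """
    chosen = []
    for length, prob in length_probs.items():
        k = round(prob * total)              # proportion -> sample count
        pool = samples_by_length.get(length, [])
        chosen.extend(random.sample(pool, min(k, len(pool))))
    return chosen
```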
  • Optionally, a number threshold may also be set in advance.
  • When the training data set is selected, a corresponding number of sample data may be selected as the training data set according to the number threshold.
  • Further optionally, a correspondence between the feature information of the GAN and the number threshold can be preset, i.e., the number thresholds corresponding to different feature information can differ, so that in different training scenarios a corresponding number of sample data can be flexibly selected as the training data set to realize the training of the GAN, further improving training reliability.
  • The discriminator is determined according to a preset explicit expression rather than being a neural network discriminator, and the explicit expression can be used to indicate the probability of discriminating an input sample as real data.
  • The sample input to the discriminator, i.e., the discriminator's input, may be a first sample in the training data set or the second sample generated by the generator.
  • The output information, i.e., the discriminator's output, may include the first probability corresponding to the explicit expression, the first probability being the probability that the discriminator discriminates the input sample, such as the second sample, as real data.
  • The explicit expression corresponding to the discriminator can be substituted into the value function of the GAN, i.e., the first probability is substituted into the value function, to realize the training of the generator.
  • Optionally, the generator may be a neural network generator, i.e., the generator may be a neural network model.
  • For steps 303-305, reference may be made to the relevant description of steps 202-204 in the embodiment shown in FIG. 2 above, which is not repeated here.
  • Before the GAN is trained, a prompt message may be output to prompt the user to select a network training mode, such as whether to select text-based GAN training or image-based GAN training. If it is the former, the GAN can be trained according to the method shown in FIG. 2 or FIG. 3 (hereinafter referred to as mode 1). If it is the latter, the GAN can be trained according to mode 1, or the generator (minimization) and the discriminator (maximization) can be trained according to the minimax game (e.g., $\min_G \max_D V(G, D)$), hereinafter referred to as mode 2, so as to train the GAN; for example, a further prompt message may be output to prompt the user to select a specific training mode.
  • Alternatively, the network training device can also automatically select the training mode by obtaining the feature information of the GAN (such as the type of data the generator is to generate, the application field of the GAN, etc.). For example, mode 1 can be used to train a GAN used to generate text sequences, and mode 2 can be used to train a GAN used to generate images, and so on, which are not enumerated one by one here.
  • In this embodiment, the network training device can obtain the feature information of the GAN, select sample data as the training data set according to that feature information, and then use the sample data in the training data set and the discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, training the generator according to the discrimination result information. The flexibility and reliability of GAN training can thus be improved by flexibly selecting the training data set; and by directly obtaining the explicit expression of the optimal discriminator, the generator of the GAN is trained according to the output of the discriminator without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field, and reduces training complexity.
  • FIG. 4 is a schematic structural diagram of a training apparatus for a generative adversarial network provided by an embodiment of the present application.
  • The apparatus may be installed in a network training device and used to perform the training method of the above generative adversarial network.
  • The network training device 400 of this embodiment may include: an obtaining unit 401 and a training unit 402;
  • the obtaining unit 401 is configured to select at least one first sample from a preset sample database, where the sample database includes at least one sample data, and the at least one first sample are all real data;
  • the training unit 402 is configured to train the discriminator with the at least one first sample;
  • the obtaining unit 401 is further configured to obtain a second sample generated by the generator, and use the discriminator to discriminate the second sample to obtain discrimination output information; where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, and the output information includes the first probability;
  • the training unit 402 is further configured to train the generator according to the output information of the discriminator.
  • Optionally, when the second sample is the first sample, the first probability is determined according to a second probability and a third probability; where the second probability is the distribution probability of the second sample in the training data set constituted by the at least one first sample, and the third probability is the probability that the generator generates the second sample;
  • when the second sample is not the first sample, the first probability is zero.
  • Optionally, when the second sample is the first sample, the first probability is the ratio of the second probability to a target sum value,
  • where the target sum value is the sum of the second probability and the third probability.
  • The obtaining unit 401 may further be specifically configured to determine the target type of data the generator is to generate, and select at least one first sample from the sample database according to the target type.
  • The sample database includes sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type.
  • The obtaining unit 401 may further be specifically configured to obtain application scenario information of the generative adversarial network, and determine a target label from a preset label set according to the keyword of the application scenario information, where the label set includes multiple labels and the keyword corresponding to each label, and the keyword corresponding to the target label includes the keyword of the application scenario information; and to select at least one first sample from the sample database according to the target label.
  • The sample database includes sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
  • Optionally, the first sample is a text sequence;
  • the obtaining unit 401 may also be specifically configured to separately determine the probability that the generator generates text sequences of each length, and determine, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length; and to select at least one first sample from the sample database according to the proportions of the text sequences of each length to be selected.
  • The sample database includes sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected.
  • Optionally, the training unit 402 may be specifically configured to substitute the explicit expression into the value function of the generative adversarial network, so as to train the neural network generator with the first probability corresponding to the explicit expression.
  • The network training device may implement, through the above units, some or all of the steps of the training method of the generative adversarial network in the embodiments shown in FIG. 2 to FIG. 3.
  • It should be understood that the embodiments of the present application are apparatus embodiments corresponding to the method embodiments, and the description of the method embodiments is also applicable to the embodiments of the present application.
  • In this embodiment, the network training device can use a pre-selected training data set and a discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, and then train the generator according to the discrimination result information, so that the explicit expression of the optimal discriminator in the GAN can be obtained directly and the generator of the GAN can be trained according to the output of the discriminator, without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
  • FIG. 5 is a schematic structural diagram of a network training device according to an embodiment of the present application.
  • the network training device can be used to perform the above method.
  • the network training device 500 in this embodiment may include: one or more processors 501 and a memory 502.
  • The network training device may further include one or more user interfaces 503 and/or one or more communication interfaces 504.
  • The processor 501, the user interface 503, the communication interface 504, and the memory 502 may be connected by a bus 505, or may be connected in other ways;
  • FIG. 5 takes the bus connection as an example.
  • The memory 502 is used to store a computer program that includes program instructions, and the processor 501 is used to execute the program instructions stored in the memory 502.
  • The processor 501 can be used to call the program instructions to perform the following steps: selecting at least one first sample from a preset sample database, where the sample database includes at least one sample data and the at least one first sample are all real data; training the discriminator with the at least one first sample; obtaining the second sample generated by the generator, and using the discriminator to discriminate the second sample to obtain discrimination output information, where the discriminator is configured to identify, based on the first sample and using an explicit expression, the first probability that the second sample is real data, and the output information includes the first probability; and training the generator according to the output information of the discriminator.
  • Optionally, when the second sample is the first sample, the first probability is determined according to a second probability and a third probability, where the second probability is the distribution probability of the second sample in the training data set constituted by the at least one first sample, and the third probability is the probability that the generator generates the second sample; when the second sample is not the first sample, the first probability is zero.
  • Further optionally, when the second sample is the first sample, the first probability is the ratio of the second probability to a target sum value,
  • where the target sum value is the sum of the second probability and the third probability.
  • Optionally, when executing the selecting of at least one first sample from a preset sample database, the processor 501 may further be used to perform the following steps: determining the target type of data the generator is to generate, and selecting at least one first sample from the sample database according to the target type; where the sample database includes sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type.
  • Optionally, when executing the selecting of at least one first sample from a preset sample database, the processor 501 may further be used to perform the following steps: obtaining application scenario information of the generative adversarial network, and determining a target label from a preset label set according to the keyword of the application scenario information, where the label set includes multiple labels and the keyword corresponding to each label, and the keyword corresponding to the target label includes the keyword of the application scenario information; and selecting at least one first sample from the sample database according to the target label; where the sample database includes sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
  • Optionally, the first sample is a text sequence; when executing the selecting of at least one first sample from a preset sample database, the processor 501 may further be used to perform the following steps: separately determining the probability that the generator generates text sequences of each length, and determining, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length; and selecting at least one first sample from the sample database according to the proportions of the text sequences of each length to be selected; where the sample database includes sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected.
  • the generator may be a neural network generator.
  • Optionally, when executing the training of the generator with the output information of the discriminator, the processor 501 may specifically perform the following step: substituting the explicit expression into the value function of the generative adversarial network, so as to train the neural network generator with the first probability corresponding to the explicit expression.
  • The processor 501 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the user interface 503 may include an input device and an output device.
  • the input device may include a touch panel, a microphone, and the like
  • the output device may include a display (LCD, etc.), a speaker, and the like.
  • the communication interface 504 may include a receiver and a transmitter for communicating with other devices.
  • The memory 502 may include a read-only memory and a random access memory, and provides instructions and data to the processor 501. A portion of the memory 502 may also include a non-volatile random access memory; for example, the memory 502 may also store the above-mentioned explicit expression, and so on.
  • The processor 501 and the like described in the embodiments of the present application can execute the implementations described in the method embodiments shown in FIGS. 2 to 3 above, and can also implement the units described in FIG. 4 of the embodiments of the present application, which is not repeated here.
  • An embodiment of the present application also provides a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, some or all of the steps of the training method of the generative adversarial network described in the embodiments corresponding to FIGS. 2 to 3 can be implemented, and the functions of the apparatus or network training device of the embodiments shown in FIG. 4 or FIG. 5 of the present application can also be implemented, which is not repeated here.
  • An embodiment of the present application further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform some or all of the steps of the above method, which is not repeated here.
  • The computer-readable storage medium may be an internal storage unit of the network training device described in any of the foregoing embodiments, such as a hard disk or memory of the network training device.
  • The computer-readable storage medium may also be an external storage device of the network training device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the network training device.
  • The sizes of the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

Embodiments of the present application disclose a training method for a generative adversarial network, a related device, and a medium, applied to the field of artificial intelligence. The method includes: selecting at least one first sample from a preset sample database; training a discriminator with the at least one first sample; obtaining a second sample generated by the generator, and discriminating the second sample with the discriminator to obtain discrimination output information, where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, and the output information includes the first probability; and training the generator according to the output information of the discriminator. The present application helps to solve the problem that generative adversarial networks cannot be applied to the text field due to the discreteness of text.

Description

Training method for generative adversarial network, related device, and medium
This application claims priority to Chinese patent application No. 201811247859.7, entitled "生成式对抗网络的训练方法、相关设备及介质" (Training method for generative adversarial network, related device, and medium), filed with the China Patent Office on October 24, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a training method for a generative adversarial network, a related device, and a medium.
Background
Generative adversarial networks (GAN), as one of the most effective methods for training deep generative models, are widely used to generate images, and the generated images have achieved good results in both diversity and authenticity. However, GANs have not been able to make breakthrough progress in the field of text sequences. The most fundamental problem is that text is discrete and cannot be varied continuously, whereas an image can be varied continuously, for example by increasing or decreasing some pixel values (0-255). Therefore, how to apply GANs to text to improve the text generation effect becomes the key.
Summary
Embodiments of the present application provide a training method for a generative adversarial network, a related device, and a medium, which help to solve the problem that generative adversarial networks cannot be applied to the text field due to the discreteness of text.
In a first aspect, an embodiment of the present application provides a training method for a generative adversarial network, the generative adversarial network including a generator and a discriminator, the method including:
selecting at least one first sample from a preset sample database, the sample database including at least one sample data, the at least one first sample all being real data;
training the discriminator with the at least one first sample;
obtaining a second sample generated by the generator, and discriminating the second sample with the discriminator to obtain discrimination output information; where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, the output information including the first probability;
training the generator according to the output information of the discriminator.
In a second aspect, an embodiment of the present application provides a training apparatus for a generative adversarial network, the apparatus including units for performing the method of the first aspect.
In a third aspect, an embodiment of the present application provides a network training device, including a processor and a memory connected to each other, where the memory is configured to store a computer program supporting the network training device in performing the above method, the computer program including program instructions, and the processor being configured to call the program instructions to perform the method of the first aspect. Optionally, the network training device may further include a user interface and/or a communication interface.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to perform the method of the first aspect.
Embodiments of the present application can directly obtain the explicit expression of the optimal discriminator in the generative adversarial network and then train the generator of the generative adversarial network according to the discrimination result of the discriminator, without using a neural network to approximate the discriminator in the generative adversarial network. This avoids a series of problems of the traditional method, such as suboptimal solutions and training that does not converge easily, caused by the presence of a neural network discriminator; it solves the problem that the generative adversarial network cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below.
FIG. 1 is a schematic architectural diagram of a generative adversarial network provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a training method for a generative adversarial network provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of another training method for a generative adversarial network provided by an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a training apparatus for a generative adversarial network provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a network training device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application.
The technical solution of the present application can be applied to a network training device, which may include various terminals, servers, and other devices for training a generative adversarial network GAN. The terminal involved in the present application may be a mobile phone, a computer, a tablet, a personal computer, or the like, which is not limited in the present application.
A GAN can be divided into two parts: a generator (abbreviated "G"; also called a generation model, generation network, or other names) for generating data such as images or text, and a discriminator (abbreviated "D"; also called a discriminant model, discriminant network, or other names) trained to judge the authenticity of data such as images or text. The generator is trained according to the feedback information (gradient) of the discriminator, so that the generator learns to generate data with the same distribution as the training data. The present application proposes a new method for training a generative adversarial network. In the prior art, the generator and the discriminator are both neural network models, the generator and the discriminator are trained through a minimax game, and algorithms such as REINFORCE or Monte Carlo search are used to optimize the GAN. In contrast, the present application no longer uses two neural network models; instead, the discriminator is determined according to a preset explicit expression (i.e., the discriminator is not a neural network model), and the GAN is trained with a pre-selected training data set. FIG. 1 is a schematic diagram of a GAN architecture provided by the present application. In the present application, the generator may be a neural network model, and the discriminator is not a neural network model. Details are described below.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a training method for a generative adversarial network provided by an embodiment of the present application. Specifically, the method of this embodiment can be applied to a GAN that includes a generator and a discriminator, where the discriminator is determined according to a preset explicit expression and is not a neural network model. As shown in FIG. 2, the training method of the generative adversarial network may include the following steps:
201. Select at least one first sample from a preset sample database.
The sample database may include at least one sample data, the at least one sample data may be real data (real samples), and the at least one first sample are all real data. That is, real data may be selected in advance as the training data of the GAN so as to train the GAN. It can be understood that a sample involved in the present application may refer to a piece of data, such as words or a text (sequence). To simplify the description, it is assumed that the at least one first sample constitutes a training data set, and the description below is given in terms of the training data set.
Specifically, before the GAN is trained, the samples used to train the GAN, i.e., the at least one first sample, may be determined. Optionally, the at least one first sample may be selected from a preset sample database; for example, it may be randomly selected from the sample database, or selected from the sample database according to the feature information of the GAN. Alternatively, in some embodiments, the at least one first sample may be generated by a real model; for example, a randomly initialized LSTM is used as the real model to generate the at least one first sample, i.e., to generate the real data distribution. For another example, the at least one first sample may be randomly generated, or generated according to the GAN's feature information, such as the need for sequences of different lengths, which are not enumerated one by one here. The present application takes selecting the first sample from the sample database as an example for illustration.
202. Train the discriminator with the at least one first sample.
In the present application, the discriminator may be determined according to a preset explicit expression that indicates the probability of discriminating an input sample as real data, and is no longer a neural network discriminator. By training the discriminator with the real data in the training data set, the discriminator can learn the data distribution of the training data set, i.e., the empirical data distribution, which helps the discriminator correctly predict which data are real, thereby realizing the training of the discriminator.
203. Obtain a second sample generated by the generator, and use the discriminator to discriminate the second sample to obtain discrimination output information.
The discriminator may be configured to identify, according to the (at least one) first sample and using the explicit expression, the probability that the second sample is real data, i.e., the first probability, and the output information includes the first probability. In other words, the discriminator compares the input sample, such as the second sample, with the at least one first sample to identify the probability that the input sample is real data; that is, the discriminator may compare the input sample with the samples in the training data set and determine the first probability according to the comparison result.
Optionally, the first probability may be determined according to a second probability and a third probability. The second probability may be the distribution probability of the second sample in the training data set, i.e., the probability that the sample input to the discriminator comes from the empirical data distribution; the third probability may be the probability that the generator generates the second sample, i.e., the probability that the sample input to the discriminator was generated by the generator. In some embodiments, when the second sample is a first sample in the training data set, i.e., the sample input to the discriminator belongs to the training data set, the first probability may be determined according to the second probability and the third probability; when the second sample is not a first sample in the training data set, i.e., the sample input to the discriminator does not belong to the training data set, the first probability may be zero.
Further optionally, when the first probability is determined according to the second probability and the third probability, the ratio of the second probability to a target sum value may be used as the first probability, where the target sum value is the sum of the second probability and the third probability. That is, the first probability may be the ratio of the second probability to the target sum value.
For example, for any given generator G, the discriminator function has an explicit expression:

$$D^*_G(x) = f(p(x), p_g(x))$$

That is, the discriminator $D^*_G(x)$ is determined by $p(x)$ and $p_g(x)$, where $D^*_G(x)$ represents the probability that the discriminator judges (identifies) the sample x as real data, i.e., the above first probability; $p(x)$ represents the probability that the sample x comes from the empirical data distribution, i.e., the above second probability; and $p_g(x)$ represents the probability that the generator generates x, i.e., the above third probability. Optionally, in some embodiments, the (optimal) discriminator $D^*_G(x)$ can be expressed as follows:

$$D^*_G(x) = \frac{p(x)}{p(x) + p_g(x)}, \quad x \in C \qquad (\text{otherwise } D^*_G(x) = 0)$$

Optionally, $p(x)$ can be determined as follows: assuming the sample (sequence) x comes from the training data set C, the probability that x appears in the training data set can be computed statistically, i.e., p(x) = (number of occurrences of the sample sequence x) / (size of the sample set). For example, if $p(x)$ denotes the empirical distribution of x over the training data set $C = \{x_1, \ldots, x_N\}$ (N samples in total), then

$$p(x) = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}[x_i = x], \quad x \in C \qquad (\text{otherwise } p(x) = 0)$$

Optionally, $p_g(x)$ can be determined as follows: the generator computes the probability of occurrence of each word in the sample (sequence) x, and then determines $p_g(x)$ from the per-word probabilities. For example, a Long Short-Term Memory (LSTM) network may be used to compute $p_g(x)$: for a sample sequence x, the probability of producing each word $x_i$ can be computed by the LSTM algorithm, and then $p_g(x) = \prod_i p(x_i)$, i.e., $p_g(x)$ may be the product of the probabilities of the words in the sample sequence.
204. Train the generator according to the output information of the discriminator.
When the generator in the GAN is trained, the above explicit expression can be substituted into the value function of the GAN, i.e., the first probability is substituted into the value function, so as to train the generator. This allows the parameters of the generator to be optimized directly according to the unique characteristics of the empirical data distribution of the training data set. Optionally, the generator may be a neural network generator, i.e., a neural network model.
For example, the value function can be:

$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{x \sim p_g(x)}[\log(1 - D(x))] \qquad (1)$$

where $\mathrm{JSD}(p\,\|\,q)$ (the Jensen-Shannon divergence) refers to the JS divergence between two distributions p and q. The generator can thus be trained based on the output (gradient) of the discriminator; specifically, by substituting $D^*_G(x)$ into the value function, the objective function optimized for the generator becomes the optimization of a divergence. For example, the process of generator training can be regarded as the following optimization task, i.e., the objective function in the training process is:

$$\theta^* = \arg\min_\theta \mathrm{JSD}(p(x)\,\|\,p_\theta(x))$$

where θ denotes the parameters of the generator. That is, any sample generated by the generator G that differs from the training data set C can be regarded as a forgery and discarded, because the corresponding value of D is 0 and the corresponding value-function term $\log(1 - D^*_G(x))$ is 0. This differs from existing methods in which a neural network is used as the discriminator, the generator generates arbitrary samples, and the discriminator gives a confidence value of real or fake. In the present application, the optimal discriminator $D^*_G(x)$ ignores any sample that differs from the samples in the training data set C and considers only samples identical to those in C, and the generator assigns a probability to each sample and evaluates each sample's impact on the value function. This means the generator only needs samples identical to those in the training data set C to maximize the value function, as shown in equation (1). Therefore, by directly optimizing the JS divergence between the distribution of the generator model and the empirical distribution p(x) from the training data set, the minimax optimization process of the generator and the discriminator can be implicitly replaced, so as to train the GAN.
Further optionally, when the generator is trained, the derivative of $\mathrm{JSD}(p(x)\,\|\,p_G(x))$ can be taken, i.e., the gradient corresponding to $\mathrm{JSD}(p(x)\,\|\,p_G(x))$ is computed, so as to adjust the parameters of the neural network generator. For example, suppose a neural network with parameters θ is used as the generator G; the probability of a sample x under the generator is written as $p_\theta(x)$, and the optimal (best) discriminator is written as $D^*_\theta(x) = p(x)/(p(x) + p_\theta(x))$. According to the above equation (1), for a given sample $x \in C$, there is an efficient algorithm to compute the gradient of $\mathrm{JSD}(p(x)\,\|\,p_\theta(x))$, as follows:

$$\nabla_\theta\, \mathrm{JSD}(p\,\|\,p_\theta) = \frac{1}{2} \sum_{x} p_\theta(x)\, \nabla_\theta \log p_\theta(x)\, \log \frac{2\,p_\theta(x)}{p(x) + p_\theta(x)}$$

This shows that, for a given sample $x \in C$, the gradient of $\mathrm{JSD}(p(x)\,\|\,p_G(x))$ is a modification of the gradient of the log-likelihood, and stochastic gradient descent (SGD) can be used to optimize the JSD between the generator's output distribution and the empirical data distribution from the training data set. Optionally, when mini-batch SGD is used in the algorithm, several sequences are stacked together. Since the gradient contains a per-sequence probability factor (such as $p_\theta(x)$) that makes the gradient very small, this term can be normalized, e.g., made close to 1 or 0 over this batch of sequences, so that training becomes very stable and easy to tune.
The generator can thus be trained based on the gradient obtained from the discriminator's output information. It can be seen that the present application does not need to use a neural network to approximate the discriminator in the GAN model; instead, the explicit expression of the optimal discriminator is obtained directly and substituted into the value function, and the parameters of the generator are optimized directly according to the unique characteristics of the empirical data distribution. Because only the samples in the training data set need to be considered, the present application is easy to process: it needs neither two neural networks nor algorithms such as REINFORCE or Monte Carlo search to optimize the GAN, avoiding a series of problems of the traditional method, such as suboptimal solutions and training that does not converge easily, caused by the presence of a neural network discriminator, and solving the problem that the GAN cannot be applied to the text field due to the discreteness of text. It can therefore be widely used in text fields such as robot question answering, automatic news generation, and machine translation, and can also be used to generate realistic images. A generative adversarial network without a neural network discriminator can automatically generate text data, and the quality of the generated data is superior, in diversity and fluency, to existing text generation tools such as SeqGAN, RankGAN, and LeakGAN.
In this embodiment, the network training device can use a pre-selected training data set and a discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, and then train the generator according to the discrimination result information. The explicit expression of the optimal discriminator in the GAN can thus be obtained directly, and the generator of the GAN can be trained according to the output of the discriminator without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of another training method for a generative adversarial network provided by an embodiment of the present application. Specifically, as shown in FIG. 3, the training method of the generative adversarial network may include the following steps:
301. Obtain feature information of the generative adversarial network.
The feature information can be used to characterize the GAN (generator) to be trained. For example, the feature information may include the type of data the generator is to generate, application scenario information of the GAN, probability information of the lengths of the text (sequences) generated by the generator, ratio information of the lengths of the text generated by the generator, and so on, which are not enumerated one by one here.
302. Select at least one first sample from a sample database according to the feature information of the generative adversarial network, to determine a training data set.
The sample database may include various sample data, each of which is real data derived from a real data source. The determined training data set includes the at least one first sample, i.e., real data selected from the sample database. Optionally, each sample data in the sample database may carry its own type information (such as text or image) and label information (such as a domain label, a scenario keyword label, a text length label, etc.); and/or, optionally, the sample database may also be divided into sub-databases by text length, with a length label set for the text data of each sub-database, the lengths of the text data of each sub-database falling within the same length interval, and so on, which are not enumerated one by one here.
In a possible implementation, the network training device may determine the type of data the generator is to generate, i.e., the target type, select at least one first sample from the sample database according to the target type, and then generate the training data set including the at least one first sample. The sample database may include sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type. That is, when the training data set is selected, sample data of the same type as the data the GAN's generator is to generate can be selected, so as to realize personalized training of the GAN and improve training flexibility and reliability. For example, if the generator is used to generate text sequences (i.e., the GAN is applied to the text field), text data can be selected as samples and a training data set including the selected text data can be determined. For another example, if the generator is used to generate images (i.e., the GAN is applied to the image field), image data can be selected as samples and a training data set including the selected image data can be determined.
In a possible implementation, the network training device may obtain the application scenario information of the GAN and determine the corresponding label according to the application scenario information; for example, it may determine from a preset label set, according to the keyword of the application scenario information, the label corresponding to that keyword, i.e., the target label, then select at least one first sample from the sample database according to the target label, and generate the training data set including the at least one first sample. The label set may include multiple labels and the keyword corresponding to each label, and the keyword corresponding to the target label includes the keyword of the application scenario information; the sample database may include sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label. That is, when the training data set is selected, sample data under the label corresponding to the scenario field can be selected in combination with different GAN scenarios, so as to realize personalized training of the GAN and improve training flexibility and reliability. For example, for a GAN used in a bank intelligent robot, banking-related data, such as text data under a banking label, can be selected from the sample database as samples, and a training data set including the text data under the banking label can be determined. For another example, for a GAN used to automatically generate political news, politics-related data, such as text data under a politics label, can be selected from the sample database as samples to determine a training data set including the text data under the politics label, and so on, which are not enumerated one by one here.
In a possible implementation, the first sample may be a text sequence, so that the network training device may separately determine the probability that the generator generates text sequences of each length, and determine, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length; then at least one first sample may be selected from the sample database according to the proportions of the text sequences of each length to be selected, and the training data set including the at least one first sample may be generated. The sample database may include sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected (i.e., the proportion of the sample data corresponding to each length in the training data set is the same as the proportion of the text sequences of that length to be selected). That is, when the training data set is selected, it can also be selected in combination with sentence length, which improves the reliability and pertinence of training. Optionally, the probability may be set by staff based on experience, or determined through big data analysis. For example, the network training device may determine the corresponding proportion according to the set probability of text sequences of each length, and then select a corresponding proportion of sample data from the sample database (e.g., from the sample data under each length label, or from the sub-database corresponding to each length) according to the proportion corresponding to each length. For another example, the network training device may obtain the application scenario information of the GAN, select historical data of that application scenario according to the application scenario information, and then predict, from the selected historical data of the application scenario, the probability that the GAN's generator generates sentences of different lengths (e.g., separately determine the probability of sentences of each length in the historical data within a preset time period, and use the determined probabilities of sentences of each length as the probabilities that the generator generates text sequences of each length), so as to select a corresponding proportion of sample data as the training data set according to the probability, which can further improve the flexibility and reliability of GAN training.
Optionally, a number threshold may also be set in advance, and when the training data set is selected, a corresponding number of sample data may be selected as the training data set according to the number threshold. Further optionally, a correspondence between the feature information of the GAN and the number threshold may be preset, i.e., the number thresholds corresponding to different feature information may differ, so that in different training scenarios, a corresponding number of sample data can be flexibly selected as the training data set to train the GAN, further improving training reliability.
303. Train the discriminator with the first samples in the training data set.
The discriminator is determined according to a preset explicit expression and is no longer a neural network discriminator; the explicit expression can be used to indicate the probability of discriminating an input sample as real data.
304. Obtain a second sample generated by the generator, and use the discriminator to discriminate the samples input to the discriminator to obtain discrimination output information.
The sample input to the discriminator, i.e., the input of the discriminator, may be a first sample in the training data set or the second sample generated by the generator. The output information, i.e., the output of the discriminator, may include the first probability corresponding to the explicit expression, the first probability being the probability that the discriminator discriminates the input sample, such as the second sample, as real data.
305. Train the generator according to the output information of the discriminator.
Specifically, when the generator in the GAN is trained, the explicit expression corresponding to the discriminator can be substituted into the value function of the GAN, i.e., the first probability is substituted into the value function, so as to train the generator according to the unique characteristics of the empirical data distribution of the training data set. Optionally, the generator may be a neural network generator, i.e., a neural network model.
Optionally, for the description of steps 303-305, reference may be made to the relevant description of steps 202-204 in the embodiment shown in FIG. 2 above, which is not repeated here.
In other optional embodiments, before the GAN is trained, a prompt message may be output to prompt the user to select a network training mode, for example, whether to select text-based GAN training or image-based GAN training. If it is the former, the GAN can be trained according to the method shown in FIG. 2 or FIG. 3 (hereinafter referred to as mode 1). If it is the latter, the GAN can be trained according to mode 1; or the generator (minimization) and the discriminator (maximization) can be trained according to the minimax game, e.g.

$$\min_G \max_D V(G, D)$$

(hereinafter referred to as mode 2), so as to train the GAN; for example, a further prompt message may be output to prompt the user to select a specific training mode. Alternatively, optionally, the network training device may automatically select the training mode by obtaining the feature information of the GAN (such as the type of data the generator is to generate, the application field of the GAN, etc.); for example, mode 1 may be selected to train a GAN used to generate text sequences, and mode 2 may be selected to train a GAN used to generate images, and so on, which are not enumerated one by one here.
In this embodiment, the network training device can obtain the feature information of the GAN, select sample data as the training data set according to that feature information, and then use the sample data in the training data set and the discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, training the generator according to the discrimination result information. The flexibility and reliability of GAN training can thus be improved by flexibly selecting the training data set; and by directly obtaining the explicit expression of the optimal discriminator, the generator of the GAN can be trained according to the output of the discriminator without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
The above method embodiments are all illustrations of the training method of the generative adversarial network of the present application, and the description of each embodiment has its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a training apparatus for a generative adversarial network provided by an embodiment of the present application. The apparatus may be installed in a network training device and used to perform the training method of the above generative adversarial network. Specifically, the network training device 400 of this embodiment may include: an obtaining unit 401 and a training unit 402;
the obtaining unit 401 is configured to select at least one first sample from a preset sample database, the sample database including at least one sample data, the at least one first sample all being real data;
the training unit 402 is configured to train the discriminator with the at least one first sample;
the obtaining unit 401 is further configured to obtain a second sample generated by the generator, and use the discriminator to discriminate the second sample to obtain discrimination output information; where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, the output information including the first probability;
the training unit 402 is further configured to train the generator according to the output information of the discriminator.
Optionally, when the second sample is the first sample, the first probability is determined according to a second probability and a third probability, where the second probability is the distribution probability of the second sample in the training data set constituted by the at least one first sample, and the third probability is the probability that the generator generates the second sample;
when the second sample is not the first sample, the first probability is zero.
Optionally, when the second sample is the first sample, the first probability is the ratio of the second probability to a target sum value, the target sum value being the sum of the second probability and the third probability.
Optionally, the obtaining unit 401 may be specifically configured to determine the target type of data the generator is to generate, and select at least one first sample from the sample database according to the target type.
The sample database includes sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type.
Optionally, the obtaining unit 401 may be specifically configured to obtain application scenario information of the generative adversarial network, and determine a target label from a preset label set according to the keyword of the application scenario information, the label set including multiple labels and the keyword corresponding to each label, the keyword corresponding to the target label including the keyword of the application scenario information; and to select at least one first sample from the sample database according to the target label.
The sample database includes sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
Optionally, the first sample is a text sequence;
the obtaining unit 401 may be specifically configured to separately determine the probability that the generator generates text sequences of each length, and determine, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length; and to select at least one first sample from the sample database according to the proportions of the text sequences of each length to be selected.
The sample database includes sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected.
Optionally, the training unit 402 may be specifically configured to substitute the explicit expression into the value function of the generative adversarial network, so as to train the neural network generator with the first probability corresponding to the explicit expression.
Specifically, the network training device may implement, through the above units, some or all of the steps of the training method of the generative adversarial network in the embodiments shown in FIG. 2 to FIG. 3. It should be understood that the embodiments of the present application are apparatus embodiments corresponding to the method embodiments, and the description of the method embodiments is also applicable to the embodiments of the present application.
In this embodiment, the network training device can use a pre-selected training data set and a discriminator that is a preset explicit expression to discriminate the samples input to the discriminator, and then train the generator according to the discrimination result information, so that the explicit expression of the optimal discriminator in the GAN can be obtained directly and the generator of the GAN can be trained according to the output of the discriminator, without using a neural network to approximate the discriminator in the GAN, which helps to solve the problem that the GAN cannot be applied to the text field due to the discreteness of text, and reduces training complexity.
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of a network training device provided by an embodiment of the present application. The network training device can be used to perform the above method. As shown in FIG. 5, the network training device 500 in this embodiment may include: one or more processors 501 and a memory 502. Optionally, the network training device may further include one or more user interfaces 503 and/or one or more communication interfaces 504. The processor 501, the user interface 503, the communication interface 504, and the memory 502 may be connected by a bus 505, or may be connected in other ways; FIG. 5 takes the bus connection as an example. The memory 502 is configured to store a computer program that includes program instructions, and the processor 501 is configured to execute the program instructions stored in the memory 502.
The processor 501 may be configured to call the program instructions to perform the following steps: selecting at least one first sample from a preset sample database, the sample database including at least one sample data, the at least one first sample all being real data; training the discriminator with the at least one first sample; obtaining a second sample generated by the generator, and using the discriminator to discriminate the second sample to obtain discrimination output information, where the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, the output information including the first probability; and training the generator according to the output information of the discriminator.
Optionally, when the second sample is the first sample, the first probability is determined according to a second probability and a third probability, where the second probability is the distribution probability of the second sample in the training data set constituted by the at least one first sample, and the third probability is the probability that the generator generates the second sample; when the second sample is not the first sample, the first probability is zero.
Further optionally, when the second sample is the first sample, the first probability is the ratio of the second probability to a target sum value, the target sum value being the sum of the second probability and the third probability.
Optionally, when executing the selecting of at least one first sample from a preset sample database, the processor 501 may be further configured to perform the following steps: determining the target type of data the generator is to generate, and selecting at least one first sample from the sample database according to the target type, where the sample database includes sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type.
Optionally, when executing the selecting of at least one first sample from a preset sample database, the processor 501 may be further configured to perform the following steps: obtaining application scenario information of the generative adversarial network, and determining a target label from a preset label set according to the keyword of the application scenario information, the label set including multiple labels and the keyword corresponding to each label, the keyword corresponding to the target label including the keyword of the application scenario information; and selecting at least one first sample from the sample database according to the target label, where the sample database includes sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
Optionally, the first sample is a text sequence; when executing the selecting of at least one first sample from a preset sample database, the processor 501 may be further configured to perform the following steps: separately determining the probability that the generator generates text sequences of each length, and determining, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length; and selecting at least one first sample from the sample database according to the proportions of the text sequences of each length to be selected, where the sample database includes sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set matches the proportion of the text sequences of that length to be selected.
Optionally, the generator may be a neural network generator.
Optionally, when executing the training of the generator with the output information of the discriminator, the processor 501 may specifically perform the following step: substituting the explicit expression into the value function of the generative adversarial network, so as to train the neural network generator with the first probability corresponding to the explicit expression.
The processor 501 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The user interface 503 may include an input device and an output device; the input device may include a touch panel, a microphone, and the like, and the output device may include a display (an LCD, etc.), a speaker, and the like.
The communication interface 504 may include a receiver and a transmitter for communicating with other devices.
The memory 502 may include a read-only memory and a random access memory, and provides instructions and data to the processor 501. A portion of the memory 502 may also include a non-volatile random access memory; for example, the memory 502 may also store the above explicit expression, and so on.
In specific implementations, the processor 501 and the like described in the embodiments of the present application may execute the implementations described in the method embodiments shown in FIG. 2 to FIG. 3 above, and may also execute the implementations of the units described in FIG. 4 of the embodiments of the present application, which are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, some or all of the steps of the training method of the generative adversarial network described in the embodiments corresponding to FIG. 2 to FIG. 3 can be implemented, and the functions of the apparatus or network training device of the embodiments shown in FIG. 4 or FIG. 5 of the present application can also be implemented, which are not repeated here.
An embodiment of the present application further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform some or all of the steps of the above method, which are not repeated here.
The computer-readable storage medium may be an internal storage unit of the network training device described in any of the foregoing embodiments, such as a hard disk or memory of the network training device. The computer-readable storage medium may also be an external storage device of the network training device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the network training device.
In the present application, the term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In the various embodiments of the present application, the sizes of the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The above are only some implementations of the present application, but the protection scope of the present application is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall all fall within the protection scope of the present application.

Claims (20)

  1. A training method for a generative adversarial network, the generative adversarial network comprising a generator and a discriminator, wherein the method comprises:
    selecting at least one first sample from a preset sample database, the sample database comprising at least one sample data, the at least one first sample all being real data;
    training the discriminator with the at least one first sample;
    obtaining a second sample generated by the generator, and using the discriminator to discriminate the second sample to obtain discrimination output information; wherein the discriminator is configured to identify, according to the first sample and using an explicit expression, a first probability that the second sample is real data, the output information comprising the first probability;
    training the generator according to the output information of the discriminator.
  2. The method according to claim 1, wherein,
    when the second sample is the first sample, the first probability is determined according to a second probability and a third probability; wherein the second probability is the distribution probability of the second sample in the training data set constituted by the at least one first sample, and the third probability is the probability that the generator generates the second sample;
    when the second sample is not the first sample, the first probability is zero.
  3. The method according to claim 2, wherein, when the second sample is the first sample, the first probability is the ratio of the second probability to a target sum value, the target sum value being the sum of the second probability and the third probability.
  4. The method according to any one of claims 1-3, wherein the selecting at least one first sample from a preset sample database comprises:
    determining a target type of data the generator is to generate, and selecting at least one first sample from the sample database according to the target type; wherein the sample database comprises sample data of multiple data types, and the data type of each first sample in the at least one first sample is the same as the target type.
  5. The method according to any one of claims 1-3, wherein the selecting at least one first sample from a preset sample database comprises:
    obtaining application scenario information of the generative adversarial network, and determining a target label from a preset label set according to a keyword of the application scenario information, the label set comprising multiple labels and a keyword corresponding to each label, the keyword corresponding to the target label comprising the keyword of the application scenario information;
    selecting at least one first sample from the sample database according to the target label; wherein the sample database comprises sample data respectively corresponding to multiple sample labels, and the sample label of the first sample is the same as the target label.
  6. The method according to any one of claims 1-3, wherein the first sample is a text sequence, and the selecting at least one first sample from a preset sample database comprises:
    separately determining the probability that the generator generates text sequences of each length, and determining, according to the probability of generating text sequences of each length, the proportion of text sequences of each length to be selected, the probability and the proportion corresponding one-to-one for the text sequences of each length;
    selecting at least one first sample from the sample database according to the proportions of the text sequences of each length to be selected;
    wherein the sample database comprises sample data corresponding to text sequences of each length, and the proportion of the sample data of each length in the at least one first sample in the training data set constituted by the at least one first sample matches the proportion of the text sequences of that length to be selected.
  7. The method according to any one of claims 1-3, wherein the generator is a neural network generator, and the training the generator with the output information of the discriminator comprises:
    substituting the explicit expression into the value function of the generative adversarial network, so as to train the neural network generator with the first probability corresponding to the explicit expression.
  8. A training apparatus for a generative adversarial network, comprising an obtaining unit and a training unit, wherein:
    the obtaining unit is configured to select at least one first sample from a preset sample database, the sample database comprising at least one piece of sample data, the at least one first sample all being real data;
    the training unit is configured to train the discriminator with the at least one first sample;
    the obtaining unit is further configured to obtain a second sample generated by the generator, and to discriminate the second sample by using the discriminator to obtain output information after discrimination, wherein the discriminator is configured to determine, according to the first sample and by using an explicit expression, a first probability that the second sample is real data, and the output information comprises the first probability; and
    the training unit is further configured to train the generator according to the output information of the discriminator.
  9. The apparatus according to claim 8, wherein:
    when the second sample is one of the first samples, the first probability is determined according to a second probability and a third probability, wherein the second probability is a distribution probability of the second sample in a training data set formed by the at least one first sample, and the third probability is a probability that the generator generates the second sample; and
    when the second sample is not one of the first samples, the first probability is zero.
  10. The apparatus according to claim 9, wherein when the second sample is one of the first samples, the first probability is a ratio of the second probability to a target sum, the target sum being the sum of the second probability and the third probability.
  11. The apparatus according to any one of claims 8-10, wherein:
    the obtaining unit is specifically configured to determine a target type of data to be generated by the generator, and to select at least one first sample from the sample database according to the target type, wherein the sample database comprises sample data of multiple data types, and the data type of each first sample of the at least one first sample is the same as the target type.
  12. The apparatus according to any one of claims 8-10, wherein:
    the obtaining unit is specifically configured to obtain application scenario information of the generative adversarial network, to determine a target tag from a preset tag set according to keywords of the application scenario information, the tag set comprising multiple tags and keywords corresponding to each tag, the keywords corresponding to the target tag comprising the keywords of the application scenario information, and to select at least one first sample from the sample database according to the target tag,
    wherein the sample database comprises sample data respectively corresponding to multiple sample tags, and the sample tag of the first sample is the same as the target tag.
  13. The apparatus according to any one of claims 8-10, wherein the first sample is a text sequence; and
    the obtaining unit is specifically configured to separately determine a probability that the generator generates a text sequence of each length, to determine, according to the probabilities of generating the text sequences of the respective lengths, a proportion of text sequences of each length to be selected, the probability and the proportion corresponding one to one for the text sequences of each length, and to select at least one first sample from the sample database according to the proportions of the text sequences of the respective lengths to be selected,
    wherein the sample database comprises sample data corresponding to text sequences of the respective lengths, and the proportion of the sample data of each length among the at least one first sample in the training data set formed by the at least one first sample matches the proportion of the text sequences of that length to be selected.
  14. The apparatus according to any one of claims 8-10, wherein the generator is a neural network generator; and
    the training unit is specifically configured to substitute the explicit expression into a value function of the generative adversarial network, so as to train the neural network generator according to the first probability corresponding to the explicit expression.
  15. A network training device, comprising a processor and a memory connected to each other, wherein the memory is configured to store a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions to perform the following steps:
    selecting at least one first sample from a preset sample database, the sample database comprising at least one piece of sample data, the at least one first sample all being real data;
    training a discriminator included in a generative adversarial network with the at least one first sample;
    obtaining a second sample generated by the generator, and discriminating the second sample by using the discriminator to obtain output information after discrimination, wherein the discriminator is configured to determine, according to the first sample and by using an explicit expression, a first probability that the second sample is real data, and the output information comprises the first probability; and
    training a generator included in the generative adversarial network according to the output information of the discriminator.
  16. The device according to claim 15, wherein:
    when the second sample is one of the first samples, the first probability is determined according to a second probability and a third probability, wherein the second probability is a distribution probability of the second sample in a training data set formed by the at least one first sample, and the third probability is a probability that the generator generates the second sample; and
    when the second sample is not one of the first samples, the first probability is zero.
  17. The device according to claim 16, wherein when the second sample is one of the first samples, the first probability is a ratio of the second probability to a target sum, the target sum being the sum of the second probability and the third probability.
  18. The device according to any one of claims 15-17, wherein, when calling the program instructions to perform the selecting at least one first sample from a preset sample database, the processor specifically performs the following steps:
    determining a target type of data to be generated by the generator, and selecting at least one first sample from the sample database according to the target type, wherein the sample database comprises sample data of multiple data types, and the data type of each first sample of the at least one first sample is the same as the target type.
  19. The device according to any one of claims 15-17, wherein the generator is a neural network generator, and when calling the program instructions to perform the training the generator through the output information of the discriminator, the processor specifically performs the following steps:
    substituting the explicit expression into a value function of the generative adversarial network, so as to train the neural network generator according to the first probability corresponding to the explicit expression.
  20. A computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1-7.
PCT/CN2018/123519 2018-10-24 2018-12-25 Training method for generative adversarial network, related device and medium WO2020082572A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811247859.7 2018-10-24
CN201811247859.7A CN109492764B (zh) 2018-10-24 2018-10-24 Training method for generative adversarial network, related device and medium

Publications (1)

Publication Number Publication Date
WO2020082572A1 true WO2020082572A1 (zh) 2020-04-30

Family

ID=65691576

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123519 WO2020082572A1 (zh) 2018-10-24 2018-12-25 生成式对抗网络的训练方法、相关设备及介质

Country Status (2)

Country Link
CN (1) CN109492764B (zh)
WO (1) WO2020082572A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046332B * 2019-04-04 2024-01-23 远光软件股份有限公司 Method and apparatus for generating a similar-text data set
CN110084287A * 2019-04-11 2019-08-02 北京迈格威科技有限公司 Adversarial training method and apparatus for an image recognition network
CN110046254B * 2019-04-18 2022-03-08 阿波罗智联(北京)科技有限公司 Method and apparatus for generating a model
CN110147535A * 2019-04-18 2019-08-20 平安科技(深圳)有限公司 Similar-text generation method, apparatus, device, and storage medium
CN110288965B * 2019-05-21 2021-06-18 北京达佳互联信息技术有限公司 Music synthesis method and apparatus, electronic device, and storage medium
CN110211045B * 2019-05-29 2022-09-06 电子科技大学 Super-resolution face image reconstruction method based on an SRGAN network
CN110188172B * 2019-05-31 2022-10-28 清华大学 Text-based event detection method, apparatus, computer device, and storage medium
CN112115257B * 2019-06-20 2023-07-14 百度在线网络技术(北京)有限公司 Method and apparatus for generating an information evaluation model
CN110245459B * 2019-06-28 2021-06-01 北京师范大学 Laser cleaning effect preview method and apparatus
CN110969681B * 2019-11-29 2023-08-29 山东浪潮科学研究院有限公司 GAN-network-based handwritten calligraphy character generation method
CN111461168B * 2020-03-02 2024-07-23 平安科技(深圳)有限公司 Training sample expansion method and apparatus, electronic device, and storage medium
CN112329568A * 2020-10-27 2021-02-05 西安晟昕科技发展有限公司 Radiation source signal generation method, apparatus, and storage medium
CN112613572B * 2020-12-30 2024-01-23 北京奇艺世纪科技有限公司 Sample data acquisition method and apparatus, electronic device, and storage medium
CN112329404B * 2021-01-04 2021-08-24 湖南科迪云飞信息科技有限公司 Fact-oriented text generation method, apparatus, and computer device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3089081A4 (en) * 2014-02-10 2017-09-20 Mitsubishi Electric Corporation Hierarchical neural network device, learning method for determination device, and determination method
CN104182767B * 2014-09-05 2018-03-13 西安电子科技大学 Hyperspectral image classification method combining active learning and neighborhood information
US10572979B2 * 2017-04-06 2020-02-25 Pixar Denoising Monte Carlo renderings using machine learning with importance sampling
CN107392973B * 2017-06-06 2020-01-10 中国科学院自动化研究所 Pixel-level automatic generation method for handwritten Chinese characters, storage device, and processing apparatus
CN107590531A * 2017-08-14 2018-01-16 华南理工大学 WGAN method based on text generation
CN108573047A * 2018-04-18 2018-09-25 广东工业大学 Training method and apparatus for a Chinese text classification model
CN108596265B * 2018-05-02 2022-04-08 中山大学 Video generation model based on text description information and a generative adversarial network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180293734A1 (en) * 2017-04-06 2018-10-11 General Electric Company Visual anomaly detection system
CN108960278A * 2017-05-18 2018-12-07 英特尔公司 Novelty detection using a discriminator of a generative adversarial network
CN107968962A * 2017-12-12 2018-04-27 华中科技大学 Video generation method for two non-adjacent image frames based on deep learning
CN108460812A * 2018-04-04 2018-08-28 北京红云智胜科技有限公司 Emoticon pack generation system and method based on deep learning

Also Published As

Publication number Publication date
CN109492764A (zh) 2019-03-19
CN109492764B (zh) 2024-07-26

Similar Documents

Publication Publication Date Title
WO2020082572A1 (zh) Training method for generative adversarial network, related device and medium
US11455981B2 (en) Method, apparatus, and system for conflict detection and resolution for competing intent classifiers in modular conversation system
US11978245B2 (en) Method and apparatus for generating image
CN109313650B (zh) Generating responses in automated chat
CN114416953B (zh) Question-answer processing method, and training method and apparatus for a question-answer model
CN107741976B (zh) Intelligent response method, apparatus, medium, and electronic device
CN112395979B (zh) Image-based health state identification method, apparatus, device, and storage medium
US12106058B2 (en) Multi-turn dialogue response generation using asymmetric adversarial machine classifiers
CN110795542B (zh) Dialogue method, and related apparatus and device
WO2021068563A1 (zh) Sample data processing method and apparatus, computer device, and storage medium
WO2021120854A1 (zh) Model training method, training method for a member detection apparatus, and system thereof
CN111832276A (zh) Rich message embedding for conversation de-interleaving
US20200311350A1 (en) Generating method, learning method, generating apparatus, and non-transitory computer-readable storage medium for storing generating program
CN112101042A (zh) Text emotion recognition method and apparatus, terminal device, and storage medium
CN107291774B (zh) Error sample identification method and apparatus
CN114564595A (zh) Knowledge graph updating method and apparatus, and electronic device
CN113821587A (zh) Text relevance determination method, model training method, apparatus, and storage medium
CN116756564A (zh) Training method and use method of a task-solving-oriented generative large language model
WO2024174714A1 (zh) Authenticity verification method and apparatus
US20240086766A1 (en) Candidate machine learning model identification and selection
WO2021174814A1 (zh) Answer verification method and apparatus for crowdsourcing tasks, computer device, and storage medium
WO2021077834A1 (zh) Method and apparatus for posing a counter-question to a user question based on a dialogue system
CN111143524A (zh) User intention determination method and electronic device
CN110704602A (zh) Human-machine dialogue system optimization method and human-machine dialogue system
JP5850886B2 (ja) Information processing apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18937913

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 31.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18937913

Country of ref document: EP

Kind code of ref document: A1