CN111241291B - Method and device for generating countermeasure sample by utilizing countermeasure generation network - Google Patents
- Publication number: CN111241291B (application CN202010329630.9A)
- Authority
- CN
- China
- Prior art keywords
- vector
- sample
- ith
- category
- generator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Abstract
The embodiment of the present specification provides a method for generating a countermeasure sample by using a countermeasure generation network. The countermeasure generation network includes a classifier which is trained in advance and used for executing an N-class classification task aiming at business objects, a generator used for generating simulation samples corresponding to the real samples of each class, and N discriminators corresponding to the N classes, wherein the ith discriminator is used for discriminating whether a sample input into it belongs to the real samples under the ith class. The method realizes the training of the generator and the discriminators, and the trained generator is further used for generating confrontation samples which have a designated real category but can be predicted as other categories by the classifier; meanwhile, high-quality confrontation samples can be generated efficiently and quickly in large batches.
Description
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a method and a device for generating a countermeasure sample by utilizing a countermeasure generation network.
Background
A countermeasure sample (also called an adversarial sample) is an input sample formed by purposely adding a subtle perturbation to an original input, causing a machine learning model to output an erroneous result with high confidence. For example, in a text classification scenario, text content originally identified by the text classification model as a violation may be misclassified as non-violating after a small modification that, as seen by humans, leaves the semantics substantially unchanged. Similarly, in an image recognition scenario, a picture originally recognized as a panda by the image processing model may be misclassified as a gibbon after a slight change is added that cannot be detected by the human eye.
The confrontation sample may be used by an attacker to attack the machine learning model, resulting in low prediction accuracy of the machine learning model. Therefore, a large number of countermeasure samples need to be generated in advance, and the machine learning model needs to be trained by using the countermeasure samples, so that the model can correctly classify the countermeasure samples to resist external attacks.
However, generating countermeasure samples currently requires considerable manual intervention, and the number and quality of the generated countermeasure samples are quite limited. Therefore, a solution is needed to generate high-quality countermeasure samples in large quantities quickly and efficiently.
Disclosure of Invention
One or more embodiments in the present specification provide a method for generating countermeasure samples using a countermeasure generation network, which can achieve fast and efficient generation of large batches of high-quality countermeasure samples.
According to a first aspect, there is provided a method of generating a confrontation sample using a confrontation generating network comprising a pre-trained classifier for performing N classes of classification tasks on a business object; the countermeasure generation network also includes a generator and N discriminators corresponding to the N categories, where N is a positive integer greater than 1. The method comprises the following steps:
obtaining a first noise vector and obtaining an ith category vector corresponding to an ith category, wherein i is a positive integer not greater than N; inputting the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample; inputting the first simulation sample into the ith discriminator to obtain a first probability that the first simulation sample belongs to a real sample under the ith category; inputting the obtained first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample under the ith category; training the ith discriminator with the aim of decreasing the first probability and increasing the second probability; inputting the first simulation sample into the classifier to obtain a third probability that the first simulation sample belongs to the ith class; training the generator with the aim of increasing the first probability and decreasing the third probability, wherein the trained generator is used for generating target confrontation samples which simulate real samples of target classes but are predicted as other classes by the classifier.
In one embodiment, wherein obtaining the first noise vector comprises: and randomly sampling the noise space which accords with the Gaussian distribution to obtain the first noise vector.
In one embodiment, wherein obtaining an ith category vector corresponding to the ith category comprises: acquiring N category labels, and carrying out one-hot coding on the N category labels to correspondingly obtain N one-hot coding vectors; taking the N one-hot coding vectors as N category vectors, including the ith category vector.
In one embodiment, wherein the inputting the first noise vector and the ith category vector into the generator together comprises: splicing the first noise vector and the ith category vector to obtain a spliced vector, and inputting the spliced vector into the generator; or, summing the first noise vector and the ith category vector to obtain a summed vector, and inputting the summed vector into the generator.
In one embodiment, wherein the business object is text, the generator is a Recurrent Neural Network (RNN); wherein, inputting the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample, comprising: performing fusion processing on the first noise vector and the ith category vector to obtain a fusion vector which is used as an initial state vector of a hidden layer in the RNN; taking wildcards for text characters as initial input of the RNN network to obtain the first simulation sample.
In one embodiment, the business object is text or a picture or audio, and the trained generator is used for generating a text countermeasure sample or a picture countermeasure sample or an audio countermeasure sample.
In one embodiment, wherein after training the generator, the method further comprises: obtaining a second noise vector, and obtaining a target class vector corresponding to a target class, the target class belonging to the N classes; and inputting the second noise vector and the target category vector into the trained generator together to obtain the target confrontation sample.
According to a second aspect, there is provided an apparatus for generating countermeasure samples using a countermeasure generation network comprising a pre-trained classifier for performing N classes of classification tasks for business objects; the countermeasure generation network further includes a generator and N discriminators corresponding to the N categories, where N is a positive integer greater than 1; the device comprises:
a noise vector acquisition unit configured to acquire a first noise vector; a category vector acquisition unit configured to acquire an ith category vector corresponding to an ith category, where i is a positive integer not greater than N; the simulation sample generating unit is configured to input the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample; the analog sample distinguishing unit is configured to input the first analog sample into the ith discriminator to obtain a first probability that the first analog sample belongs to a real sample under an ith category; the real sample distinguishing unit is configured to input the acquired first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample under the ith category; a discriminator training unit configured to train the i-th discriminator with a target of decreasing the first probability and increasing the second probability; the analog sample classification unit is configured to input the first analog sample into the classifier to obtain a third probability that the first analog sample belongs to the ith class; and the generator training unit is configured to train the generator with the first probability increased and the third probability decreased as targets, and the trained generator is used for generating target confrontation samples which simulate real samples of target classes but are predicted as other classes by the classifier.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method provided in the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method provided in the first aspect.
In summary, by using the method and apparatus for generating countermeasure samples provided by the embodiments of the present specification, it is possible to generate countermeasure samples having a specified true category but predicted by the classifier as other categories. Moreover, by utilizing the generator after training, a large quantity of high-quality confrontation samples can be efficiently and quickly generated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments disclosed in the present specification, the drawings needed to be used in the description of the embodiments will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments disclosed in the present specification, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 illustrates a content diagram of a text confrontation sample according to one example;
FIG. 2 illustrates an application architecture diagram of a countermeasure generation network, according to one embodiment;
FIG. 3 illustrates a flow diagram of a method for generating a challenge sample using a challenge generation network, according to one embodiment;
FIG. 4 shows a schematic diagram of the structure of an RNN network according to one embodiment;
FIG. 5 is a schematic diagram of an apparatus for generating a challenge sample using a challenge generation network according to an embodiment.
Detailed Description
Embodiments disclosed in the present specification are described below with reference to the accompanying drawings.
As mentioned above, generating countermeasure samples currently requires much manual intervention, and the number and quality of the generated samples are very limited. For example, in the natural language field, confrontation samples are currently generated mainly by replacing individual words in the original text. In one example, FIG. 1 is a schematic diagram illustrating the content of a text confrontation sample, wherein the struck-through text is the original text, which is replaced by other text to obtain a text confrontation sample (or natural language confrontation sample). Such replacement does not take into account the context of the word being replaced, so the replaced sentence often violates grammatical rules and does not read smoothly. In addition, after words are modified, the meaning of the text may change greatly, so that the category a human would assign to the modified text also changes, which runs against the principle of confrontation sample generation; therefore, the quality of confrontation samples generated by the current replacement approach is poor. Moreover, generating confrontation samples by such replacement requires a large amount of trial and verification, and only a small number of text confrontation samples can be obtained.
For another example, in the field of image processing, confrontation samples are currently generated mainly by a fast gradient method or an iterative gradient method. Specifically, in the fast gradient method, an image is given and input into the model intended to be attacked to obtain a prediction result, and the original image is then modified by a gradient descent method so as to make the prediction result worse; the iterative gradient method repeats the process of the fast gradient method several times on one image and uses the last modified image as the countermeasure sample. However, the quality of the countermeasure sample obtained by the fast gradient method is not high: although the confidence of classifying it into the correct class is reduced, it is difficult for the model to output it into another class with high confidence. Meanwhile, the amount of calculation required to generate countermeasure samples by the iterative gradient method is large.
Furthermore, the inventors have also found that, with the current ways of generating countermeasure samples, it is difficult to quickly generate a good-quality countermeasure sample with a specified true category (corresponding to the classification result a human would give).
Based on the above observations, the inventor designs a specific countermeasure generation network, and further provides a method for generating countermeasure samples by using the countermeasure generation network, so as to generate a large quantity of high-quality countermeasure samples quickly and efficiently. It is to be noted that "confrontation" in the confrontation generation network and "confrontation" in the confrontation sample are two different concepts.
Specifically, FIG. 2 is a schematic diagram illustrating an application architecture of a countermeasure generation network according to an embodiment. The countermeasure generation network is used for processing data related to a business object. As shown in FIG. 2, the countermeasure generation network includes a pre-trained classifier for performing a classification task over N classes (N being a positive integer greater than 1), a generator for generating business object samples, and N discriminators corresponding to the N classes, where each discriminator is used for discriminating whether an input sample is a real sample under the corresponding class or a false sample generated by the generator. The trained generator can then be utilized to generate confrontation samples which have high similarity with the real samples of any specified category among the N categories but are identified as other categories by the classifier.
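For illustration only, the following is a minimal sketch of this layout using PyTorch, assuming toy dimensions, a fully connected generator and per-class discriminators, and a hypothetical `load_pretrained_classifier` helper; none of these names or sizes come from the patent itself.

```python
import torch
import torch.nn as nn

N_CLASSES = 3      # assumed number of categories N
NOISE_DIM = 64     # assumed noise-vector dimension
SAMPLE_DIM = 128   # assumed dimension of a (vectorized) business-object sample

class Generator(nn.Module):
    """Maps a noise vector fused with a class vector to a simulated sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, SAMPLE_DIM),
        )
    def forward(self, noise, class_vec):
        # fusion by concatenation (splicing); summation is the other option
        return self.net(torch.cat([noise, class_vec], dim=-1))

class Discriminator(nn.Module):
    """One per class: probability that an input sample is a real sample of that class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SAMPLE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, sample):
        return self.net(sample)

generator = Generator()
discriminators = [Discriminator() for _ in range(N_CLASSES)]
# classifier = load_pretrained_classifier()  # hypothetical: pre-trained N-way classifier, frozen
```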
For the sake of understanding, the above-mentioned countermeasure generation network will be specifically described below.
In an embodiment, the business object targeted by the countermeasure generation network may be text, and accordingly the classifier, the generator and the discriminators are a text classifier, a text generator and text discriminators respectively, and the confrontation samples generated by the trained generator are text confrontation samples. In a specific embodiment, the text may include text posted by a user on a social platform; the corresponding classification task may be to identify whether the posted text contains violation content, with the N categories correspondingly set to include violation and non-violation, or the classification task may be to identify the user emotion contained in the text, with the N categories correspondingly set to include, for example, happiness, anger and calm. In another specific embodiment, the text may include an information text in a content information platform; the corresponding classification task may be to determine the domain category to which the information text belongs, with the N categories correspondingly set to include sports, entertainment, popular science and the like, or the classification task may be to identify the degree of interest of the user in the information text, with the N categories correspondingly set to include not interested, very interested and the like.
In another embodiment, the business object may be a picture, and accordingly, the classifier, the generator and the discriminator are a picture classifier, a picture generator and a picture discriminator, respectively, and the confrontation sample generated by the trained generator is used as a picture confrontation sample. In a specific embodiment, the pictures may include pictures of animals and plants, the corresponding classification task may be to determine the category to which the animals and plants belong, and the correspondingly set N categories may include pandas, tigers, lions, and the like. In another specific embodiment, the picture may include a face image, the corresponding classification task may be to identify a face identity in the face image, and the correspondingly set N categories may be N different user identities, where the user identity may be uniquely identified by a mobile phone number, an identity card number, or the like.
In another embodiment, the business object may be audio, and accordingly the classifier, the generator and the discriminators are respectively an audio classifier, an audio generator and audio discriminators, and the confrontation samples generated by the trained generator are audio confrontation samples. In a specific embodiment, the audio may be a user query voice recorded in customer service, the corresponding classification task may be to determine the standard user question corresponding to the query voice, and the correspondingly set N categories may include standard user questions such as how to activate a credit service, how to adjust a credit limit, and the like. In another specific embodiment, the audio may be a verification voice used as a login password, and the corresponding classification task may be to identify the user identity corresponding to the verification voice.
In still another embodiment, the business object may also be a user, a merchant, a commodity, a business event, and the like. In a particular embodiment, the business event may include a social event (such as a session initiated through instant messaging software or a transfer initiated through a payment platform), a login event, and the like.
As can be seen from the above, the countermeasure generation network can be used to process relevant data of the business objects such as text, pictures, audio, users, merchants, business events, and the like.
On the other hand, in one embodiment, the classifier, the generator and the discriminators in the above-described countermeasure generation network may be implemented based on a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN). In an embodiment, the network structures of the N discriminators may be the same or different.
The above describes the countermeasure generation network. A method for generating a confrontation sample using this countermeasure generation network is described below with reference to an embodiment. Specifically, FIG. 3 shows a flowchart of a method for generating a countermeasure sample using a countermeasure generation network according to an embodiment, and the execution subject of the method can be any device, equipment, system or platform with computing and processing capabilities. As shown in FIG. 3, the method comprises the following steps:
step S310, acquiring a first noise vector and an ith category vector corresponding to the ith category, wherein i is a positive integer not greater than N; step S320, inputting the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample; step S330, inputting the first simulation sample into the ith discriminator to obtain a first probability that the first simulation sample belongs to a real sample under the ith category; step S340, inputting the acquired first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample under the ith category; step S350, training the ith discriminator by taking the first probability as a reduction and the second probability as an increase as a target; step S360, inputting the first simulation sample into the classifier to obtain a third probability that the first simulation sample belongs to the ith class; step S370, training the generator to increase the first probability and decrease the third probability as a target, wherein the trained generator is used to generate a target confrontation sample, which simulates a target class real sample but is predicted as other classes by the classifier.
In the above steps, it should first be explained that terms such as "first" in "first noise vector" and "first simulation sample", and "second" in "second probability", are used only to distinguish things of the same kind and have no other limiting effect.
The steps are as follows:
first, in step S310, a first noise vector is obtained, and an ith class vector corresponding to the ith class is obtained, where i is a positive integer not greater than N.
For convenience of description, a certain noise vector acquired in this step is referred to as a first noise vector. Specifically, a noise space conforming to a specific distribution may be randomly sampled, resulting in the first noise vector. In one embodiment, the specific distribution may be a Gaussian distribution (or a standard normal distribution). In another embodiment, the specific distribution may be a Laplacian distribution.
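As a small illustrative sketch (the dimension is an assumption, not taken from the patent), sampling such a noise vector might look as follows:

```python
import torch

NOISE_DIM = 64  # assumed dimension of the noise space

# Gaussian (standard normal) noise space
first_noise_vector = torch.randn(NOISE_DIM)

# alternatively, a Laplacian noise space
laplace = torch.distributions.Laplace(loc=0.0, scale=1.0)
alt_noise_vector = laplace.sample((NOISE_DIM,))
```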
On the other hand, for obtaining the ith category vector, in an embodiment, the obtaining may include: firstly, acquiring N category labels, and performing one-hot coding on the N category labels to correspondingly obtain N one-hot coding vectors; then, taking the N one-hot coding vectors as N category vectors, including the ith category vector.
In a specific embodiment, the N category labels may be N category identifiers, where each category identifier is used to uniquely identify a corresponding category. In one example, the category labels may be comprised of letters, numbers, symbols, or the like. In another specific embodiment, the N category labels may be category names of N categories, such as "low risk", "medium risk", and "high risk".
It should be understood that the above-mentioned one-hot encoding, also known as one-bit-effective encoding, uses an M-bit status register (M being a positive integer) to encode M states, where each state has its own independent register bit and only one bit is active at any time. Based on this, the obtained category labels can be encoded into N-dimensional vectors in which the value of one dimension is different from the values of the other dimensions. In one example, assuming that the above-mentioned N category labels include class numbers 1, 2 and 3, the 3 category labels may be encoded into three one-hot encoding vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) in sequence.
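A small sketch of this encoding, assuming N = 3 categories numbered as in the example above:

```python
import torch
import torch.nn.functional as F

N = 3
label_indices = torch.tensor([0, 1, 2])              # category labels 1, 2, 3 as indices
class_vectors = F.one_hot(label_indices, num_classes=N).float()
# class_vectors -> [[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]
ith_class_vector = class_vectors[1]                  # e.g. the vector of the 2nd category
```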
In another embodiment, N category vectors may be randomly assigned to the N categories, only the N category vectors need to be different from each other. Thus, N category vectors corresponding to the N categories can be obtained, and an ith category vector corresponding to the ith category is obtained from the N category vectors.
After the first noise vector and the ith category vector are obtained, in step S320, the first noise vector and the ith category vector are input into the generator together, so as to obtain a first simulation sample corresponding to the ith category real sample.
Specifically, the first noise vector and the ith category vector may be fused to obtain a fusion vector, and the fusion vector is input into the generator. In one embodiment, the fusion processing may be splicing (concatenation) processing, and the corresponding fusion vector is a spliced vector. In another embodiment, the fusion processing may be summation processing, and the resulting fusion vector is a summed vector.
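A minimal sketch of the two fusion options, with assumed dimensions; the projection used in the summation variant is an illustrative assumption (the patent only requires that the two vectors be summed):

```python
import torch
import torch.nn as nn

noise = torch.randn(64)                    # first noise vector
class_vec = torch.tensor([0., 1., 0.])     # ith category vector (one-hot, N = 3)

# Option 1: splicing (concatenation) -> a 67-dimensional fusion vector
concat_vec = torch.cat([noise, class_vec])

# Option 2: summation -> both vectors must share a dimension; here the class
# vector is first projected to the noise dimension (an assumption for illustration)
project = nn.Linear(3, 64)
sum_vec = noise + project(class_vec)
```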
In one implementation, in the case that the business object is text, the generator may adopt a Recurrent Neural Network (RNN). FIG. 4 illustrates a structural schematic diagram of the RNN network according to an embodiment. As shown in FIG. 4, based on the hidden state vector h_t and the input word word_t, the RNN network can predict the next word word_{t+1}, where t is a natural number. In particular, if an initial state vector h_0 is given and a wildcard character <Go> (a special symbol indicating the beginning of a sentence) is used as the first word word_1 of the sentence, then the RNN network, after training, can generate each word of a sentence in turn, resulting in a natural language sentence that conforms to real-world natural language.
Based on this, the method can comprise the following steps: using the fusion vector as the initial state vector of the hidden layer in the RNN network, and using a wildcard character for text (such as the wildcard <Go>) as the initial input of the RNN network, so as to obtain a piece of natural language text output by the RNN network, namely the first simulation sample. A simulated text obtained in this way, used as a text countermeasure sample, conforms better to human language habits and grammar and reads more smoothly.
It should be noted that, in the case that the business object is text, the generator may also use other neural networks derived from improvements of the RNN network, such as a Long Short-Term Memory (LSTM) network or a Gated Recurrent Unit (GRU) network.
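The following is a minimal sketch of this text-generation step, assuming a GRU cell as the RNN unit, an illustrative vocabulary size, greedy word selection, and made-up ids for the <Go> and end-of-sentence symbols; it only shows how the fusion vector seeds the hidden state h_0 and how words are emitted one at a time.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN = 5000, 128
GO_ID, EOS_ID = 1, 2                      # assumed ids of the <Go> and end-of-sentence symbols

embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
cell = nn.GRUCell(HIDDEN, HIDDEN)         # RNN unit (a GRU cell, for brevity)
to_vocab = nn.Linear(HIDDEN, VOCAB_SIZE)

def generate_text(fusion_vec, max_len=30):
    """Generate one simulated sentence as a list of word ids."""
    h = fusion_vec                         # h_0 := fusion of noise and class vectors
    token = torch.tensor(GO_ID)            # word_1 := the <Go> wildcard
    word_ids = []
    for _ in range(max_len):
        h = cell(embed(token).unsqueeze(0), h.unsqueeze(0)).squeeze(0)
        token = to_vocab(h).argmax()       # greedy choice of the next word
        if token.item() == EOS_ID:
            break
        word_ids.append(token.item())
    return word_ids

sentence = generate_text(torch.randn(HIDDEN))   # random fusion vector for illustration
```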
In the above, the first simulation sample simulating the ith category real sample can be obtained. Then, in step S330, the first simulated sample is input into the i-th discriminator to obtain a first probability that the first simulated sample belongs to the real sample in the i-th category. In step S340, the obtained first true sample belonging to the ith category is input into the ith discriminator, so as to obtain a second probability that the first true sample is a true sample in the ith category. In one embodiment, the first real sample may be obtained by randomly sampling from a set of real samples corresponding to the ith category. In another embodiment, a plurality of true samples belonging to the ith category may be obtained from the total set of true samples corresponding to the N categories, and used as the first true sample.
It should be noted that the purpose of the ith discriminator is to distinguish the ith category of real samples from the simulated samples (or dummy samples) generated by the generator to simulate the ith category of real samples as much as possible. In one embodiment, the output of the discriminator is the probability that the sample input thereto belongs to the true sample, and accordingly, the output probability of the discriminator can be directly taken as the probability that the corresponding sample belongs to the true sample. In another embodiment, the output of the discriminator is the probability that the sample input thereto belongs to a false sample, and accordingly, the value obtained by subtracting the output probability of the discriminator from 1 is used as the probability that the corresponding sample belongs to a true sample. Accordingly, in step S350, the i-th discriminator is trained with the objective of decreasing the first probability and increasing the second probability.
Specifically, the discriminant training loss of the ith discriminator is determined according to the first probability and the second probability, and the model parameters in the ith discriminator are then adjusted by using the discriminant training loss. In one embodiment, the discriminant training loss may be determined by:

L_{D_i} = - E_{x ~ P_i}[ log D_i(x) ] - E_{x' ~ G_i}[ log(1 - D_i(x')) ]

wherein D_i represents the discriminant function corresponding to the ith discriminator, x and x' respectively represent the first real sample and the first simulation sample, D_i(x) and D_i(x') respectively represent the second probability and the first probability, x ~ P_i represents that x conforms to the distribution of real samples of the ith category, x' ~ G_i represents that x' conforms to the distribution of simulation samples for the ith category, and E represents taking the expected value.
It should be noted that the discriminant training loss may also be determined by calculating the Wasserstein distance, which is not described in detail herein.
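For illustration, a single discriminator update consistent with the training goal above might look like the following sketch (function and variable names are assumptions; `generator` and `d_i` follow the earlier module sketch, and the cross-entropy form of the loss is used rather than the Wasserstein variant):

```python
import torch

def discriminator_step(d_i, generator, opt_d, noise, class_vec, real_batch):
    """One update of the ith discriminator: raise p(real), lower p(fake)."""
    fake_batch = generator(noise, class_vec).detach()   # first simulation samples
    p_fake = d_i(fake_batch)                            # first probability
    p_real = d_i(real_batch)                            # second probability
    eps = 1e-8
    # minimizing this loss increases p_real and decreases p_fake
    loss_d = -(torch.log(p_real + eps) + torch.log(1.0 - p_fake + eps)).mean()
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_d.item()
```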
In this way, the training of the ith discriminator can be realized, and by analogy, each of the N discriminators can be trained.
On the other hand, the generator needs to be trained. It should be noted that the purpose of the generator is to fool the discriminator as much as possible, so that the discrimination network discriminates the analog sample output by the generator as a real sample. Therefore, the discriminator and the generator mutually confront each other, and parameters are continuously adjusted, so that the discriminator can not finally judge whether the simulation sample output by the generator is real or not. Further, it is desirable that the first simulated sample generated to simulate the real sample of the ith category be classified by the classifier into other categories than the ith category as much as possible. Accordingly, in step S360, the first analog sample is input into the classifier, and a third probability that the first analog sample belongs to the ith class is obtained. Then, in step S370, the generator is trained with a goal of increasing the first probability and decreasing the third probability.
Specifically, the generation training loss of the generator is determined according to the first probability and the third probability, and the model parameters in the generator are adjusted by using the generation training loss. In one embodiment, the generation loss may be determined by:

L_G = - E_{z ~ P_z}[ log D_i(G(z, c_i)) ] + E_{z ~ P_z}[ log F_i(G(z, c_i)) ]

wherein z represents the first noise vector, c_i represents the ith category vector, G represents the generation function corresponding to the generator, G(z, c_i) represents the first simulation sample, D_i represents the discriminant function corresponding to the ith discriminator, D_i(G(z, c_i)) represents the first probability, z ~ P_z represents that z conforms to the spatial distribution of the noise space, F represents the classification function corresponding to the classifier, i represents the ith category, F_i(G(z, c_i)) represents the third probability that the first simulation sample is classified into the ith category, and E represents taking the expected value.
It should be noted that other forms of loss functions may be used to determine the generated training loss.
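Correspondingly, a sketch of one generator update (names are assumptions; `classifier` is assumed to be the frozen pre-trained classifier returning class probabilities):

```python
import torch

def generator_step(generator, d_i, classifier, opt_g, noise, class_vec, i):
    """One generator update: raise the first probability, lower the third probability."""
    fake_batch = generator(noise, class_vec)        # first simulation samples
    p_fake = d_i(fake_batch)                        # first probability
    p_class_i = classifier(fake_batch)[:, i]        # third probability
    eps = 1e-8
    # minimizing this loss increases p_fake and decreases p_class_i
    loss_g = -torch.log(p_fake + eps).mean() + torch.log(p_class_i + eps).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item()
```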
Therefore, by repeating the above steps S310 to S370, multiple rounds of iterative training of the discriminators and the generator in the countermeasure generation network can be implemented. Further, the generator trained after multiple rounds of iterative training can be used to generate a target confrontation sample that simulates real samples of a target class but is predicted by the classifier as other classes. Specifically, in one embodiment, after step S370, the method further includes: obtaining a second noise vector, and obtaining a target class vector corresponding to a target class, the target class belonging to the N classes; and inputting the second noise vector and the target class vector into the trained generator together to obtain the target confrontation sample. In this manner, generation of a confrontation sample having a specified real category may be achieved.
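A minimal sketch of this target-sample generation step, with assumed names (`trained_generator`) and dimensions:

```python
import torch
import torch.nn.functional as F

NOISE_DIM, N = 64, 3
target_class = 2                                          # the specified target category

second_noise_vector = torch.randn(NOISE_DIM)
target_class_vector = F.one_hot(torch.tensor(target_class), num_classes=N).float()

with torch.no_grad():
    target_adv_sample = trained_generator(second_noise_vector, target_class_vector)
# Expectation: the sample resembles real samples of the target class,
# yet the classifier predicts some other class for it.
```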
In addition, for the training of the discriminators and the generator, an end-to-end training method may be used in which the discriminators and the generator are both adjusted in each training iteration; alternatively, the model parameters in the generator may be fixed first and the discriminator trained multiple times, then the model parameters in the discriminator fixed and the generator trained, and this process repeated, so that multiple rounds of iterative training are completed. Moreover, the number of iterations may be a manually set hyper-parameter, or the training may be iterated until the model converges without presetting the number of iterations.
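As an illustration of the second (alternating) schedule, the sketch below reuses the `discriminator_step` and `generator_step` helpers and the `generator`, `discriminators` and `classifier` objects from the earlier sketches, together with assumed optimizers and a placeholder sampler of real samples; the numbers of rounds and discriminator steps are arbitrary.

```python
import torch

ROUNDS, D_STEPS, BATCH, NOISE_DIM, N, SAMPLE_DIM = 100, 5, 32, 64, 3, 128

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = [torch.optim.Adam(d.parameters(), lr=1e-4) for d in discriminators]

def sample_real(i, n):
    # placeholder: draw n real samples of category i from the training data
    return torch.randn(n, SAMPLE_DIM)

for _ in range(ROUNDS):
    i = torch.randint(N, (1,)).item()                         # pick a category i
    class_vec = torch.nn.functional.one_hot(
        torch.full((BATCH,), i), num_classes=N).float()
    for _ in range(D_STEPS):                                  # generator fixed, train D_i
        noise = torch.randn(BATCH, NOISE_DIM)
        discriminator_step(discriminators[i], generator, opt_d[i],
                           noise, class_vec, sample_real(i, BATCH))
    noise = torch.randn(BATCH, NOISE_DIM)                     # D_i fixed, train generator
    generator_step(generator, discriminators[i], classifier, opt_g,
                   noise, class_vec, i)
```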
In the above embodiments, the first probability and the second probability both refer to the probability that the corresponding sample belongs to the true sample. In other embodiments, the first probability and the second probability may also refer to the probability that the corresponding sample belongs to a false sample, and it should be understood that such embodiments are also covered by the protection scope of the foregoing claims.
In summary, by adopting the method for generating the countermeasure samples by using the countermeasure generating network provided by the embodiments of the present specification, it is possible to generate the countermeasure samples having the specified real category but predicted as other categories by the classifier. Moreover, by utilizing the generator after training, a large quantity of high-quality confrontation samples can be efficiently and quickly generated.
Corresponding to the generation method, the embodiment of the specification also discloses a generation device. In particular, fig. 5 shows a schematic structural diagram of an apparatus for generating countermeasure samples using a countermeasure generation network according to an embodiment, wherein the countermeasure generation network includes a pre-trained classifier for performing classification tasks of N classes for a business object; the countermeasure generation network further includes a generator and N discriminators corresponding to the N categories, where N is a positive integer greater than 1.
The above apparatus may be implemented by any processing platform with computing power, server cluster, etc., as shown in fig. 5, the apparatus 500 includes:
a noise vector acquisition unit 510 configured to acquire a first noise vector; a category vector acquisition unit 520 configured to acquire an ith category vector corresponding to an ith category, where i is a positive integer not greater than N; a simulation sample generating unit 530, configured to input the first noise vector and the ith category vector into the generator together, so as to obtain a first simulation sample corresponding to the ith category real sample; a simulated sample discriminating unit 540 configured to input the first simulated sample into the ith discriminator to obtain a first probability that the first simulated sample belongs to a real sample under the ith category; a real sample distinguishing unit 550, configured to input the obtained first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample in the ith category; a discriminator training unit 560 configured to train the ith discriminator with a target of decreasing the first probability and increasing the second probability; a simulated sample classification unit 570 configured to input the first simulated sample into the classifier to obtain a third probability that the first simulated sample belongs to the ith class; a generator training unit 580 configured to train the generator with a goal of increasing the first probability and decreasing the third probability, the trained generator being used for generating a target confrontation sample that simulates a target class real sample but is predicted by the classifier as other classes.
In one embodiment, the noise vector obtaining unit 510 is specifically configured to: and randomly sampling the noise space which accords with the Gaussian distribution to obtain the first noise vector.
In one embodiment, the category vector obtaining unit 520 is specifically configured to: acquiring N category labels, and carrying out one-hot coding on the N category labels to correspondingly obtain N one-hot coding vectors; treating the N unique hot coded vectors as N class vectors, including the ith class vector.
In one embodiment, the analog sample generating unit 530 is specifically configured to: splicing the first noise vector and the ith category vector to obtain a spliced vector, and inputting the spliced vector into the generator to obtain the first analog sample; or, summing the first noise vector and the ith category vector to obtain a summed vector, and inputting the summed vector into the generator to obtain the first analog sample.
In one embodiment, the business object is text and the generator is a Recurrent Neural Network (RNN); the analog sample generating unit 530 is specifically configured to: performing fusion processing on the first noise vector and the ith category vector to obtain a fusion vector which is used as an initial state vector of a hidden layer in the RNN; taking wildcards for text characters as initial input to the RNN network, resulting in the first simulation sample.
In one embodiment, the business object is text or a picture or audio, and the trained generator is used for generating a text countermeasure sample or a picture countermeasure sample or an audio countermeasure sample.
In one embodiment, the apparatus 500 further comprises a confrontation sample generation unit 590 configured to: obtaining a second noise vector, and obtaining a target class vector corresponding to a target class, the target class belonging to the N classes; and inputting the second noise vector and the target category vector into the trained generator together to obtain the target confrontation sample.
In summary, by adopting the apparatus for generating a confrontation sample by using a confrontation generation network provided by the embodiments of the present specification, it is possible to generate a confrontation sample having a specified true category but predicted by a classifier as another category. Moreover, by utilizing the generator after training, a large quantity of high-quality confrontation samples can be efficiently and quickly generated.
As above, according to an embodiment of yet another aspect, there is also provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 3.
There is also provided, according to an embodiment of yet another aspect, a computing device comprising a memory having stored therein executable code, and a processor that, when executing the executable code, implements the method described in connection with fig. 3.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments disclosed herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above-mentioned embodiments, objects, technical solutions and advantages of the embodiments disclosed in the present specification are further described in detail, it should be understood that the above-mentioned embodiments are only specific embodiments of the embodiments disclosed in the present specification, and do not limit the scope of the embodiments disclosed in the present specification, and any modifications, equivalents, improvements and the like made on the basis of the technical solutions of the embodiments disclosed in the present specification should be included in the scope of the embodiments disclosed in the present specification.
Claims (16)
1. A method of generating a confrontation sample using a confrontation generating network comprising a pre-trained classifier for performing N classes of classification tasks on a business object; the countermeasure generation network further includes a generator and N discriminators corresponding to the N categories, where N is a positive integer greater than 1; the method comprises the following steps:
obtaining a first noise vector and obtaining an ith category vector corresponding to an ith category, wherein i is a positive integer not greater than N;
inputting the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample;
inputting the first simulation sample into an ith discriminator to obtain a first probability that the first simulation sample belongs to a real sample under the ith category;
inputting the obtained first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample under the ith category;
training the ith discriminator with the aim of decreasing the first probability and increasing the second probability;
inputting the first simulation sample into the classifier to obtain a third probability that the first simulation sample belongs to the ith category;
training the generator with the aim of increasing the first probability and decreasing the third probability, wherein the trained generator is used for generating target confrontation samples which simulate real samples of target classes but are predicted as other classes by the classifier.
2. The method of claim 1, wherein obtaining a first noise vector comprises:
and randomly sampling a noise space conforming to Gaussian distribution to obtain the first noise vector.
3. The method of claim 1, wherein obtaining an ith class vector corresponding to the ith class comprises:
acquiring N category labels, and performing one-hot coding on the N category labels to correspondingly obtain N one-hot coding vectors;
taking the N one-hot coding vectors as N category vectors, including the ith category vector.
4. The method of claim 1, wherein inputting the first noise vector and the ith class vector together into the generator comprises:
splicing the first noise vector and the ith category vector to obtain a spliced vector, and inputting the spliced vector into the generator; or,
and summing the first noise vector and the ith category vector to obtain a summed vector, and inputting the summed vector into the generator.
5. The method of claim 1, wherein the business object is text and the generator is a Recurrent Neural Network (RNN); wherein, inputting the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample, comprising:
performing fusion processing on the first noise vector and the ith category vector to obtain a fusion vector which is used as an initial state vector of a hidden layer in the RNN;
taking wildcards for text characters as initial input of the RNN network to obtain the first simulation sample.
6. The method of claim 1, wherein the business object is text or a picture or audio, and the trained generator is configured to generate a text confrontation sample or a picture confrontation sample or an audio confrontation sample.
7. The method of claim 1, wherein after training the generator, the method further comprises:
obtaining a second noise vector, and obtaining a target class vector corresponding to a target class, the target class belonging to the N classes;
and inputting the second noise vector and the target category vector into the trained generator together to obtain the target confrontation sample.
8. An apparatus for generating countermeasure samples using an countermeasure generation network, the countermeasure generation network including a pre-trained classifier for performing N classes of classification tasks for a business object; the countermeasure generation network further includes a generator and N discriminators corresponding to the N categories, where N is a positive integer greater than 1; the device comprises:
a noise vector acquisition unit configured to acquire a first noise vector;
a category vector acquisition unit configured to acquire an ith category vector corresponding to an ith category, where i is a positive integer not greater than N;
the simulation sample generating unit is configured to input the first noise vector and the ith category vector into the generator together to obtain a first simulation sample corresponding to the ith category real sample;
the analog sample distinguishing unit is configured to input the first analog sample into an ith discriminator to obtain a first probability that the first analog sample belongs to a real sample under an ith category;
the real sample distinguishing unit is configured to input the acquired first real sample belonging to the ith category into the ith discriminator to obtain a second probability that the first real sample is a real sample under the ith category;
a discriminator training unit configured to train the i-th discriminator with a target of decreasing the first probability and increasing the second probability;
the analog sample classification unit is configured to input the first analog sample into the classifier to obtain a third probability that the first analog sample belongs to the ith class;
and a generator training unit configured to train the generator aiming at increasing the first probability and decreasing the third probability, wherein the trained generator is used for generating a target confrontation sample which simulates a target class real sample but is predicted as other classes by the classifier.
9. The apparatus according to claim 8, wherein the noise vector obtaining unit is specifically configured to:
and randomly sampling a noise space conforming to Gaussian distribution to obtain the first noise vector.
10. The apparatus according to claim 8, wherein the category vector obtaining unit is specifically configured to:
acquiring N category labels, and performing one-hot coding on the N category labels to correspondingly obtain N one-hot coding vectors;
taking the N one-hot coding vectors as N category vectors, including the ith category vector.
11. The apparatus of claim 8, wherein the analog sample generation unit is specifically configured to:
splicing the first noise vector and the ith category vector to obtain a spliced vector, and inputting the spliced vector into the generator to obtain the first analog sample; or,
and summing the first noise vector and the ith category vector to obtain a summed vector, and inputting the summed vector into the generator to obtain the first analog sample.
12. The apparatus of claim 8, wherein the business object is text and the generator is a Recurrent Neural Network (RNN); the analog sample generation unit is specifically configured to:
performing fusion processing on the first noise vector and the ith category vector to obtain a fusion vector which is used as an initial state vector of a hidden layer in the RNN;
taking wildcards for text characters as initial input of the RNN network to obtain the first simulation sample.
13. The apparatus of claim 8, wherein the business object is text or a picture or audio, and the trained generator is configured to generate a text confrontation sample or a picture confrontation sample or an audio confrontation sample.
14. The apparatus of claim 8, wherein the apparatus further comprises a challenge sample generation unit configured to:
obtaining a second noise vector, and obtaining a target class vector corresponding to a target class, the target class belonging to the N classes;
and inputting the second noise vector and the target category vector into the trained generator together to obtain the target confrontation sample.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed in a computer, causes the computer to perform the method of any of claims 1-7.
16. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that when executed by the processor implements the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010329630.9A CN111241291B (en) | 2020-04-24 | 2020-04-24 | Method and device for generating countermeasure sample by utilizing countermeasure generation network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010329630.9A CN111241291B (en) | 2020-04-24 | 2020-04-24 | Method and device for generating countermeasure sample by utilizing countermeasure generation network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111241291A CN111241291A (en) | 2020-06-05 |
CN111241291B true CN111241291B (en) | 2023-01-03 |
Family
ID=70873601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010329630.9A Active CN111241291B (en) | 2020-04-24 | 2020-04-24 | Method and device for generating countermeasure sample by utilizing countermeasure generation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111241291B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111753091B (en) * | 2020-06-30 | 2024-09-03 | 北京小米松果电子有限公司 | Classification method, training device, training equipment and training storage medium for classification model |
CN112464548B (en) * | 2020-07-06 | 2021-05-14 | 中国人民解放军军事科学院评估论证研究中心 | Dynamic allocation device for countermeasure unit |
CN112069795B (en) * | 2020-08-28 | 2023-05-30 | 平安科技(深圳)有限公司 | Corpus detection method, device, equipment and medium based on mask language model |
CN112085279B (en) * | 2020-09-11 | 2022-09-06 | 支付宝(杭州)信息技术有限公司 | Method and device for training interactive prediction model and predicting interactive event |
CN112181952B (en) * | 2020-11-30 | 2021-12-14 | 中国电力科学研究院有限公司 | Method, system, device and storage medium for constructing data model |
CN112966112B (en) * | 2021-03-25 | 2023-08-08 | 支付宝(杭州)信息技术有限公司 | Text classification model training and text classification method and device based on countermeasure learning |
CN113159315A (en) * | 2021-04-06 | 2021-07-23 | 华为技术有限公司 | Neural network training method, data processing method and related equipment |
CN113220553B (en) * | 2021-05-13 | 2022-06-17 | 支付宝(杭州)信息技术有限公司 | Method and device for evaluating performance of text prediction model |
CN113204974B (en) * | 2021-05-14 | 2022-06-17 | 清华大学 | Method, device and equipment for generating confrontation text and storage medium |
CN113988908A (en) * | 2021-10-14 | 2022-01-28 | 同盾科技有限公司 | Marketing crowd delivery method and device, electronic equipment and storage medium |
CN114782670A (en) * | 2022-05-11 | 2022-07-22 | 中航信移动科技有限公司 | Multi-mode sensitive information identification method, equipment and medium |
TWI844284B (en) * | 2023-02-24 | 2024-06-01 | 國立中山大學 | Method and electrical device for training cross-domain classifier |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108520282B (en) * | 2018-04-13 | 2020-04-03 | 湘潭大学 | Triple-GAN-based classification method |
CN109697694B (en) * | 2018-12-07 | 2023-04-07 | 山东科技大学 | Method for generating high-resolution picture based on multi-head attention mechanism |
CN109948660A (en) * | 2019-02-26 | 2019-06-28 | 长沙理工大学 | A kind of image classification method improving subsidiary classification device GAN |
CN110647927A (en) * | 2019-09-18 | 2020-01-03 | 长沙理工大学 | ACGAN-based image semi-supervised classification algorithm |
CN111027439B (en) * | 2019-12-03 | 2022-07-29 | 西北工业大学 | SAR target recognition method for generating confrontation network based on auxiliary classification |
- 2020-04-24: CN application CN202010329630.9A filed; granted as patent CN111241291B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111241291A (en) | 2020-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111241291B (en) | Method and device for generating countermeasure sample by utilizing countermeasure generation network | |
CN110580500B (en) | Character interaction-oriented network weight generation few-sample image classification method | |
CN108228686B (en) | Method and device for realizing image-text matching and electronic equipment | |
CN111401558B (en) | Data processing model training method, data processing device and electronic equipment | |
US20230041233A1 (en) | Image recognition method and apparatus, computing device, and computer-readable storage medium | |
CN111667066B (en) | Training method and device of network model, character recognition method and device and electronic equipment | |
CN111241287A (en) | Training method and device for generating generation model of confrontation text | |
CN112395979B (en) | Image-based health state identification method, device, equipment and storage medium | |
CN111209878A (en) | Cross-age face recognition method and device | |
Singh et al. | Steganalysis of digital images using deep fractal network | |
CN111522908A (en) | Multi-label text classification method based on BiGRU and attention mechanism | |
CN110246198B (en) | Method and device for generating character selection verification code, electronic equipment and storage medium | |
CN110234018A (en) | Multimedia content description generation method, training method, device, equipment and medium | |
Ra et al. | DeepAnti-PhishNet: Applying deep neural networks for phishing email detection | |
CN112784929A (en) | Small sample image classification method and device based on double-element group expansion | |
CN110674370A (en) | Domain name identification method and device, storage medium and electronic equipment | |
Jami et al. | Biometric template protection through adversarial learning | |
CN113435264A (en) | Face recognition attack resisting method and device based on black box substitution model searching | |
CN112364198A (en) | Cross-modal Hash retrieval method, terminal device and storage medium | |
CN117558270B (en) | Voice recognition method and device and keyword detection model training method and device | |
Nida et al. | Video augmentation technique for human action recognition using genetic algorithm | |
CN111062019A (en) | User attack detection method and device and electronic equipment | |
Wang et al. | Latent coreset sampling based data-free continual learning | |
CN111488950B (en) | Classification model information output method and device | |
CN111598075B (en) | Picture generation method, device and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |