CN111914866A - Method and device for training an adversarial image detection model, and storage medium - Google Patents

Method and device for training an adversarial image detection model, and storage medium

Info

Publication number
CN111914866A
Authority
CN
China
Prior art keywords
training
model
images
image
adversarial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910387970.4A
Other languages
Chinese (zh)
Inventor
赵晨旭
石海林
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority claimed from CN201910387970.4A
Publication of CN111914866A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting


Abstract

The invention discloses a training method and apparatus for an adversarial image detection model, and a storage medium, and relates to the technical field of image processing. The training method for the adversarial image detection model comprises the following steps: initializing an adversarial image detection model with a pre-trained meta-model, wherein the parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is less than a preset value; acquiring training images that include a new category of adversarial images, wherein the number of images in the new category is less than the preset value; and training the adversarial image detection model with the training images, so that adversarial and non-adversarial images can be identified with the trained model. The training method converges rapidly, so that new categories of adversarial images can be identified quickly and accurately.

Description

Method and device for training an adversarial image detection model, and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a training method and apparatus for an adversarial image detection model, and a storage medium.
Background
Adversarial examples are input samples formed by deliberately adding subtle perturbations to data, causing a model to give an erroneous output with high confidence. When such a subtle perturbation is added to an image to mislead a model, the resulting sample may be referred to as an adversarial image. A human observer does not perceive the difference between the original image and the adversarial image, yet the model makes completely different predictions for the two.
If adversarial examples are mixed in with normal images, they pose a threat to system security. For example, in a face recognition system, an adversarial image generated by attacking an image of user U1's face is likely to be recognized by the neural network as user U2, even though to the human eye the image still shows user U1, causing a loss to the attacked user.
To avoid this as much as possible, an adversarial example detection model may be integrated into the computer vision system to detect this potential threat. In the related art, a large number of samples are used to train models such as deep learning models and Long Short-Term Memory (LSTM) models to identify adversarial examples.
Disclosure of Invention
The inventors have realized that the related art can only detect known categories of adversarial images. When a new category of adversarial image appears, a large number of images of the new category are required to retrain the model, so it is difficult to respond quickly to a new image attack pattern. If the model is trained with only a small number of adversarial images of the new category, its recognition accuracy is low.
The embodiments of the invention aim to solve the following technical problem: how to quickly and accurately identify new categories of adversarial examples.
According to a first aspect of some embodiments of the present invention, there is provided a training method for an adversarial image detection model, comprising: initializing an adversarial image detection model with a pre-trained meta-model, wherein the parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is less than a preset value; acquiring training images that include a new category of adversarial images, wherein the number of images in the new category is less than the preset value; and training the adversarial image detection model with the training images, so that adversarial and non-adversarial images can be identified with the trained model.
In some embodiments, the meta-model and the internal network models have the same network structure.
In some embodiments, the training images further include at least one of: non-adversarial images, or adversarial images of categories other than the new category.
In some embodiments, the preset value is no greater than 10.
In some embodiments, the training method further comprises: identifying adversarial and non-adversarial images with the trained adversarial image detection model.
In some embodiments, the training method further comprises: obtaining a plurality of task sets, wherein each task set comprises a support set and a query set, the support set comprises one or more categories of adversarial images, and the number of images in each category is less than a preset value; initializing an internal network model corresponding to each task set with the parameters of the meta-model; pre-training each corresponding internal network model with the support set in its task set; predicting the query set in the corresponding task set with each pre-trained internal network model; and updating the parameters of the meta-model according to the prediction results of the pre-trained internal network models.
In some embodiments, the internal network models are used to identify the category of an adversarial image.
In some embodiments, updating the parameters of the meta-model according to the prediction results of the pre-trained internal network models comprises: determining gradient values for the parameters of each internal network model according to its prediction results after pre-training; and updating each corresponding parameter of the meta-model with the sum, over the internal network models, of the gradient values for that parameter.
In some embodiments, each task set is generated as follows: selecting a preset number of categories from an adversarial image dataset comprising a plurality of categories of adversarial images; from each selected category, selecting a preset number of adversarial images to add to the support set and a plurality of adversarial images to add to the query set; and taking the support set and the query set together as the task set.
In some embodiments, a preset number of adversarial images from each selected category and a preset number of non-adversarial images are added to the support set, and a plurality of adversarial images from each selected category and a plurality of non-adversarial images are added to the query set.
According to a second aspect of some embodiments of the present invention, there is provided a training apparatus for an adversarial image detection model, comprising: an initialization module configured to initialize an adversarial image detection model with a pre-trained meta-model, wherein the parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is smaller than a preset value; a training image acquisition module configured to acquire training images including a new category of adversarial images, wherein the number of images in the new category is smaller than the preset value; and a training module configured to train the adversarial image detection model with the training images, so as to identify adversarial and non-adversarial images with the trained model.
According to a third aspect of some embodiments of the present invention, there is provided a training apparatus for an adversarial image detection model, comprising: a memory; and a processor coupled to the memory, the processor configured to perform any of the aforementioned training methods for an adversarial image detection model based on instructions stored in the memory.
According to a fourth aspect of some embodiments of the present invention, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements any of the aforementioned training methods for an adversarial image detection model.
Some embodiments of the invention described above have the following advantages or beneficial effects: because its parameters are determined based on a large number of pre-trained internal network models, the meta-model carries the prior knowledge that each internal network model learned from a small number of samples. Therefore, when a new category of adversarial example appears, the adversarial image detection model initialized with the meta-model converges rapidly when trained with a small number of samples, so that the new category of adversarial images can be identified quickly and accurately.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow diagram of a method of training an adversarial image detection model according to some embodiments of the invention.
FIG. 2 is a flow diagram of a task set generation method according to some embodiments of the invention.
FIG. 3 is a flow diagram illustrating a meta-model pre-training method according to some embodiments of the invention.
FIG. 4 is a flow diagram of a testing method according to some embodiments of the invention.
FIG. 5 is a schematic diagram of an apparatus for training an adversarial image detection model according to some embodiments of the invention.
FIG. 6 is a schematic structural diagram of an apparatus for training an adversarial image detection model according to another embodiment of the present invention.
FIG. 7 is a schematic diagram of an apparatus for training an adversarial image detection model according to some embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or its uses. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
FIG. 1 is a flow diagram of a method of training an adversarial image detection model according to some embodiments of the invention. As shown in FIG. 1, the training method of this embodiment includes steps S102 to S108.
In step S102, the adversarial image detection model is initialized with the pre-trained meta-model. For example, the network structure and parameters of the meta-model may be copied to obtain the initial adversarial image detection model.
The parameters of the meta-model are determined based on a plurality of pre-trained internal network models. The internal network models and the meta-model may be implemented as neural network models. In some embodiments, the meta-model and the internal network models have the same network structure.
Adversarial images are categorized by attack method. For example, adversarial images generated by the Fast Gradient Sign Method (FGSM), the Carlini and Wagner (C&W) attack, and Projected Gradient Descent (PGD) belong to three different categories. Each internal network model is pre-trained with one or more categories of adversarial images, with fewer images per category than a preset value. Thus, each internal network model learns from a small number of adversarial images of each category, i.e., it learns each attack pattern from a small number of samples.
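As a concrete illustration of how one such attack category works, the following is a minimal NumPy sketch of the FGSM idea; the toy linear "model" and all values are hypothetical and not taken from the patent:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon):
    """Fast Gradient Sign Method: add epsilon * sign(input gradient of the
    loss), moving every pixel in the direction that increases the loss,
    then clip back to the valid pixel range."""
    return np.clip(x + epsilon * np.sign(grad_wrt_x), 0.0, 1.0)

# Toy example: for a linear score w.x with loss -w.x, the input gradient is -w.
x = np.full(4, 0.5)                     # a tiny 4-"pixel" image
w = np.array([1.0, -2.0, 0.0, 3.0])
adv = fgsm_perturb(x, -w, epsilon=0.1)  # adv = [0.4, 0.6, 0.5, 0.4]
```

C&W and PGD differ in how the perturbation is computed (an optimization-based attack and an iterated, projected variant of this step, respectively), which is why the patent treats them as distinct categories.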
In step S104, training images including a new category of adversarial images are acquired, where the number of images in the new category is smaller than the preset value. The new category differs from the categories of adversarial images used in the pre-training process.
In some embodiments, in addition to the new category of adversarial images, the training images include at least one of: non-adversarial images (also called clean images), or adversarial images of categories other than the new one. Training with known categories of adversarial images or with non-adversarial images alongside the new category can further improve the accuracy of the model's predictions.
In some embodiments, the preset value is no greater than 10. For example, the training images may include 5 adversarial images of the new category.
In step S106, the adversarial image detection model is trained with the training images.
In step S108, adversarial and non-adversarial images are identified with the trained adversarial image detection model.
During training, every adversarial image, whatever its category, may be labeled as a negative sample, while non-adversarial images may be labeled as positive samples. Thus, in the prediction phase, the adversarial image detection model need not be concerned with the specific category of an adversarial image.
Because its parameters are determined based on a large number of pre-trained internal network models, the meta-model carries the prior knowledge that each internal network model learned from a small number of samples. Therefore, when a new category of adversarial example appears, the detection model initialized with the meta-model converges rapidly when trained with a small number of samples, so that the new category of adversarial images can be identified quickly and accurately.
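Steps S102 through S108 can be sketched as follows. This is a hypothetical stand-in that replaces the neural network with a logistic-regression detector over precomputed features, purely to show the initialize-then-fine-tune flow; the feature dimensions, learning rate, and synthetic data are all illustrative assumptions (label 1 = adversarial, 0 = clean):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(meta_params, X, y, lr=0.5, steps=100):
    """S102: initialize the detection model with a *copy* of the meta-model's
    parameters; S106: run gradient steps on the small training set."""
    w = meta_params.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)                 # predicted P(adversarial)
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

def detect(w, X):
    """S108: 1 = adversarial (negative sample), 0 = clean (positive sample)."""
    return (sigmoid(X @ w) > 0.5).astype(int)

# Fewer than 10 images of the new attack type, per the preset value.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.1, (5, 3)),    # 5 new-category adversarial images
               rng.normal(-1.0, 0.1, (5, 3))])  # 5 clean images
y = np.array([1] * 5 + [0] * 5)
meta = np.zeros(3)                              # stands in for the pre-trained meta-model
w = fine_tune(meta, X, y)
preds = detect(w, X)
```

Note that the meta-parameters are copied, not shared: fine-tuning the detection model leaves the meta-model itself unchanged.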
In the stage of pre-training the meta-model, the pre-training process of each internal network model can be regarded as a few-shot learning scenario, and a corresponding task set is designed for each scenario as its pre-training data. An embodiment of the task set generation method of the present invention is described below with reference to FIG. 2.
FIG. 2 is a flow diagram of a task set generation method according to some embodiments of the invention. As shown in fig. 2, the task set generating method of this embodiment includes steps S202 to S206.
In step S202, a preset number of categories are selected from an adversarial image dataset that includes a plurality of categories of adversarial images.
In step S204, a preset number of adversarial images are selected from each selected category and added to the support set, and a plurality of adversarial images are selected from each selected category and added to the query set.
In some embodiments, a preset number of non-adversarial images may also be selected and added to the support set, and a plurality of non-adversarial images added to the query set. That is, the non-adversarial images can also be regarded as one "category".
In this way, the task set contains both positive and negative samples, which yields a better training effect.
In step S206, the support set and the query set together are taken as the task set.
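Steps S202 through S206 can be sketched as follows; the dataset layout and parameter names (`n_way`, `k_shot`, `q_queries`) are assumptions for illustration, with categories A–G standing for attack types and P for clean images as in the example below:

```python
import random

def make_task_set(dataset, n_way, k_shot, q_queries, rng):
    """S202: sample n_way categories; S204: k_shot images per category for the
    support set and q_queries per category for the query set; S206: the pair
    (support, query) is the task set."""
    categories = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in categories:
        images = rng.sample(dataset[c], k_shot + q_queries)  # no overlap
        support += [(img, c) for img in images[:k_shot]]
        query += [(img, c) for img in images[k_shot:]]
    return support, query

# Hypothetical dataset: A..G are attack types, P is the clean-image "category".
dataset = {c: [f"{c}{i}" for i in range(20)] for c in "ABCDEFGP"}
rng = random.Random(0)
support, query = make_task_set(dataset, n_way=2, k_shot=5, q_queries=10, rng=rng)
```

Sampling the support and query images from one draw without replacement keeps the two sets disjoint, so the query set genuinely measures generalization within the task.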
Assume that the adversarial images in the dataset belong to categories A to G. Each task set includes either two categories of adversarial images, or one category of adversarial images plus non-adversarial images. The non-adversarial images are denoted "category P". Table 1 shows an exemplary composition of task sets.
TABLE 1
[Table 1 is provided as an image in the original publication.]
The number of categories in each task set may also be greater than two as needed; this is not described further here.
In some embodiments, the images in a task set may carry both a coarse category label and a fine category label. The coarse label indicates whether an image is adversarial or non-adversarial. The fine label of an adversarial image indicates its category, i.e., the attack method that generated it, while the fine label of a non-adversarial image indicates its content category, such as cat, dog, or tiger. Pre-training may be performed with the fine labels and testing with the coarse labels.
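The two-level labeling can be sketched as a simple mapping; the label names and the 0/1 encoding are hypothetical, since the patent does not fix a concrete encoding:

```python
# Fine labels: attack types for adversarial images, content classes for clean
# images. Coarse label: 1 if adversarial, 0 otherwise (assumed encoding).
ATTACK_TYPES = {"FGSM", "CW", "PGD"}  # hypothetical fine-category names

def to_coarse(fine_label):
    """Collapse a fine category label to the adversarial/non-adversarial bit."""
    return 1 if fine_label in ATTACK_TYPES else 0
```

This is the conversion used when a model trained on fine labels is evaluated against coarse labels, as described for the testing phase below.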
With the method of this embodiment, a plurality of task sets, each containing a small number of adversarial images, can be generated from the adversarial image dataset, so that the data used during pre-training resembles the deployment environment.
Other ways of generating a set of tasks may be used by those skilled in the art, as desired.
An embodiment of the pre-trained meta-model is described below with reference to fig. 3.
FIG. 3 is a flow diagram illustrating a meta-model pre-training method according to some embodiments of the invention. As shown in FIG. 3, the meta-model pre-training method of this embodiment includes steps S302 to S310.
In step S302, a plurality of task sets are obtained, where each task set includes a support set and a query set, the support set includes one or more categories of adversarial images, and the number of images in each category is smaller than a preset value.
In step S304, the internal network model corresponding to each task set is initialized by using the parameters of the meta-model.
In some embodiments, the initial values of the parameters of the meta-model may be determined by a deep learning method.
In step S306, the support set in each task set is used to pre-train the corresponding internal network model.
In some embodiments, for each internal network model, the corresponding support set may be trained on as one batch of data. For example, the images in the support set are input into the internal network model to obtain predictions; then the gradient of the internal network model's loss function with respect to its parameters is computed from the difference between the predictions and the images' label values, and the parameters are updated according to the gradient.
In some embodiments, the internal network models are used to identify the category of an adversarial image. That is, an internal network model predicts the fine category label of its input image, while the trained meta-model and the adversarial image detection model predict coarse labels, i.e., only whether an image is adversarial. The model is thus adjusted with fine-grained predictions during training and gives coarse-grained predictions in use, which can further improve prediction accuracy.
In step S308, each pre-trained internal network model is used to predict the query set in the corresponding task set.
In step S310, the parameters of the meta-model are updated according to the prediction result of each pre-trained internal network model.
In some embodiments, gradient values for the parameters of each internal network model are determined according to its predictions after pre-training, and each corresponding parameter of the meta-model is updated with the sum, over the internal network models, of the gradient values for that parameter.
In some embodiments, alternatively, the sum of the differences between each internal network model's predictions and the label values may be computed, these sums accumulated over all internal network models, the gradient of the meta-model's loss function with respect to its parameters determined from the accumulated result, and the meta-model's parameters updated according to that gradient. That is, the losses of the internal network models serve as the loss of the meta-model when adjusting its parameters.
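Steps S302 through S310 resemble first-order model-agnostic meta-learning. The following is a minimal sketch with a linear least-squares model standing in for the neural networks; the hyperparameters and synthetic tasks are illustrative assumptions, not values from the patent:

```python
import numpy as np

def inner_train(theta, sX, sy, lr=0.1, steps=5):
    """S304/S306: copy the meta-parameters and take a few gradient steps on
    the support set (least-squares loss on a linear model)."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * sX.T @ (sX @ w - sy) / len(sy)
    return w

def meta_update(theta, tasks, meta_lr=0.05):
    """S308/S310: evaluate each adapted model on its query set, sum the
    per-task query-set gradients, and step the meta-parameters (first-order
    approximation)."""
    total_grad = np.zeros_like(theta)
    for sX, sy, qX, qy in tasks:
        w = inner_train(theta, sX, sy)
        total_grad += qX.T @ (qX @ w - qy) / len(qy)
    return theta - meta_lr * total_grad

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])

def make_task():
    X = rng.normal(size=(10, 2))
    y = X @ w_true
    return X[:5], y[:5], X[5:], y[5:]   # 5-shot support, 5-image query

theta = np.zeros(2)
for _ in range(200):                     # repeat S302-S310 until convergence
    theta = meta_update(theta, [make_task() for _ in range(4)])
```

Because every task here shares the same underlying solution, the meta-parameters converge to an initialization from which each inner model adapts in a handful of steps, which is the effect the patent relies on for few-shot detection.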
Steps S302 to S310 may be repeated as many times as necessary until a preset convergence condition is reached. As required, the trained meta-model can then be tested to judge whether its prediction accuracy meets a preset requirement. An embodiment of the test method of the present invention is described below with reference to FIG. 4.
FIG. 4 is a flow diagram of a testing method according to some embodiments of the invention. As shown in fig. 4, the test method of this embodiment includes steps S402 to S410.
In step S402, a plurality of task sets for testing are obtained, where each such task set comprises a support set and a query set, the support set comprises one or more categories of adversarial images, and the number of images in each category is less than a preset value.
In step S404, the internal network model corresponding to each task set for testing is initialized by using the parameters of the pre-trained meta-model.
In step S406, the corresponding internal network model is pre-trained using the support set in each task set for testing.
In step S408, each pre-trained internal network model is used to predict the query set in the corresponding task set for testing.
In the testing phase of some embodiments, an internal network model may predict the coarse label of an image directly, i.e., whether the image is adversarial; alternatively, it may output predictions over fine labels, which are then converted to coarse labels.
In step S410, a score for each internal network model is determined from the difference between its predictions and the label values of the query set in its task set for testing, and a total score is computed. The prediction accuracy can then be judged by comparing the total score with a preset threshold.
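The scoring in step S410 can be sketched as follows. The patent does not specify how the total score is formed, so taking each task's score as its query-set accuracy and averaging against the threshold is an assumption:

```python
def task_score(predictions, labels):
    """Per-task score: fraction of query images whose coarse label
    (1 = adversarial, 0 = clean) is predicted correctly."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def meets_requirement(task_results, threshold=0.9):
    """Compare the mean per-task score against a preset threshold."""
    scores = [task_score(p, l) for p, l in task_results]
    return sum(scores) / len(scores) >= threshold

# One perfect task and one task with a missed adversarial image.
ok = meets_requirement([([1, 1, 0, 0], [1, 1, 0, 0]),
                        ([1, 0, 1, 0], [1, 0, 1, 1])])
```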
It should be clear to those skilled in the art that the terms "pre-training" and "training" above merely describe the order of the training processes: the internal network models and the meta-model are trained first, and the adversarial image detection model is then trained based on the meta-model's training results. The literal names "pre-training" and "training" do not limit the inventive solution. If desired, those skilled in the art may instead call the determination of the meta-model's initial parameters by deep learning "pre-training", the training of the internal network models and the meta-model "training", and the training of the adversarial image detection model "fine-tuning".
An embodiment of the training apparatus for an adversarial image detection model of the present invention is described below with reference to FIG. 5.
FIG. 5 is a schematic diagram of an apparatus for training an adversarial image detection model according to some embodiments of the invention. As shown in FIG. 5, the training apparatus 50 of this embodiment includes: an initialization module 510 configured to initialize an adversarial image detection model with a pre-trained meta-model, wherein the parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is smaller than a preset value; a training image acquisition module 520 configured to acquire training images including a new category of adversarial images, wherein the number of images in the new category is less than the preset value; and a training module 530 configured to train the adversarial image detection model with the training images, so as to identify adversarial and non-adversarial images with the trained model.
In some embodiments, the meta-model and the internal network model have the same network structure.
In some embodiments, the training images further include at least one of: non-adversarial images, or adversarial images of categories other than the new category.
In some embodiments, the preset value is no greater than 10.
In some embodiments, the training apparatus 50 further comprises a recognition module 540 configured to identify adversarial and non-adversarial images with the trained adversarial image detection model.
In some embodiments, the training apparatus 50 further comprises a pre-training module 550 configured to obtain a plurality of task sets, wherein each task set comprises a support set and a query set, the support set comprises one or more categories of adversarial images, and the number of images in each category is less than a preset value; initialize an internal network model corresponding to each task set with the parameters of the meta-model; pre-train each corresponding internal network model with the support set in its task set; predict the query set in the corresponding task set with each pre-trained internal network model; and update the parameters of the meta-model according to the prediction results of the pre-trained internal network models.
In some embodiments, the internal network models are used to identify the category of an adversarial image.
In some embodiments, the pre-training module 550 is further configured to determine gradient values for the parameters of each internal network model according to its predictions after pre-training, and to update each corresponding parameter of the meta-model with the sum, over the internal network models, of the gradient values for that parameter.
In some embodiments, the pre-training module 550 is further configured to select a preset number of categories from an adversarial image dataset comprising a plurality of categories of adversarial images; select, from each selected category, a preset number of adversarial images to add to the support set and a plurality of adversarial images to add to the query set; and take the support set and the query set together as the task set.
In some embodiments, the pre-training module 550 is further configured to select a preset number of adversarial images from each selected category and a preset number of non-adversarial images to add to the support set, and to select a plurality of adversarial images from each selected category and a plurality of non-adversarial images to add to the query set.
FIG. 6 is a schematic structural diagram of an apparatus for training an adversarial image detection model according to another embodiment of the present invention. As shown in FIG. 6, the training apparatus 60 of this embodiment includes a memory 610 and a processor 620 coupled to the memory 610, the processor 620 being configured to execute the method of training an adversarial image detection model in any of the foregoing embodiments based on instructions stored in the memory 610.
The memory 610 may include, for example, system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, and other programs.
FIG. 7 is a schematic structural diagram of an apparatus for training an adversarial image detection model according to some embodiments of the present invention. As shown in FIG. 7, the training apparatus 70 of this embodiment includes the memory 710 and the processor 720, and may further include an input/output interface 730, a network interface 740, a storage interface 750, and the like. These interfaces 730, 740, 750, as well as the memory 710 and the processor 720, may be connected, for example, by a bus 760. The input/output interface 730 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 740 provides a connection interface for various networking devices. The storage interface 750 provides a connection interface for external storage devices such as an SD card or a USB flash drive.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements any one of the aforementioned methods of training an adversarial image detection model.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (13)

1. A method of training an adversarial image detection model, comprising:
initializing an adversarial image detection model with a pre-trained meta-model, wherein parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is less than a preset value;
acquiring training images comprising adversarial images of a new category, wherein the number of adversarial images of the new category is less than the preset value; and
training the adversarial image detection model with the training images, so that adversarial images and non-adversarial images can be identified with the trained adversarial image detection model.
2. The training method of claim 1, wherein the meta-model and the internal network models have the same network structure.
3. The training method of claim 1, wherein the training images further comprise at least one of non-adversarial images and adversarial images outside the new category.
4. The training method of claim 1, wherein the preset value is not greater than 10.
5. The training method of claim 1, further comprising:
identifying adversarial images and non-adversarial images with the trained adversarial image detection model.
6. The training method of any of claims 1-5, further comprising:
obtaining a plurality of task sets, wherein each task set comprises a support set and a query set, the support set comprises one or more categories of adversarial images, and the number of images in each category is less than a preset value;
initializing an internal network model corresponding to each task set with the parameters of the meta-model;
pre-training each internal network model on the support set of its corresponding task set;
predicting on the query set of the corresponding task set with each pre-trained internal network model; and
updating the parameters of the meta-model according to the prediction results of the pre-trained internal network models.
7. The training method of claim 6, wherein the internal network models are used to identify the category of an adversarial image.
8. The training method of claim 6, wherein updating the parameters of the meta-model according to the prediction results of the pre-trained internal network models comprises:
determining, from the prediction result of each pre-trained internal network model, gradient values for the parameters of that internal network model; and
updating each parameter of the meta-model using the sum, over the internal network models, of the gradient values corresponding to that parameter.
9. The training method of claim 6, wherein each task set is generated by:
selecting a preset number of categories from an adversarial image dataset comprising multiple categories of adversarial images;
adding a preset number of adversarial images from each selected category to the support set, and adding several adversarial images from each selected category to the query set; and
taking the support set and the query set together as a task set.
10. The training method of claim 9, wherein a preset number of adversarial images from each selected category and a preset number of non-adversarial images are added to the support set, and several adversarial images from each selected category and several non-adversarial images are added to the query set.
11. An apparatus for training an adversarial image detection model, comprising:
an initialization module configured to initialize an adversarial image detection model with a pre-trained meta-model, wherein parameters of the meta-model are determined based on a plurality of pre-trained internal network models, each internal network model is pre-trained with one or more categories of adversarial images, and the number of images in each category is less than a preset value;
a training image acquisition module configured to acquire training images comprising adversarial images of a new category, wherein the number of adversarial images of the new category is less than the preset value; and
a training module configured to train the adversarial image detection model with the training images, so that adversarial images and non-adversarial images can be identified with the trained adversarial image detection model.
12. An apparatus for training an adversarial image detection model, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of training an adversarial image detection model of any of claims 1-10 based on instructions stored in the memory.
13. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of training an adversarial image detection model of any of claims 1-10.
CN201910387970.4A 2019-05-10 2019-05-10 Method and device for training antagonistic image detection model and storage medium Pending CN111914866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910387970.4A CN111914866A (en) 2019-05-10 2019-05-10 Method and device for training antagonistic image detection model and storage medium


Publications (1)

Publication Number Publication Date
CN111914866A true CN111914866A (en) 2020-11-10

Family

ID=73242869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910387970.4A Pending CN111914866A (en) 2019-05-10 2019-05-10 Method and device for training antagonistic image detection model and storage medium

Country Status (1)

Country Link
CN (1) CN111914866A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination