CN111723864A - Method and device for performing countermeasure training by using internet pictures based on active learning - Google Patents

Method and device for performing countermeasure training by using internet pictures based on active learning

Info

Publication number
CN111723864A
CN111723864A (application CN202010566122.2A)
Authority
CN
China
Prior art keywords
sample
samples
training
model
selecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010566122.2A
Other languages
Chinese (zh)
Inventor
蒋正晖
陶文源
姚雯
夏宇峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202010566122.2A priority Critical patent/CN111723864A/en
Publication of CN111723864A publication Critical patent/CN111723864A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a device for performing adversarial training with internet pictures based on active learning. The method comprises the following steps: selecting part of the picture samples to be trained using an active learning strategy, with the trained model acting as a discriminator that judges and selects the candidate picture samples; selecting picture sample data from the original picture data set, generating adversarial samples from the selected data, and then performing adversarial training; improving the adversarial-sample selection strategy by measuring sample uncertainty with the predicted probability, so that the strategy efficiently selects representative samples from which adversarial samples are generated for learning; and using the maximum classification probability value as the index of sample uncertainty. The device comprises a memory and a processor that implements the method steps when executing a stored program.

Description

Method and device for performing countermeasure training by using internet pictures based on active learning
Technical Field
The invention relates to the field of robust machine learning, and in particular to a method and a device for adversarial training with internet pictures based on active learning, which selectively use adversarial samples to realize adversarial training and improve the training rate.
Background
With the continuous improvement of hardware, massive data can now be processed, and deep learning has developed rapidly. Deep network models are inspired by biological neural networks and can learn different data features from large numbers of samples in order to perform tasks such as classification and regression. They are widely used in computer vision and natural language processing, and in fields such as image recognition their performance far exceeds that of traditional classifiers. However, neural networks are complex and not robust: small perturbations of the data samples can produce unpredictable results. For example, in image recognition a sample containing noise can easily cause the model to misclassify it, while the model assigns high confidence to the wrong judgment. This noise is often imperceptible to the naked human eye.
In order to study model robustness, researchers proposed the concept of adversarial samples. An adversarial sample is generated by adding a carefully designed artificial perturbation to an original sample; compared with normal samples containing, e.g., Gaussian noise, it is far more challenging for a deep model. Related studies have demonstrated that adversarial samples genuinely exist in real life.
The current mainstream method for improving model robustness is adversarial training. After initial model training finishes, adversarial samples are used as a training set so that the model learns their data features. This greatly improves the classification accuracy on adversarial samples.
Take an AI (artificial intelligence) picture auditing system that intelligently identifies sensitive pictures as an example. In the current information era, information is easily acquired, but large amounts of junk content, such as violent or pornographic pictures, are mixed into the mass of information; if such pictures spread on the internet they cause serious social harm, so auditing the related pictures is urgently needed. Even an excess of advertisement pictures seriously degrades the network experience. Manual auditing cannot meet the requirement for real-time, fast review, so an AI system can play a great role here. At the same time, the existence of adversarial samples gives an attacker a way to evade the AI system's audit: with only slight modifications, an illegal picture can pass the audit and then spread easily on the internet, causing severe social harm.
To defend against such malicious attacks, the robustness of the system must be improved. To this end, researchers proposed adversarial training, a means of training the model with adversarial samples. Many studies show that model robustness improves greatly after adversarial training. However, it has a serious drawback: the time cost is too high. If adversarial training were introduced into the AI intelligent picture auditing system, the time cost would be prohibitive given the large amount of data, and training on the whole data set is impractical, which seriously hinders the application of adversarial training in real scenarios. A strategy is therefore needed to reduce the time cost of adversarial training.
Disclosure of Invention
The invention provides a method and a device for performing adversarial training with internet pictures based on active learning. The invention selects samples directly from the original data set so that the generated adversarial samples are the most representative, avoids generating adversarial samples from redundant data, reduces unnecessary computation, and significantly lowers the time cost, making it better suited for real application scenarios. Details follow:
a method of adversarial training with internet pictures based on active learning, the method comprising:
selecting a batch of original picture samples from a data pool using an active learning strategy, and selecting from them the picture samples used to generate adversarial samples;
predicting the selected pictures with the trained convolutional neural network model, and outputting the prediction probability information of each sample;
using the trained convolutional neural network model as a discriminator: if a sample's maximum probability value is greater than a threshold, the adversarial sample generated from it is used to train the model;
calculating the maximum probability value Pmax of each sample: after the data are input into the convolutional neural network model, the prediction probability values of the corresponding categories are output;
setting a maximum probability threshold β, selecting the samples whose maximum probability value is greater than β, and setting a sample selection ratio κ;
if the number of samples with Pmax > β is greater than m·κ, sorting them by Pmax and selecting the largest m·κ samples;
attacking the original samples to generate adversarial samples, and performing adversarial training, during which the weight parameters of the model are continuously updated and used to measure sample uncertainty in the next round;
and repeating the above steps for a set number of epochs to finally obtain a robust model.
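The selection rule in the steps above (threshold β, ratio κ, sort by Pmax) can be sketched in plain Python. This is an illustrative assumption, not code from the patent: the function names, the softmax helper, and the logit inputs are all invented for demonstration.

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities (numerically stable)."""
    top = max(logits)
    exps = [math.exp(z - top) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def select_uncertain(batch_logits, beta, kappa):
    """Keep samples whose maximum class probability Pmax exceeds beta;
    if more than m*kappa qualify, keep the m*kappa with the largest Pmax."""
    m = len(batch_logits)
    scored = []
    for i, logits in enumerate(batch_logits):
        p_max = max(softmax(logits))
        if p_max > beta:
            scored.append((p_max, i))
    quota = int(m * kappa)
    if len(scored) > quota:
        scored.sort(reverse=True)       # sort by Pmax, largest first
        scored = scored[:quota]
    return [i for _, i in scored]       # indices of the selected samples
```

With β = 0.3 and κ = 0.5 on a batch of four 4-class samples, the two samples with the largest Pmax above the threshold are returned.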
Wherein attacking the original sample to generate the adversarial sample specifically comprises:
a) selecting an initial clean sample and denoting the convolutional neural network model as f(x; ω);
b) calculating the loss function value of the convolutional neural network model with respect to the sample;
c) calculating the gradient of the current loss value with respect to the sample and obtaining its direction with the function sign(); this gradient direction indicates how to perturb the original picture, to which a perturbation of size α is added, in order to generate the adversarial sample, where y0 is the category corresponding to the true sample;
d) iterating the above process t times to generate the final adversarial sample.
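As a hedged illustration of steps a)–d), the sketch below runs the sign-gradient iteration on a toy logistic model with an analytic input gradient. The toy model, the weights, and the parameter values are assumptions for demonstration only, not the patent's convolutional network:

```python
import math

def sign(v):
    # elementwise sign: -1, 0, or +1
    return [(x > 0) - (x < 0) for x in v]

def loss_grad(x, w, y):
    """Gradient w.r.t. the input x of the cross-entropy loss of a toy
    logistic model p = sigmoid(w . x) with true label y in {0, 1}."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))
    return [(p - y) * wi for wi in w]     # analytic dL/dx_i

def iterative_attack(x0, w, y, alpha, sigma, t):
    """Steps a)-d): start from the clean sample x0, take t sign-gradient
    steps of size alpha, and keep the perturbation within +/- sigma."""
    x = list(x0)
    for _ in range(t):
        g = sign(loss_grad(x, w, y))
        x = [xi + alpha * gi for xi, gi in zip(x, g)]
        x = [min(max(xi, x0i - sigma), x0i + sigma)   # project back
             for xi, x0i in zip(x, x0)]
    return x
```

With α = σ/n as the text suggests, each coordinate drifts in the loss-increasing direction until the σ-ball boundary stops it.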
An apparatus for adversarial training using internet pictures based on active learning, the apparatus comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method steps of claim 1 when executing the program.
The beneficial effects of the technical scheme provided by the invention are:
1. The method selects samples directly from the original data set, so the generated adversarial samples are the most representative, and avoiding redundant data reduces unnecessary computation;
2. The invention significantly reduces time cost, makes model training more efficient, and is better suited for popularization to real application scenarios;
3. Compared with the traditional method, on models such as VGG and ResNet the active-learning-based adversarial training shortens training time by more than 30% on average, greatly accelerating adversarial training while keeping test accuracy consistent with using all data samples;
4. The method suits not only mainstream deep learning models but also more complex models such as Bayesian neural networks. Researchers have proposed that convolutional Bayesian neural networks can significantly improve model robustness, but their drawback is obvious: they must sample network weights in both the training and prediction stages, making adversarial training extremely expensive on top of that; on such models this method still raises the training rate markedly, shortening the time by more than 35%.
Drawings
FIG. 1 is a flow chart of the method for adversarial training with internet pictures based on active learning;
FIG. 2 is a schematic diagram of the method for adversarial training with internet pictures based on active learning;
FIG. 3 is a schematic diagram of the overall structure of the ResNet neural network in the AI intelligent picture auditing system;
FIG. 4 is a schematic diagram of the basic convolution module structure in the ResNet neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Traditional adversarial training generally uses a large number of adversarial samples, and generating them usually relies on algorithms such as PGD and C&W that iterate many times, so the time cost is high. Meanwhile, models in practical applications are usually very large, which further obstructs adversarial training. Research shows that the adversarial samples actually contain a large amount of data redundancy: even without using the whole data set, model robustness can be greatly improved, to a degree comparable with using the complete data set. Many of the computations in generating adversarial samples are therefore unnecessary. The high time cost keeps traditional adversarial training out of real scenarios, so a strategy to reduce it is urgently needed.
Example 1
A method for adversarial training using internet pictures based on active learning, see FIG. 1 and FIG. 2, the method comprising:
1. Select part of the picture samples to be trained using an active learning strategy, with the trained model acting as a discriminator that judges and selects the candidate picture samples.
This processing prevents redundant data from generating adversarial samples and reduces unnecessary computation, so training speed improves significantly while the robustness gain remains high. Dependence on data volume is also reduced: facing massive picture data, the model learns autonomously and efficiently, becoming more intelligent while the cost of manual labeling drops greatly.
2. Select picture sample data directly from the original picture data set, generate adversarial samples from the selected data, and then perform adversarial training.
The method does not use the whole picture data set to generate adversarial samples; it selects partial data from the original set, which significantly reduces unnecessary computation and time cost.
3. Improve the sample selection strategy by measuring sample uncertainty with the predicted probability; the strategy efficiently selects representative samples from which adversarial samples are generated for learning.
The method is simple and easy to implement and avoids overly complicated logic for judging sample value; compared with randomly selecting samples, it achieves a higher robustness gain with the same number of samples.
4. The invention suits current mainstream models such as VGG and ResNet as well as Bayesian neural network models, giving it good universality for popularization to real scenarios.
5. The method uses the maximum classification probability value as the index of sample uncertainty. The specific selection strategy is:
1) select a batch of original picture data from the picture data pool;
2) predict the selected pictures with the model produced by the current round of adversarial training, and output the prediction probability information of each picture sample;
3) calculate the maximum probability value Pmax of each picture sample;
4) set a maximum probability threshold β and a picture sample selection ratio κ, and select the picture samples whose maximum probability value is greater than β; if the number N of picture samples with Pmax > β is greater than m·κ, sort the N samples by Pmax and select the m·κ picture samples with the largest Pmax. Here Pmax is the maximum probability over the sample's classes; each predicted class has a corresponding probability value at prediction time.
During training, the number of samples with Pmax greater than the threshold β changes constantly; although it is never 0, it may exceed the set selection number m·κ. To cope with this, the samples are sorted by Pmax so as to further filter out the picture samples with relatively small uncertainty.
5) attack the original picture samples selected in step 4) to generate adversarial samples;
6) perform adversarial training, continuously updating the model's weight parameters, which are used for the next round of sample uncertainty measurement;
7) repeat the above steps for a set number of epochs to finally obtain a robust model.
Example 2
The scheme of Example 1 is further described below with reference to FIGS. 1 to 4, in detail:
First, take the massive picture data on the internet as the data pool; according to the picture identification and audit task of the AI intelligent picture auditing system, select various illegal pictures as the picture training set, and take the selected picture data set as the initial training set X.
the AI intelligent picture auditing system is well known to those skilled in the art, and is not described in detail in the embodiments of the present invention.
Second, construct the convolutional neural network model of the AI intelligent picture auditing system and train it with the selected initial training set X to obtain the initial convolutional neural network model; the model structure is shown in FIG. 3.
The AI intelligent picture auditing system is based on the mainstream ResNet model, which attracted great attention after it was proposed and made training of deep models practical. A ResNet-50 model is built with the following overall structure:
ResNet-50 is a deep model. Data obtained from the network is preprocessed and randomly cropped to 300 × 300 as input. An RGB picture has three channels and is represented in the computer as a 300 × 300 × 3 matrix; a black-and-white picture is single-channel, represented as a 300 × 300 × 1 matrix. In this scenario 3-channel pictures are used as the data set. For other application scenarios the preprocessing steps, such as randomly cropping to a specified picture size or horizontally flipping the picture, can be set manually; this amounts to data augmentation and increases the model's generalization performance. The model first applies convolution and pooling to the data, then feeds it into 16 convolution modules for learning.
ResNet-50 is formed by splicing together 16 network modules; the internal details of each module are shown in FIG. 4 and consist of three convolutional layers, where parameters such as (1 × 1) are the convolution kernel sizes. A convolution kernel is a three-dimensional matrix whose dimensions are height, width, and number of channels. To extract multiple sets of features from the same picture, the number of convolution kernels can be increased to obtain multiple feature maps. The number of kernels is set manually and is typically a power of 2, such as 64, 128, 256, or 512; the maximum in this model is 512. Each module contains a skip connection, which makes full use of the feature information in the previous layer's output X and also avoids the degradation phenomenon caused by an overly deep network. Finally, after a maximum pooling layer, the data is fed into a fully connected layer that classifies the violation category of the current data.
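The skip connection inside each module can be sketched abstractly. This is a minimal assumption-laden illustration of the residual idea only: `residual_block` and the toy transform are invented names, and the real convolutions, batch normalization, and activations of ResNet-50 are omitted.

```python
def residual_block(x, transform):
    """Core idea of a ResNet module: the output adds the input features x
    back onto the transformed features, so the block learns a residual."""
    return [t + xi for t, xi in zip(transform(x), x)]
```

For instance, with a toy transform that halves each feature, `residual_block([1.0, 2.0], lambda v: [0.5 * f for f in v])` returns `[1.5, 3.0]`: the input passes through unchanged and the transform only has to model the correction on top of it, which is what keeps very deep networks trainable.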
Third, to improve adversarial robustness, adversarial samples generated from the original picture data set must be used for adversarial training.
In the stage of generating adversarial samples, also called the attack stage, the adversarial sample x* is generated according to steps (a–d) using an iterative algorithm based on the model's gradient information. With the mainstream attack algorithm PGD, the perturbation upper limit σ, the perturbation step α, and the iteration number n are set manually, with α < σ; the values can be specified arbitrarily, and usually α = σ/n is taken. Multiple iterations favor generating an adversarial sample x* with an excellent attack effect.
The parameters and the meanings of the symbols are as follows:
• f(x; ω): the convolutional neural network with weight ω, applied to input sample x
• x0, x*: the initial clean sample and the final adversarial sample
• y0: the category corresponding to the true sample
• σ, α, n: the perturbation upper limit, the perturbation step, and the iteration number
• γ: the projection operation that keeps pixel values in the valid range
• β, κ, m: the maximum probability threshold, the sample selection ratio, and the batch size
• Pmax: the maximum class probability value of a sample
The iterative update formula is expressed as follows:
x_{t+1} = γ( x_t + α · sign( ∇x L(f(x_t; ω), y0) ) )
the projection operation is recorded as γ, taking image classification as an example, an image is represented in a computer as a multi-dimensional matrix, each point in the matrix, that is, each pixel value, ranges from 0 to 255, and if a certain pixel value exceeds the range in the iteration process, the certain pixel value is projected into the data intervalThe rule is that the value exceeding the lower bound is directly 0, and the value exceeding the upper bound is 255, so as to ensure the validity of the picture pixel. If-2 is 0, then 260 is 255. x is the number oft+1xtThe intermediate values are generated when the final confrontation sample is obtained, and are the confrontation samples obtained in the t +1 th iteration and the t th iteration in the iteration process respectively. At the initial stage, there is x0=x0,x0Denotes the initial sample (sample without added perturbation), y0The category corresponding to the real sample. The specific steps for generating the confrontation sample algorithm are as follows:
a) select an initial clean sample and denote the convolutional neural network model as f(x; ω), where ω is the network weight and x is the input sample;
b) calculate the value of the loss function L(f(x; ω), y0) of the convolutional neural network model with respect to the sample;
c) calculate the gradient ∇x L of the current loss value with respect to the sample and obtain its direction sign(∇x L) with the function sign(); this gradient direction indicates how each pixel value of the picture should change in order to generate the adversarial sample, where y0 is the category corresponding to the true sample;
d) iterate the above process t times to generate the final adversarial sample.
Unlike the traditional approach, this method first selects representative picture samples from the original data set and only then generates adversarial samples, effectively preventing redundant data samples from generating adversarial samples and significantly shortening adversarial training time. The idea comes from active learning, which helps training remain efficient in the face of big data. Active learning needs a discriminator and an evaluation index: to evaluate whether a sample should be selected, the currently trained model serves as the discriminator, and a sample's value is measured by the model's degree of uncertainty about it.
The method uses the maximum classification probability value to measure the model's uncertainty about a sample. The specific selection strategy is:
1. Select a batch of original picture samples from the data pool; the total number of picture samples is denoted m, e.g. 500. The size of m is set manually; suitable picture samples are then chosen from this data to generate adversarial samples for adversarial training.
2. Predict the selected pictures with the trained convolutional neural network model and output the prediction probability information of each sample. The adversarially trained model acts as a discriminator to determine the uncertainty of the raw data from step 1, measured by the maximum probability value. If the maximum probability value is greater than the threshold, the model is confident of the sample's class, so the adversarial sample generated after adding a perturbation is more likely to mislead the model's judgment; such a sample carries more information, i.e. higher uncertainty, and the adversarial sample generated from it has higher training value for the model.
3. Calculate the maximum probability value Pmax of each sample. As in the prediction stage, after data is input into the convolutional neural network model, the prediction probability values of the corresponding categories are output; these are the evaluation index in active learning.
4. Set a maximum probability threshold β, e.g. β = 0.3, and select the samples whose maximum probability value is greater than β. If the number of samples with Pmax > β is greater than m·κ, sort them by Pmax and select the largest m·κ samples. During training, the number of samples with Pmax greater than the threshold β changes constantly; although it is never 0, it may exceed the set selection number m·κ, so the samples are sorted by Pmax to further filter out those with relatively small uncertainty.
5. Attack the original samples selected in step 4 to generate adversarial samples.
6. Perform adversarial training. The weight parameters of the model are continuously updated and used to measure sample uncertainty in the next round.
7. Repeat the above steps for a set number of epochs to finally obtain a robust model.
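The seven steps above can be sketched as one loop. The callables `select`, `attack`, and `train_step` are hypothetical stand-ins for the selection strategy, the PGD-style attack, and the model weight update; none of these names come from the patent:

```python
def adversarial_active_training(pool_batches, select, attack, train_step, epochs):
    """Each epoch: draw batches from the data pool, keep only the samples
    the selection strategy marks as uncertain, craft adversarial versions
    of them, and train the model on those adversarial samples."""
    for _ in range(epochs):
        for batch in pool_batches:
            chosen = select(batch)               # active-learning selection
            adversarial = [attack(x) for x in chosen]
            train_step(adversarial)              # updates model weights
```

Because `select` discards most of each batch before `attack` runs, the expensive iterative attack is applied only to the retained samples, which is where the claimed time savings come from.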
In the embodiments of the present invention, unless specifically stated, the models of the individual devices are not limited, as long as a device can perform the above functions.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the above embodiments of the present invention are provided for description only and do not represent the merits of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A method for adversarial training using internet pictures based on active learning, the method comprising:
selecting a batch of original picture samples from a data pool using an active learning strategy, and selecting from them the picture samples used to generate adversarial samples;
predicting the selected pictures with the adversarially trained convolutional neural network model, and outputting the prediction probability information of each sample;
using the adversarially trained convolutional neural network model as a discriminator, and if a sample's maximum probability value is greater than a threshold, training the model with the adversarial sample generated from that sample;
calculating the maximum probability value Pmax of each sample: after the data are input into the convolutional neural network model, the prediction probability values of the corresponding categories are output;
setting a maximum probability threshold β, selecting the samples whose maximum probability value is greater than β, and setting a sample selection ratio κ;
if the number of samples with Pmax > β is greater than m·κ, sorting them by Pmax and selecting the largest m·κ samples;
attacking the original samples to generate adversarial samples, and performing adversarial training, during which the weight parameters of the model are continuously updated and used to measure sample uncertainty in the next round;
and repeating the above steps for a set number of epochs to finally obtain a robust model.
2. The method for adversarial training using internet pictures based on active learning according to claim 1, wherein attacking the original sample to generate the adversarial sample specifically comprises:
a) selecting an initial clean sample and denoting the convolutional neural network model as f(x; ω);
b) calculating the loss function value of the convolutional neural network model with respect to the sample;
c) calculating the gradient of the current loss value with respect to the sample and obtaining its direction with the function sign(); this gradient direction indicates how to perturb the original picture, to which a perturbation of size α is added, in order to generate the adversarial sample, where y0 is the category corresponding to the true sample;
d) iterating the above process t times to generate the final adversarial sample.
3. An apparatus for adversarial training using internet pictures based on active learning, the apparatus comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the method steps of claim 1 are implemented when the processor executes the program.
CN202010566122.2A 2020-06-19 2020-06-19 Method and device for performing countermeasure training by using internet pictures based on active learning Pending CN111723864A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010566122.2A CN111723864A (en) 2020-06-19 2020-06-19 Method and device for performing countermeasure training by using internet pictures based on active learning

Publications (1)

Publication Number Publication Date
CN111723864A true CN111723864A (en) 2020-09-29

Family

ID=72567723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010566122.2A Pending CN111723864A (en) 2020-06-19 2020-06-19 Method and device for performing countermeasure training by using internet pictures based on active learning

Country Status (1)

Country Link
CN (1) CN111723864A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388795A (en) * 2018-02-11 2018-08-10 浙江工业大学 A kind of confrontation attack defense method based on LSTM detectors
CN108960080A (en) * 2018-06-14 2018-12-07 浙江工业大学 Based on Initiative Defense image to the face identification method of attack resistance
CN110276377A (en) * 2019-05-17 2019-09-24 杭州电子科技大学 A kind of confrontation sample generating method based on Bayes's optimization
CN110610208A (en) * 2019-09-11 2019-12-24 湖南大学 Active safety increment data training method
WO2020052583A1 (en) * 2018-09-14 2020-03-19 Huawei Technologies Co., Ltd. Iterative generation of adversarial scenarios
CN111144274A (en) * 2019-12-24 2020-05-12 南京航空航天大学 Social image privacy protection method and device facing YOLO detector

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750535A (en) * 2021-01-30 2021-05-04 云知声智能科技股份有限公司 Method and system for measuring model uncertainty
CN112750535B (en) * 2021-01-30 2024-03-12 云知声智能科技股份有限公司 Method and system for measuring model uncertainty

Similar Documents

Publication Publication Date Title
CN106776842B (en) Multimedia data detection method and device
CN106683048B (en) Image super-resolution method and device
CN109325550B (en) No-reference image quality evaluation method based on image entropy
CN112115967B (en) Image increment learning method based on data protection
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
CN111145145B (en) Image surface defect detection method based on MobileNet
Zhao et al. ADRN: Attention-based deep residual network for hyperspectral image denoising
CN106650667A (en) Pedestrian detection method and system based on support vector machine
CN113205103A (en) Lightweight tattoo detection method
CN113627543A (en) Anti-attack detection method
CN116310386A (en) Shallow adaptive enhanced context-based method for detecting small central Net target
CN116309178A (en) Visible light image denoising method based on self-adaptive attention mechanism network
Camacho et al. Convolutional neural network initialization approaches for image manipulation detection
CN114882278A (en) Tire pattern classification method and device based on attention mechanism and transfer learning
Roy et al. Test time adaptation for blind image quality assessment
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
CN111723864A (en) Method and device for performing countermeasure training by using internet pictures based on active learning
CN112818774A (en) Living body detection method and device
CN110738645B (en) 3D image quality detection method based on convolutional neural network
CN109344852A (en) Image-recognizing method and device, analysis instrument and storage medium
CN113487506B (en) Attention denoising-based countermeasure sample defense method, device and system
CN114581470B (en) Image edge detection method based on plant community behaviors
CN115375966A (en) Image countermeasure sample generation method and system based on joint loss function
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device
CN114139655A (en) Distillation type competitive learning target classification system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200929