CN111783890A - Small pixel countermeasure sample defense method for image recognition process - Google Patents


Info

Publication number
CN111783890A
CN111783890A (application CN202010637934.1A)
Authority
CN
China
Prior art keywords
disturbance
sample
training
model
countermeasure
Prior art date
Legal status
Granted
Application number
CN202010637934.1A
Other languages
Chinese (zh)
Other versions
CN111783890B (en
Inventor
牛伟纳
张小松
任仲蔚
丁康一
谢科
张瑾昀
曹蓉
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010637934.1A priority Critical patent/CN111783890B/en
Publication of CN111783890A publication Critical patent/CN111783890A/en
Application granted granted Critical
Publication of CN111783890B publication Critical patent/CN111783890B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/25 Fusion techniques
    • G06F 18/254 Fusion techniques of classification results, e.g. of results related to same input data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of machine learning and provides a small-pixel adversarial example defense method for the image recognition process, aiming to solve the problem that images camouflaged with small-pixel perturbations are misidentified during image recognition. The method comprises: training on the original sample set O to obtain an un-enhanced classification model; generating an adversarial example for each picture in the training data set to obtain the adversarial example set A; counting the perturbation values and the proportion of each value, and computing the perturbation-value distribution of each adversarial example to obtain a perturbation distribution histogram of the adversarial examples; using this histogram to simulate the perturbation rule of the adversarial example generation algorithm and training a DUNet model to obtain a noise reduction input layer; and splicing the noise reduction input layer onto the un-enhanced small-pixel image classification model to obtain the enhanced model.

Description

Small pixel countermeasure sample defense method for image recognition process
Technical Field
A model defense method against small-pixel adversarial examples, used to defend image recognition models against adversarial examples; the invention belongs to the field of machine learning.
Background
In recent years machine learning has developed rapidly and, as the technology matures, it has been widely applied in fields closely tied to daily life, such as malicious-mail detection, malware detection, image recognition, face recognition, image classification and autonomous driving. Machine learning is gradually permeating everyday life and has become a key technology for raising living standards. However, while machine learning brings great benefits to people's study and life, its algorithms also suffer from many security problems. In early systems that used machine learning to detect network intrusions and malicious mail, an attacker could exploit the detection characteristics of a given model and craft inputs by specific rules to evade detection, seriously compromising the security of machine learning and hindering its deployment. Many further security issues of machine learning remain to be discovered.
In 2014, Christian Szegedy et al. proposed the concept of adversarial examples: a machine learning model is easily misled by carefully crafted, human-designed perturbations that raise the classifier's confidence in a wrong class above that of the correct class, causing a classification error. Szegedy's study showed that many deep learning models, including convolutional neural networks (CNNs), suffer from the adversarial example problem. Goodfellow's work found that models trained on different subsets of the training data set all misclassify the same adversarial examples, indicating that deep learning models are likely to have a large number of blind spots. An article by Anh Nguyen et al. at CVPR 2015 (IEEE Conference on Computer Vision and Pattern Recognition 2015) described samples that appear similar to the original samples yet are misrecognized by the model; many regard this type of data as a defect of deep learning. Goodfellow pointed out that the adversarial example problem exists not only in deep learning but also in traditional machine learning, so studying it contributes to the development of the whole field. Research on adversarial example generation algorithms can therefore examine the adversarial example problem at its root and find the key to solving it. Improving existing model training methods and raising model security and robustness will allow machine learning models to enter people's daily production and life as soon as possible.
It can be seen that current research on artificial intelligence security is still at a relatively basic stage: attack methods abound, while defense methods remain in dispute. Most defenses can only resist a few adversarial example generation algorithms, a good method for enhancing model robustness is hard to find, and defense efficiency is low. Proposing an adversarial example defense method is harder than proposing an adversarial example generation algorithm.
In fact, an adversarial example is generally generated from a normal sample: for image data, perturbations invisible to the human eye are added to a normal sample, and the resulting sample causes an artificial intelligence model to err when processing it. For ease of understanding, the following examples are given without regard to implementation difficulty.
Term: "small pixel" means that the perturbation damages the original image only slightly and does not modify pixel values excessively.
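The "small pixel" notion above can be made concrete with a simple check. A minimal sketch, assuming [0, 1]-normalized pixel values; the 0.1 threshold is an illustrative assumption, not a value fixed by the invention:

```python
import numpy as np

def is_small_pixel(original, perturbed, max_delta=0.1):
    """Illustrative 'small pixel' predicate: the perturbation nowhere
    modifies a pixel by more than max_delta (threshold assumed).

    Both arguments are [0, 1]-normalized pixel arrays of the same shape.
    """
    delta = np.abs(np.asarray(perturbed) - np.asarray(original))
    return float(np.max(delta)) <= max_delta
```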
Disclosure of Invention
In view of the above technical problems, the present invention aims to solve the problem that images camouflaged with perturbations are misidentified during small-pixel image recognition.
In order to achieve the purpose, the invention adopts the following technical scheme:
a defense method for small pixel countermeasure samples in an image recognition process is characterized by comprising the following steps:
s1: training a model by using a small pixel image training data set A to obtain a small pixel image classification model which is not enhanced;
s2: using a small-pixel confrontation sample generation algorithm to generate a confrontation sample for each picture in the training data set, so as to obtain a confrontation sample set A;
s3: counting the disturbance values of all samples in the countermeasure sample set A to obtain a disturbance distribution histogram of the countermeasure samples;
s4: simulating a disturbance rule of a generation algorithm of the countermeasure sample by using a disturbance distribution histogram of the countermeasure sample, and training a DUNet model to obtain a noise reduction input layer;
s5: and splicing the noise reduction input layer and the small pixel image classification model which is not enhanced to obtain an enhanced model.
In the above technical solution, step S4 includes:
step 4-1: for all original samples O in the training data set, use the perturbation-proportion data obtained by the adversarial perturbation feature extraction module to generate random perturbations according to the corresponding weight information, obtaining perturbed samples O′ that follow a certain noise distribution rule;
step 4-2: combine each original sample and its corresponding perturbed sample into a training point pair {O_i, O′_i}, where O_i denotes the i-th sample of the original set O and O′_i denotes the perturbed sample generated for the i-th sample by simulating the perturbation distribution rule of the adversarial examples; O′_i is the training input and O_i the target output;
step 4-3: train a model with the DUNet image restoration structure on the training point pairs {O_i, O′_i}, with O′_i as the training input and O_i as the training label, obtaining the noise reduction network DUNet; the loss function of model training is MAE. Since DUNet performs image restoration, the target output of the model is the original image O_i: by analogy with the MNIST data set, whose labels are the digits 0-9, the labels of DUNet are the original images.
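Steps 4-2 and 4-3 can be sketched as follows; `perturb_sampler` stands in for a noise source that follows the adversarial perturbation histogram (a hypothetical helper, not named in the patent):

```python
import numpy as np

def mae_loss(pred, target):
    """MAE (mean absolute error), the training loss named in step 4-3."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(target))))

def make_training_pairs(originals, perturb_sampler):
    """Step 4-2 sketch: pair each original O_i with a simulated perturbed
    O'_i; O'_i is the training input, O_i the target output.

    perturb_sampler(shape) draws noise following the perturbation
    distribution of the adversarial examples (assumed helper).
    """
    originals = np.asarray(originals)
    noisy = np.clip(originals + perturb_sampler(originals.shape), 0.0, 1.0)
    return noisy, originals
```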
In the above technical solution, step S5 includes:
step 5-1: using the noise reduction network DUNet as an additional network structure, modify the input network layer of the un-enhanced small-pixel image classification model to ensure that the DUNet structure can be connected to the recognition model;
step 5-2: connect the data layer of the un-enhanced small-pixel image classification model to the output of the DUNet noise reduction layer.
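Step S5 amounts to function composition: the enhanced model is the un-enhanced classifier applied to the denoised input. A minimal sketch, where the two callables stand in for the trained DUNet and classification networks:

```python
def enhanced_model(denoiser, classifier):
    """Step S5 sketch: splice the noise reduction input layer (denoiser)
    in front of the un-enhanced classifier, so every input is denoised
    before classification."""
    def model(x):
        return classifier(denoiser(x))
    return model
```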
Because the invention adopts the above technical scheme, it has the following beneficial effects:
First, the algorithm does not depend on gradient information of the neural network, so it is highly practical.
Second, the model to be attacked is treated as a black box; no parameter information inside the model needs to be known.
Third, only the weight information of the adversarial perturbation is used, so the method is simple to implement.
Fourth, a DUNet noise reduction structure removes the adversarial perturbation at the noise reduction input layer, further improving the robustness of the model.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is an exemplary graph of cifar10 data samples;
FIG. 3 is a histogram of noise distribution;
FIG. 4 is a schematic diagram of training point pairs for DUNet;
FIG. 5 is a schematic input/output diagram of a model;
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments.
Examples
S1: in the data preparation stage, certain element data needs to be collected as a training data set.
S2: a data preprocessing stage, which screens useful data in the original data and discards useless information (such as label information removal, only label number reservation and the like), so that the model is easier to train;
s3: a model pre-training stage, wherein a conventional neural network model training method is used in the model pre-training stage, the element data set and the data set label are used as data input and are transmitted to a machine learning framework for supervised learning, and an original model is trained to serve as a model which is not enhanced;
s4: a confrontation sample generation stage, which uses a certain small pixel image confrontation sample generation algorithm (such as C & W, deep pool) to perform mass confrontation sample generation on the training data set.
S5: and a confrontation sample feature extraction stage, wherein the confrontation samples of the small pixel images are subjected to disturbance value statistics to obtain a disturbance distribution histogram of the confrontation samples.
S6: in the noise reduction training stage, a disturbance distribution histogram of a countermeasure sample is used for simulating a disturbance rule of a countermeasure sample generation algorithm, and a DUNet model is trained to obtain a noise reduction input layer, wherein the noise reduction training stage mainly comprises the following steps:
s6-1, using the training point pairs provided by the data generator moduleiO′iAnd training a model of the DUNet image recovery structure, wherein the input of the training is O'iThe training label is Oi
S6-2, the loss function of model training is MAE (mean Absolute error).
S7: a model combination stage, in which the noise reduction input layer and the original model are spliced to obtain an enhanced model, the stage mainly comprises the following steps:
s7-1, the noise reduction training module uses a noise reduction network DUNet obtained by the training of the noise reduction training module as an additional network structure to modify an input network layer of the recognition model and ensure that the DUNet structure can be connected with the recognition model;
s7-2, connecting the data layer of the recognition model with the output of the DUNet noise reduction layer.
Examples
Embodiment: small-object recognition on the cifar10 data set.
Because genetic algorithms have a large search space, the smaller-sized cifar10 data set was chosen here for experimental validation. As shown in FIG. 2, the cifar10 data set contains color images of 10 classes of common objects; each image is 32 × 32, and the whole data set comprises 50000 training images and 10000 test images. It is a common data set for small-object recognition. The cifar10 and cifar100 data sets are part of the larger Tiny Images data set (http://groups.csail.mit.edu/vision/tinyimages) and are well suited to training for small-object recognition.
First, data preparation: to obtain adversarial examples in large batches, the cifar10 data set is selected as training data. The C&W adversarial example generation algorithm is selected as the attack method in the test, and the defense effect of the noise reduction input layer against the C&W algorithm is evaluated.
Second, data preprocessing: each image of the original cifar10 data set is a three-channel color picture of size 32 × 32 × 3, with each pixel represented by an 8-bit unsigned integer. For convenient processing by the machine learning model, every cifar10 sample is normalized before model training, compressing the pixel values from [0, 255] to the range [0.0, 1.0];
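The preprocessing step above is a single division; a sketch:

```python
import numpy as np

def normalize(images_uint8):
    """Compress 8-bit pixel values from [0, 255] to [0.0, 1.0],
    as described for the cifar10 preprocessing stage."""
    return np.asarray(images_uint8).astype(np.float32) / 255.0
```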
thirdly, pre-training the model: the whole cifar10 data set is divided into 50000 training samples and 10000 verification samples, and the identification model uses VGG16, and the model structure is shown in Table 1. Through 200epochs training, the recognition success rate of the recognition model on the verification data set reaches 82.90%.
TABLE 1
Fourth, adversarial example generation stage: this stage uses the C&W adversarial example generation algorithm to generate adversarial examples in bulk against the VGG16 network model. The C&W parameters used in the test are confidence = 0, binary_search_steps = 5, max_iterations = 100 and initial_const = 1e-3; with these parameters the C&W algorithm achieves a 100% attack success rate.
Fifth, adversarial feature extraction stage: the distribution of the perturbation noise of the adversarial examples generated in the fourth stage is statistically analyzed, and the noise is sorted and analyzed, giving the histogram of the noise distribution rule shown in FIG. 3.
Sixth, noise reduction training stage: the DUNet noise reduction input layer is trained in this stage so that the perturbation noise is removed at the noise reduction input layer; the model structure of DUNet is shown in Table 2.
TABLE 2
The noise reduction training stage mainly comprises the following steps:
1. DUNet data preparation: the training data of the DUNet network are perturbed samples generated according to the characteristics of the C&W adversarial examples, and the target data are the original samples.
2. Data generator: perturbed samples similar to C&W adversarial examples are generated according to the perturbation-value distribution rule of the C&W adversarial example generation algorithm; random perturbations are added to the original samples by drawing values according to the corresponding weights of the C&W perturbation distribution histogram. After perturbed samples similar to those of the C&W algorithm are generated, the training point pairs of DUNet are as shown in FIG. 4.
3. Model training: after 500 epochs of training, the DUNet model recovers the perturbed samples well; the input and output of the model are shown in FIG. 5.
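The data-generator step above, drawing perturbation values according to the weights of the C&W perturbation histogram, could be sketched as follows; the bin centers and weights are assumed to come from the feature extraction stage:

```python
import numpy as np

def sample_perturbation(shape, values, weights, rng=None):
    """Draw a random perturbation field whose values follow the
    adversarial perturbation histogram (values: bin centers,
    weights: their measured proportions)."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(weights, dtype=float)
    return rng.choice(np.asarray(values), size=shape, p=p / p.sum())
```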
Seventh, model combination stage: the trained DUNet noise reduction input network is spliced onto the original model to enhance its robustness. Test results show that the defense against five adversarial example generation algorithms (FGSM, IFGSM, JSMA, C&W and DeepFool) improves to different degrees after DUNet noise reduction; the defense effects are shown in Table 3.
TABLE 3
Attack method   Original model   DUNet-enhanced model   Degree of decrease
FGSM            77.50%           77.70%                 -0.20%
IFGSM           82.20%           82.20%                 0.00%
JSMA            98.43%           67.00%                 31.43%
C&W             100.00%          21.10%                 78.90%
DeepFool        93.90%           35.20%                 58.70%
The above are merely representative of the many specific applications of the present invention and do not limit its scope in any way. All technical solutions formed by transformation or equivalent substitution fall within the protection scope of the present invention.

Claims (3)

1. A defense method against small-pixel adversarial examples in the image recognition process, characterized by comprising the following steps:
S1: training a model on the original sample set O of a small-pixel image training data set to obtain an un-enhanced small-pixel image classification model;
S2: using a small-pixel adversarial example generation algorithm to generate an adversarial example for each picture in the training data set, obtaining the adversarial example set A;
S3: counting the perturbation values of all samples in the adversarial example set A and the proportion of each perturbation value, the statistical method being to compute the perturbation-value distribution of each adversarial example and then the average perturbation distribution over the whole adversarial perturbation data set A′, obtaining the perturbation distribution histogram of the adversarial examples;
S4: using the perturbation distribution histogram of the adversarial examples to simulate the perturbation rule of the adversarial example generation algorithm, and training a DUNet model to obtain a noise reduction input layer;
S5: splicing the noise reduction input layer onto the un-enhanced small-pixel image classification model to obtain the enhanced model.
2. The defense method against small-pixel adversarial examples in the image recognition process of claim 1, wherein step S4 comprises:
step 4-1: for the whole original sample set O in the training data set, using the perturbation-proportion data obtained by adversarial perturbation feature extraction, generating random perturbations on the original samples according to the corresponding weight information to obtain a perturbed sample set O′ following a certain noise distribution rule;
step 4-2: combining each original sample and its corresponding perturbed sample into a training point pair {O_i, O′_i}, where O_i denotes the i-th sample of the original set O and O′_i denotes the perturbed sample generated for the i-th sample by simulating the perturbation distribution rule of the adversarial examples; O′_i is the training input and O_i the target output;
step 4-3: training a model with the DUNet image restoration structure on the training point pairs {O_i, O′_i}, with O′_i as the training input and O_i as the training target, obtaining the noise reduction network DUNet; the loss function of model training is MAE.
3. The defense method against small-pixel adversarial examples in the image recognition process of claim 1, wherein step S5 comprises:
step 5-1: using the noise reduction network DUNet as an additional network structure, modifying the input network layer of the un-enhanced small-pixel image classification model and ensuring that the DUNet structure can be connected to the recognition model;
step 5-2: connecting the data layer of the un-enhanced small-pixel image classification model to the output of the DUNet noise reduction layer.
CN202010637934.1A 2020-07-02 2020-07-02 Small pixel countermeasure sample defense method for image recognition process Active CN111783890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010637934.1A CN111783890B (en) 2020-07-02 2020-07-02 Small pixel countermeasure sample defense method for image recognition process

Publications (2)

Publication Number Publication Date
CN111783890A true CN111783890A (en) 2020-10-16
CN111783890B CN111783890B (en) 2022-06-03

Family

ID=72759651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010637934.1A Active CN111783890B (en) 2020-07-02 2020-07-02 Small pixel countermeasure sample defense method for image recognition process

Country Status (1)

Country Link
CN (1) CN111783890B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130110A1 (en) * 2017-11-01 2019-05-02 International Business Machines Corporation Protecting Cognitive Systems from Gradient Based Attacks through the Use of Deceiving Gradients
CN110175611A (en) * 2019-05-24 2019-08-27 浙江工业大学 Defence method and device towards Vehicle License Plate Recognition System black box physical attacks model
CN110222774A (en) * 2019-06-10 2019-09-10 百度在线网络技术(北京)有限公司 Illegal image discrimination method, device, content safety firewall and storage medium
CN110516695A (en) * 2019-07-11 2019-11-29 南京航空航天大学 Confrontation sample generating method and system towards Medical Images Classification
CN110717522A (en) * 2019-09-18 2020-01-21 平安科技(深圳)有限公司 Countermeasure defense method of image classification network and related device
US20200134468A1 (en) * 2018-10-26 2020-04-30 Royal Bank Of Canada System and method for max-margin adversarial training

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FANGZHOU LIAO 等: "Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser", 《2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
KUI REN 等: "Adversarial Attacks and Defenses in Deep Learning", 《ENGINEERING》 *
RYAN SHEATSLEY: "Adversarial Examples in Constrained Domains", 《PENN STATE ELECTRONIC THESES AND DISSERTATIONS FOR GRADUATE SCHOOL》 *
XIAOLEI LIU 等: "A Black-box Attack on Neural Networks Based on Swarm Evolutionary Algorithm", 《ARXIV》 *
马玉琨 et al.: "An adversarial example generation algorithm for face liveness detection", 《Journal of Software (软件学报)》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766324A (en) * 2021-01-02 2021-05-07 西安电子科技大学 Image confrontation sample detection method, system, storage medium, terminal and application
CN112766324B (en) * 2021-01-02 2024-02-02 西安电子科技大学 Image countermeasure sample detection method, system, storage medium, terminal and application
CN112990015A (en) * 2021-03-16 2021-06-18 北京智源人工智能研究院 Automatic lesion cell identification method and device and electronic equipment
CN112990015B (en) * 2021-03-16 2024-03-19 北京智源人工智能研究院 Automatic identification method and device for lesion cells and electronic equipment
CN116094824A (en) * 2023-02-07 2023-05-09 电子科技大学 Detection system and method for few sample malicious traffic
CN116094824B (en) * 2023-02-07 2024-02-20 电子科技大学 Detection system and method for few sample malicious traffic
CN116824150A (en) * 2023-04-24 2023-09-29 苏州梅曼智能科技有限公司 Industrial image feature extraction method based on generated countermeasure model

Also Published As

Publication number Publication date
CN111783890B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN111783890B (en) Small pixel countermeasure sample defense method for image recognition process
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN113554089B (en) Image classification countermeasure sample defense method and system and data processing terminal
CN111475797B (en) Method, device and equipment for generating countermeasure image and readable storage medium
WO2021189364A1 (en) Method and device for generating adversarial image, equipment, and readable storage medium
CN110348475B (en) Confrontation sample enhancement method and model based on spatial transformation
Gragnaniello et al. Analysis of adversarial attacks against CNN-based image forgery detectors
Choi et al. Detecting composite image manipulation based on deep neural networks
CN112784790B (en) Generalization false face detection method based on meta-learning
CN113127857B (en) Deep learning model defense method aiming at adversarial attack and deep learning model
CN113627543B (en) Anti-attack detection method
CN113269228B (en) Method, device and system for training graph network classification model and electronic equipment
CN112507811A (en) Method and system for detecting face recognition system to resist masquerading attack
CN114677722A (en) Multi-supervision human face in-vivo detection method integrating multi-scale features
CN113379618A (en) Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
CN112990357B (en) Black box video countermeasure sample generation method based on sparse disturbance
Wang et al. Adversarial analysis for source camera identification
Ghafourian et al. Toward face biometric de-identification using adversarial examples
CN114049537A (en) Convergence neural network-based countermeasure sample defense method
CN113822377A (en) Fake face detection method based on contrast self-learning
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
Huang et al. Multi-Teacher Single-Student Visual Transformer with Multi-Level Attention for Face Spoofing Detection.
CN115759190A (en) Cross-domain black box model reverse attack method
Lin et al. Exploiting temporal information to prevent the transferability of adversarial examples against deep fake detectors
CN114913607A (en) Finger vein counterfeit detection method based on multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant