CN116883780B - Adaptive position constraint sparse countermeasure sample generation method based on domain transformation - Google Patents

Adaptive position constraint sparse countermeasure sample generation method based on domain transformation

Info

Publication number
CN116883780B
CN116883780B (application CN202310785125.9A)
Authority
CN
China
Prior art keywords
target
disturbance
loss
model
attack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310785125.9A
Other languages
Chinese (zh)
Other versions
CN116883780A (en)
Inventor
戚永军
宋媛萌
贾正正
王宇辰
贾召弟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Institute of Aerospace Engineering
Original Assignee
North China Institute of Aerospace Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2023-06-29
Publication date: 2023-12-08
Application filed by North China Institute of Aerospace Engineering
Priority to CN202310785125.9A
Publication of CN116883780A
Application granted
Publication of CN116883780B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0495 - Quantised networks; Sparse networks; Compressed networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/765 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/7715 - Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a domain-transformation-based adaptive position-constrained sparse adversarial sample generation method. First, a model with an encoder-decoder structure is used to generate the adversarial perturbation: the decoding of the encoded image features is decoupled into two branches, one generating an adversarial perturbation within a global bound and the other generating a binary mask that limits which pixels may be modified; fusing the perturbation with the mask yields a globally sparse adversarial perturbation. A domain transform is then applied to the input image to extract high-frequency image features, which are binarized with an adaptive binarization algorithm and normalized to [0,1]. Finally, the resulting image features are fused with the globally sparse adversarial perturbation, and the fusion result is added to the original image to obtain the adversarial sample image.

Description

Adaptive position constraint sparse countermeasure sample generation method based on domain transformation
Technical Field
The invention belongs to the technical field of adversarial attacks on machine learning, and particularly relates to a domain-transformation-based adaptive position-constrained sparse adversarial sample generation method.
Background
An adversarial sample is an artificially crafted input to a machine learning model, designed to induce an erroneous output while remaining imperceptible to the human eye. Adversarial attacks are an important issue in machine learning model security; especially in safety-critical fields and artificial intelligence applications, they pose a significant threat to model robustness and security.
Deep neural network models have achieved great success in various classification and recognition tasks. However, deep neural networks are vulnerable to adversarial attacks that exploit their model structure: by adding small perturbations imperceptible to humans, an attacker can cause the model to produce erroneous outputs. Adversarial attack modes include the L-infinity norm attack, the L1 and L2 norm attacks, the L0 norm attack, and so on. The L0 norm attack modifies a fixed number of pixels of the input to the machine learning model; unlike the other attack modes, it limits how many pixels are changed, making the adversarial sample harder for the human eye to perceive. Meanwhile, to further improve the robustness and reliability of the attack, a sparse matrix is used as the mask matrix that determines where perturbed pixels are placed, which is why the L0 norm attack is also referred to as a sparse attack method. In addition, adversarial attacks are classified into targeted attacks and untargeted attacks according to whether a target class is specified.
In current adversarial sample generation methods, in order to fool a deep neural network model, an attacker typically adds a large amount of noise and perturbation to the original image while trying to keep the attacked image visually similar to the original. However, this approach is easily affected by the model's sensitivity to noise and perturbation, and the excessive perturbation makes the attacked image look unnatural, degrading the robustness and reliability of the attack. Therefore, how to deceive a deep neural network while reducing the amount of added perturbation as far as possible, thereby enhancing the robustness and reliability of the attack, has become one of the research hotspots in the adversarial attack field.
Existing sparse adversarial attack methods for images place no constraint on the perturbation positions, so the perturbed pixels are relatively conspicuous. In the real world, humans attend mainly to the high-frequency parts of an image, whose colors and textures are richer, making them better suited for hiding adversarial perturbation. To address this problem, the invention provides an adaptive position-constrained sparse adversarial sample generation method based on domain transformation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a domain-transformation-based adaptive position-constrained sparse adversarial sample generation method that reduces the visibility of the modifications made to adversarial sample data, and by providing generation-model constructions for two attack modes, targeted attack and untargeted attack, so that the method can meet adversarial sample generation needs in different scenarios.
The invention is realized by the following technical scheme:
In the adaptive position-constrained sparse adversarial sample generation method based on domain transformation, an original image sample is first input into an encoder to obtain depth features, and the depth features are then decoded by two decoders: the first decoder generates globally bounded perturbation data, and the second decoder generates a binary mask matrix that controls which perturbed pixel positions are retained. The global perturbation data obtained by the first decoder and the mask matrix obtained by the second decoder are then combined by element-wise (dot) multiplication to obtain a preliminary sparse perturbation matrix;
the original image sample is also input into a wavelet transform layer to obtain the corresponding high-frequency feature image; this output is then binarized with an adaptive binarization algorithm and normalized, yielding a high-frequency position-limiting binary matrix that restricts the region to which perturbation may be added. The binary matrix is multiplied with the sparse perturbation matrix to obtain the final adversarial perturbation, which is added to the original image to obtain the final adversarial sample image.
In the above technical solution, the output of the first decoder is nonlinearly mapped to lie within [-eps, +eps], where eps is the maximum acceptable perturbation value.
In the above technical solution, the output of the second decoder is mapped into [0,1] to obtain a probability matrix, which is then mapped to 0/1 codes by a binarization operation to obtain the binary mask matrix; the mask matrix retains the pixel perturbations within the limited perturbation range.
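For illustration, the two decoder branches described above might be realized as in the following minimal PyTorch-style sketch. The tanh and sigmoid mappings, the module names encoder, perturb_decoder and mask_decoder, and the default eps value are all assumptions; the patent does not disclose the concrete architectures or mappings.

```python
import torch

def generate_sparse_perturbation(x, encoder, perturb_decoder, mask_decoder,
                                 eps=8.0 / 255.0):
    """x: batch of original images, shape (N, C, H, W), values in [0, 1]."""
    feat = encoder(x)                          # depth features

    # First decoder: nonlinear mapping (tanh assumed) into [-eps, +eps].
    delta = eps * torch.tanh(perturb_decoder(feat))

    # Second decoder: probability matrix in [0, 1] (sigmoid assumed), then a
    # hard 0/1 mask (deterministic thresholding shown here; the stochastic
    # variant described in the next paragraph would replace this line).
    prob = torch.sigmoid(mask_decoder(feat))
    mask = (prob > 0.5).float()

    # Element-wise (dot) multiplication keeps perturbation only at retained
    # pixel positions: the preliminary sparse perturbation matrix.
    return delta * mask, prob
```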
In the above technical solution, a random quantization operator is introduced during binarization: binary quantization is performed when P(x)=1 and the original value is retained when P(x)=0, where P(x) is a probability that obeys a Bernoulli distribution.
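One possible reading of this operator, as a sketch: a Bernoulli draw decides per element whether to quantize or to keep the continuous value, and a straight-through estimator keeps the step differentiable. The parameter p_quant and the straight-through trick are assumptions; the patent only specifies the P(x)=1 / P(x)=0 behavior.

```python
import torch

def stochastic_binarize(prob, p_quant=0.5):
    """prob: probability matrix in [0, 1] from the second decoder."""
    # P(x) ~ Bernoulli(p_quant): 1 -> binary quantization, 0 -> keep value.
    p = torch.bernoulli(torch.full_like(prob, p_quant))
    hard = (prob > 0.5).float()           # quantization to {0, 1}
    out = p * hard + (1.0 - p) * prob     # retain original value where P(x)=0
    # Straight-through estimator: the forward pass returns `out`, while the
    # backward pass treats the operation as identity, so gradients can still
    # reach the decoder despite the non-differentiable thresholding.
    return prob + (out - prob).detach()
```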
In this technical scheme, the invention designs an untargeted-attack adversarial sample generation model and a targeted-attack adversarial sample generation model according to the attack scenario; both models execute the adaptive position-constrained sparse adversarial sample generation method based on domain transformation. Whether the attack scenario is a targeted or an untargeted attack is determined by checking whether the input original image sample carries a label specifying the output class, and the corresponding generation model is selected to produce the adversarial sample.
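As a minimal sketch of this scenario check (the dict layout and the key name target_label are illustrative assumptions, not specified by the patent):

```python
def select_generation_model(sample, untargeted_model, targeted_model):
    """Choose the generation model from the presence of a target-class label."""
    if sample.get("target_label") is not None:
        return targeted_model    # a target class is specified: targeted attack
    return untargeted_model      # no target class given: untargeted attack
```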
In the above technical solution, the loss functions used to train the untargeted-attack adversarial sample generation model and the targeted-attack adversarial sample generation model each comprise three parts: a generation loss, a binary loss, and a model recognition loss. The generation loss and binary loss functions of the two models are identical, while their model recognition loss functions differ as follows:
the model recognition loss function of the target attack countermeasure sample generation model is as follows:
loss(pred,target)=CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the target class value, and CrossEntropyLoss is the cross-entropy loss function;
the model identification loss function of the model generated by the target-free attack countermeasure sample is as follows:
loss(pred,target)=1-CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the true label class value of the input data, and CrossEntropyLoss is the cross-entropy loss function;
the comprehensive loss function is as follows:
loss = α·L_gen + β·L_bin + γ·L_rec
wherein L_gen, L_bin and L_rec respectively denote the generation loss, the binary loss and the model recognition loss, and α, β and γ are weighting coefficients.
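The recognition losses and the weighted sum can be sketched directly in PyTorch; the values of the weighting coefficients are assumptions, as the patent does not specify them:

```python
import torch.nn.functional as F

def recognition_loss(pred, target, targeted):
    """pred: white-box target model logits; target: the target class for a
    targeted attack, or the true label for an untargeted attack."""
    ce = F.cross_entropy(pred, target)
    return ce if targeted else 1.0 - ce   # formulas as stated above

def total_loss(l_gen, l_bin, l_rec, alpha=1.0, beta=1.0, gamma=1.0):
    # loss = alpha * L_gen + beta * L_bin + gamma * L_rec
    return alpha * l_gen + beta * l_bin + gamma * l_rec
```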
The invention has the advantages and beneficial effects that:
the invention is mainly used for solving the problem that the visual hiding degree of the countermeasure sample generated by the countermeasure attack method of the current image classification model L0 is not high, and the disturbance additional area is limited in a high-frequency area, so that the additional position of the added disturbance pixel point can be limited in the range of a required area with obvious color change and severe texture change, and the disturbance invisibility of the countermeasure sample generated by the model can be obviously improved. Meanwhile, the anti-sample generated by the invention can keep relatively good attack effect. Meanwhile, the invention provides the construction of the generation methods of the target attack and the non-target attack, which can adapt to the generation requirements of the countermeasure sample under different scenes.
Drawings
Fig. 1 is a schematic diagram of the domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of the present invention.
Fig. 2 is a flow chart of automatically selecting the corresponding model for adversarial sample generation according to different attack scenarios.
Other relevant drawings may be derived from the above figures by those of ordinary skill in the art without creative effort.
Detailed Description
In order that those skilled in the art may better understand the solution of the present invention, the solution is described below with reference to specific embodiments.
Example 1
Referring to Fig. 1, the method first inputs an original image sample (an image that the white-box target model classifies correctly) into an encoder to obtain depth features, and then decodes the depth features with two decoders: the first decoder generates globally bounded perturbation data, and the second decoder generates a binary mask matrix that controls which perturbed pixel positions are retained.
Specifically, the method comprises the following steps: the output of the first decoder is nonlinearly mapped into [-eps, +eps], where eps is the maximum acceptable perturbation value, so as to generate the global perturbation data and bound the perturbation range; the output of the second decoder is mapped into [0,1] to obtain a probability matrix, which is then binarized into 0/1 codes to obtain the binary mask matrix; the mask matrix retains the pixel perturbations within the limited perturbation range. Furthermore, to enable back-propagation, a random quantization operator is introduced during binarization: binary quantization is performed when P(x)=1 and the original value is retained when P(x)=0, where P(x) is a probability obeying a Bernoulli distribution.
The global perturbation data obtained by the first decoder and the mask matrix obtained by the second decoder are then multiplied element-wise to obtain a preliminary sparse perturbation matrix.
The original image sample is also input into a wavelet transform layer to obtain the corresponding high-frequency feature image; this output is binarized with an adaptive binarization algorithm and normalized, yielding a high-frequency position-limiting binary matrix that restricts the region to which perturbation may be added. The binary matrix is multiplied with the sparse perturbation matrix to obtain the final adversarial perturbation, which is added to the original image to obtain the final adversarial sample image.
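A sketch of this high-frequency position constraint follows, assuming a single-level Haar wavelet and Otsu thresholding as the adaptive binarization algorithm (the patent names neither the wavelet nor the thresholding method); PyWavelets and OpenCV are used purely for illustration:

```python
import numpy as np
import pywt
import cv2

def high_frequency_mask(img_gray):
    """img_gray: 2-D grayscale image in [0, 1]. Returns a 0/1 matrix marking
    high-frequency regions to which perturbation may be added."""
    # Single-level 2-D Haar DWT; keep only the high-frequency sub-bands.
    _, (lh, hl, hh) = pywt.dwt2(img_gray, "haar")
    high = np.abs(lh) + np.abs(hl) + np.abs(hh)

    # Upsample the half-resolution sub-bands back to the input size.
    high = cv2.resize(high, img_gray.shape[::-1], interpolation=cv2.INTER_LINEAR)

    # Adaptive binarization (Otsu assumed) followed by normalization to {0, 1}.
    high_u8 = cv2.normalize(high, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(high_u8, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask.astype(np.float32)

# Fusion: final_perturbation = sparse_perturbation * high_frequency_mask(img),
# and the adversarial sample image = original image + final_perturbation.
```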
Example 2
Building on Example 1, two adversarial sample generation models, one for untargeted and one for targeted attacks (namely the untargeted-attack adversarial sample generation model and the targeted-attack adversarial sample generation model), are designed according to the attack scenario, and the corresponding model can be selected automatically from the input original image sample to generate the adversarial sample.
An untargeted attack does not specify the class of the output result; it only requires the induced recognition result to differ from the true class label of the original image. A targeted attack specifies the class of the erroneous output that the model is induced to produce, and this class is inconsistent with the true label of the input image; that is, for a targeted attack the original image sample fed to the model must carry a label specifying the output class. Whether the attack scenario is a targeted or an untargeted attack can therefore be determined by checking whether the input original image sample carries such a label, and the corresponding adversarial sample generation model is selected to generate the adversarial sample.
The untargeted-attack and targeted-attack adversarial sample generation models have the same structure: each comprises one encoder and two decoders as in Example 1, and each performs the above-described domain-transformation-based adaptive position-constrained sparse adversarial sample generation method. The two models differ only in the loss functions used during training. Both generation models need to be trained: during training, the generated adversarial sample is input into the white-box target model, the loss function is computed through the white-box target model, and the encoder and decoder parameters of the generation models are updated iteratively according to the computed loss until the loss value meets the set requirement, yielding untargeted-attack and targeted-attack adversarial sample generation models that satisfy the requirements.
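A single training step might look like the following sketch; the generator interface, the concrete forms of the generation loss and binary loss, and the clamping of the adversarial sample to [0, 1] are assumptions not spelled out in the patent:

```python
import torch
import torch.nn.functional as F

def train_step(generator, whitebox, x, label, targeted, optimizer,
               alpha=1.0, beta=1.0, gamma=1.0):
    """One optimization step. `generator` bundles the encoder and the two
    decoders; `whitebox` is the frozen white-box target model. For a targeted
    attack `label` is the target class, otherwise it is the true label."""
    optimizer.zero_grad()
    delta, prob = generator(x)               # sparse perturbation + mask probs
    adv = torch.clamp(x + delta, 0.0, 1.0)   # adversarial sample image

    ce = F.cross_entropy(whitebox(adv), label)
    l_rec = ce if targeted else 1.0 - ce     # model recognition loss (as above)
    l_gen = delta.abs().mean()               # generation loss (assumed form)
    l_bin = (prob * (1.0 - prob)).mean()     # binary loss (assumed form)

    loss = alpha * l_gen + beta * l_bin + gamma * l_rec
    loss.backward()                          # gradients into encoder/decoders
    optimizer.step()                         # update generator parameters
    return loss.item()
```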
Specifically, the loss functions used to train the untargeted-attack and targeted-attack adversarial sample generation models each comprise three parts: a generation loss, a binary loss, and a model recognition loss, where the generation loss refers to the sampling loss and the binary loss refers to the binarization loss. The generation loss and binary loss functions of the two models are identical, but their model recognition loss functions differ as follows:
the model recognition loss function of the target attack countermeasure sample generation model is as follows:
loss(pred,target)=CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the target class value, and CrossEntropyLoss is the cross-entropy loss function.
The model recognition loss function of the untargeted-attack adversarial sample generation model is:
loss(pred,target)=1-CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the true label class value of the input data, and CrossEntropyLoss is the cross-entropy loss function.
The overall loss function is:
loss = α·L_gen + β·L_bin + γ·L_rec
wherein L_gen, L_bin and L_rec respectively denote the generation loss, the binary loss and the model recognition loss, and α, β and γ are weighting coefficients.
After training is completed, untargeted-attack and targeted-attack adversarial sample generation models meeting the requirements are obtained. Then, referring to Fig. 2, whether the attack scenario is a targeted or an untargeted attack is determined by whether the input original image sample carries a label specifying the output class, and the corresponding adversarial sample generation model is selected to generate the adversarial sample.
The foregoing describes exemplary embodiments of the invention. It should be understood that any simple variations, modifications, or other equivalent substitutions that a person skilled in the art can make without creative effort, and without departing from the spirit of the invention, fall within the protection scope of the invention.

Claims (6)

1. An adaptive position-constrained sparse adversarial sample generation method based on domain transformation, characterized by comprising the following steps: first inputting an original image sample into an encoder to obtain depth features, and then decoding the depth features with two decoders, wherein the first decoder is used to generate globally bounded perturbation data and the second decoder is used to generate a binary mask matrix controlling which perturbed pixel positions are retained; then performing an element-wise (dot) multiplication of the global perturbation data obtained by the first decoder with the mask matrix obtained by the second decoder to obtain a preliminary sparse perturbation matrix;
inputting the original image sample into a wavelet transform layer to obtain a corresponding high-frequency feature image, binarizing this output with an adaptive binarization algorithm, and normalizing it to obtain a high-frequency position-limiting binary matrix capable of restricting the region to which perturbation is added; and multiplying the binary matrix with the sparse perturbation matrix to obtain a final adversarial perturbation, and adding the adversarial perturbation to the original image to obtain a final adversarial sample image.
2. The domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of claim 1, wherein: the output of the first decoder is nonlinearly mapped into [-eps, +eps], where eps is the maximum acceptable perturbation value.
3. The domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of claim 1, wherein: the output of the second decoder is mapped into [0,1] to obtain a probability matrix, which is then mapped to 0/1 codes by a binarization operation to obtain a binary mask matrix; the mask matrix retains the pixel perturbations within the limited perturbation range.
4. The domain-transformation-based adaptive position-constrained sparse adversarial sample generation method according to claim 3, wherein: a random quantization operator is introduced during binarization; binary quantization is performed when P(x)=1 and the original value is retained when P(x)=0, where P(x) is a probability obeying a Bernoulli distribution.
5. The domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of claim 1, wherein: an untargeted-attack adversarial sample generation model and a targeted-attack adversarial sample generation model are designed according to the attack scenario, both of which execute the domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of claim 1; whether the attack scenario is a targeted attack or an untargeted attack is determined by whether the input original image sample carries a label specifying the output class, and the corresponding adversarial sample generation model is selected to generate the adversarial sample.
6. The domain-transformation-based adaptive position-constrained sparse adversarial sample generation method of claim 5, wherein: the loss functions used to train the untargeted-attack and targeted-attack adversarial sample generation models each comprise three parts: a generation loss, a binary loss and a model recognition loss, wherein the generation loss and binary loss functions of the two models are identical, while their model recognition loss functions differ as follows:
the model recognition loss function of the targeted-attack adversarial sample generation model is:
loss(pred,target)=CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the target class value, and CrossEntropyLoss is the cross-entropy loss function;
the model recognition loss function of the untargeted-attack adversarial sample generation model is:
loss(pred,target)=1-CrossEntropyLoss(pred,target)
wherein pred is the prediction output by the white-box target model, target is the true label class value of the input data, and CrossEntropyLoss is the cross-entropy loss function;
the overall loss function is:
loss = α·L_gen + β·L_bin + γ·L_rec
wherein L_gen, L_bin and L_rec respectively denote the generation loss, the binary loss and the model recognition loss, and α, β and γ are weighting coefficients.
CN202310785125.9A 2023-06-29 2023-06-29 Adaptive position constraint sparse countermeasure sample generation method based on domain transformation Active CN116883780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310785125.9A 2023-06-29 2023-06-29 Adaptive position constraint sparse countermeasure sample generation method based on domain transformation (granted as CN116883780B)


Publications (2)

Publication Number Publication Date
CN116883780A 2023-10-13
CN116883780B 2023-12-08

Family

ID=88259606

Family Applications (1)

Application Number Priority Date Filing Date Title
CN202310785125.9A 2023-06-29 2023-06-29 Adaptive position constraint sparse countermeasure sample generation method based on domain transformation (granted as CN116883780B, active)

Country Status (1)

Country Link
CN (1) CN116883780B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10825219B2 (en) * 2018-03-22 2020-11-03 Northeastern University Segmentation guided image generation with adversarial networks
US20230186055A1 (en) * 2021-12-14 2023-06-15 Rensselaer Polytechnic Institute Decorrelation mechanism and dual neck autoencoder for deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222960A (en) * 2021-05-27 2021-08-06 哈尔滨工程大学 Deep neural network confrontation defense method, system, storage medium and equipment based on feature denoising
CN116071797A (en) * 2022-12-29 2023-05-05 北华航天工业学院 Sparse face comparison countermeasure sample generation method based on self-encoder
CN116051924A (en) * 2023-01-03 2023-05-02 中南大学 Divide-and-conquer defense method for image countermeasure sample

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jinqiao Li et al.; "AutoAdversary: A Pixel Pruning Method for Sparse Adversarial Attack"; arXiv:2203.09756; pp. 1-11 *
李哲铭; "Research on Adversarial Example Generation Methods for Image Classification Transfer Attacks" (in Chinese); China Masters' Theses Full-text Database, Information Science and Technology; I138-67 *
刘嘉阳; "Research on Defense Methods against Adversarial Examples for Image Classification" (in Chinese); China Doctoral Dissertations Full-text Database, Information Science and Technology; I138-162 *

Also Published As

Publication number Publication date
CN116883780A (en) 2023-10-13

Similar Documents

Publication Title
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN111681154B (en) Color image steganography distortion function design method based on generation countermeasure network
CN110276708B (en) Image digital watermark generation and identification system and method based on GAN network
Shin et al. Region-based dehazing via dual-supervised triple-convolutional network
CN111325169B (en) Deep video fingerprint algorithm based on capsule network
Liu et al. Optimum adaptive array stochastic resonance in noisy grayscale image restoration
CN114157773B (en) Image steganography method based on convolutional neural network and frequency domain attention
CN115619616A (en) Method, device, equipment and medium for generating confrontation sample based on watermark disturbance
CN116883780B (en) Adaptive position constraint sparse countermeasure sample generation method based on domain transformation
Meng et al. High-capacity steganography using object addition-based cover enhancement for secure communication in networks
Lu et al. An interpretable image tampering detection approach based on cooperative game
CN116071797B (en) Sparse face comparison countermeasure sample generation method based on self-encoder
CN113221388A (en) Method for generating confrontation sample of black box depth model constrained by visual perception disturbance
CN113034332A (en) Invisible watermark image and backdoor attack model construction and classification method and system
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
Shi et al. Integrating deep learning and traditional image enhancement techniques for underwater image enhancement
Kandhway et al. Modified clipping based image enhancement scheme using difference of histogram bins
CN114900586B (en) Information steganography method and device based on DCGAN
CN116012501A (en) Image generation method based on style content self-adaptive normalized posture guidance
Cui et al. Deeply‐Recursive Attention Network for video steganography
CN114842242A (en) Robust countermeasure sample generation method based on generative model
CN114418821A (en) Blind watermark processing method based on image frequency domain
CN112073732A (en) Method for embedding and decoding image secret characters of underwater robot
CN116595515A (en) Anti-sample defense method and system based on denoising self-encoder
Luo et al. Content-adaptive Adversarial Embedding for Image Steganography Using Deep Reinforcement Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant