CN112150338B - Neural network model image watermark removing method - Google Patents

Neural network model image watermark removing method

Info

Publication number
CN112150338B
Authority
CN
China
Prior art keywords
image
model
enhanced
original image
batch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010992917.XA
Other languages
Chinese (zh)
Other versions
CN112150338A (en)
Inventor
李琦
刘旋恺
李丰廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202010992917.XA priority Critical patent/CN112150338B/en
Publication of CN112150338A publication Critical patent/CN112150338A/en
Application granted granted Critical
Publication of CN112150338B publication Critical patent/CN112150338B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

The invention provides a method for removing an image watermark from a neural network model, and belongs to the technical field of artificial intelligence security. The method first obtains an enhanced image set by covering rectangular noise at random positions of the original images of a training data set. The selected enhanced images and the corresponding original images are then input into the model to be trained; the last convolutional layer of the model outputs the feature distribution corresponding to each input image, which is normalized to obtain the corresponding normalized feature distribution. The distance between the normalized feature distributions of each original image and its corresponding enhanced image is calculated, and this feature distribution distance is penalized; the model is iteratively trained with the resulting loss function to finally obtain the neural network model with the watermark removed. Based on limited image data, the invention uses image enhancement and feature distribution optimization techniques to remove the backdoor watermark, which helps to improve the security of model watermarks and makes the intellectual property protection of models more reliable.

Description

Neural network model image watermark removing method
Technical Field
The invention belongs to the technical field of artificial intelligence security, and in particular provides a method for removing an image watermark from a neural network model.
Background
In recent years, neural network models have developed rapidly and have been widely used in fields such as image recognition. Training a neural network model that can accurately identify images requires a large amount of computational overhead and massive image training data. In order to protect the copyright of a neural network model, the model owner can use image watermarking technology to inject a watermark into the model and later verify whether a suspicious model is his own, thereby effectively protecting the intellectual property of the model. The current common model watermarking technique is based on a backdoor approach, and its basic idea is to leave a backdoor while training the model: the model correctly identifies normal images, but when a specific watermark is overlaid on a normal image, the model's output is misclassified into a preset, wrong category. The model owner can therefore trigger the backdoor with specific input images constructed in advance, and determine whether the model is his own by observing whether the output of the neural network is the specified result.
In order to test the security and usability of model watermarks, the robustness of model watermarks must be considered. Considering possible watermark-removal strategies and potential risks provides ideas and directions for future model watermark injection, and helps follow-up model watermarking techniques keep improving toward a safer and more robust direction, so that the intellectual property of neural network models is better protected. Regarding the removal of backdoor watermarks, there are currently mainly the following schemes:
1) Strategy based on continuing to train the model. The most intuitive method for removing the watermark is to continue training the watermark model with normal image data, so that the model gradually forgets the watermark pattern during training. The "fine-tuning parameters" framework removes the model watermark by continuing to train the watermark model on normal image data with a large learning rate, but this can cause a dramatic drop in the model's accuracy on the original test set. This approach therefore relies heavily on careful design of the learning rate. In addition, the scheme needs to introduce a large amount of additional unlabeled data, which increases the cost of removing the watermark.
2) Strategy based on model pruning. This scheme is a common method for defending against backdoor attacks on neural network models. Its basic strategy is to first input normal images into the model and record the activation value of each neuron, and then, in ascending order of activation, keep removing the neurons that are not activated while observing how the model's accuracy on the test set changes; the process is terminated when the model accuracy falls below a set threshold (a code sketch of this procedure is given after this list). Pruning-based strategies have been shown to defend against backdoor attacks, but they assume that the malicious attacker possesses the complete training data set or validation set used by the model owner, a strong assumption that is difficult to satisfy in real-world scenarios.
3) Strategy based on an L2 regularization term. This scheme considers that the essential cause of the backdoor watermark is that the model overfits the watermark pattern, so that the model misclassifies images containing the watermark whenever the watermark pattern appears. The scheme therefore adds an L2 regularization term while training the watermark model to avoid overfitting to the watermark data, thereby achieving the effect of removing the watermark. However, this scheme also requires a data volume of the same scale as the training set, and does not consider the limited amount of data available in real scenarios.
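For illustration only, a minimal PyTorch sketch of the pruning strategy described in item 2) is given below. The helper evaluate_accuracy, the choice to prune whole output channels of a single convolutional layer, and the mean-absolute-activation statistic are assumptions made for this example rather than details taken from the cited scheme.

```python
import torch

def evaluate_accuracy(model, loader, device="cpu"):
    """Helper (hypothetical): top-1 accuracy of the model on a labeled loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.size(0)
    return correct / max(total, 1)

def prune_by_activation(model, conv_layer, clean_loader, test_loader, acc_threshold, device="cpu"):
    """Zero out the least-activated output channels of one convolutional layer,
    in ascending order of their mean activation on clean images, until test
    accuracy drops below acc_threshold."""
    per_batch = []

    def hook(_module, _inp, out):
        # record the mean absolute activation of each output channel for this batch
        per_batch.append(out.detach().abs().mean(dim=(0, 2, 3)))

    handle = conv_layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for images, _ in clean_loader:       # only normal (clean) images are used here
            model(images.to(device))
    handle.remove()

    mean_act = torch.stack(per_batch).mean(dim=0)   # per-channel mean activation
    for ch in torch.argsort(mean_act):              # least active channels first
        conv_layer.weight.data[ch] = 0.0
        if conv_layer.bias is not None:
            conv_layer.bias.data[ch] = 0.0
        if evaluate_accuracy(model, test_loader, device) < acc_threshold:
            break                                    # stop once accuracy falls below the set threshold
    return model
```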
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for removing an image watermark from a neural network model. Based on limited image data, the invention uses image enhancement and feature distribution optimization techniques to remove the backdoor watermark. The invention shows that current model watermarks may be at risk of being attacked, and also provides new ideas for the improvement of subsequent model watermarking techniques, which helps to improve the security of model watermarks and makes the intellectual property protection of models more reliable.
The invention provides a method for removing a neural network model image watermark, which is characterized by comprising the following steps:
1) Constructing a training data set, covering rectangular noise at random positions of each image of the training data set to obtain corresponding enhanced images, and forming an enhanced image set; the method comprises the following specific steps:
1-1) constructing a training data set;
acquiring an image data set, and randomly selecting m images from the image data set to form a training data set;
1-2) taking each image of the training data set obtained in step 1-1) as an original image, randomly selecting a position in the original image and covering it with rectangular noise, specifically: selecting a rectangle at a random position of the original image, where the ratios of the rectangle's length and width to those of the whole image are each randomly selected within the range 0.1-0.3; the rectangular area is filled with random noise whose pixel-value intensities are randomly selected within the range 0-255, and the other areas of the image are kept unchanged, thereby obtaining a noise-based enhanced image corresponding to the original image;
1-3) setting the image enhancement multiple to n, repeating step 1-2) n times for each original image of the training data set, each time randomly selecting a different position of the image and covering it with rectangular noise, to obtain n enhanced images corresponding to the original image; finally m×n enhanced images covered with random noise are obtained, forming the enhanced image set;
2) Setting an initial learning rate lambda, and taking a neural network model with the image watermark to be removed as a current model; setting the number of the enhanced images of one batch of the current model to be k;
3) Randomly selecting a batch of enhanced images from the enhanced image set according to the training data set and the enhanced image set obtained in the step 1), and obtaining an original image corresponding to each enhanced image in the batch from the training data set;
then inputting each acquired original image and the k enhanced images of the batch into the current model, and the last convolutional layer of the model outputs the feature distribution corresponding to each input image;
normalizing the feature distribution corresponding to each image output by the model to obtain the feature distribution normalized by the image;
4) Calculating the distance between the normalized feature distribution of each original image and the normalized feature distribution of each enhanced image corresponding to the original image in the batch by using a distribution distance measurement function according to the result of the step 3), and punishing the feature distribution distances of the original image and the enhanced images; the method comprises the following specific steps:
4-1) calculating the distance between the normalized characteristic distribution of each original image obtained in the step 3) and the normalized characteristic distribution of each enhanced image corresponding to the original image in the batch selected in the step 3) by using a distribution distance measurement function;
4-2) inputting each enhanced image of the batch into the current model, and then using a cross entropy loss function to calculate the loss value between the model output corresponding to the enhanced image and the class label of the enhanced image; the class label of the enhanced image is the same as the class label of the original image corresponding to the enhanced image;
5) Calculating a loss function;
the loss function value loss of the current model is calculated as follows:

loss = Σ_{i=1}^{k} [ l_i + β·D(d_i, d_origin) ]

where l_i is the loss value between the current model's output for the ith enhanced image in the batch and the class label of that enhanced image; D is the distribution distance metric function, d_i denotes the normalized feature distribution of the ith enhanced image, and d_origin denotes the normalized feature distribution of the original image corresponding to the ith enhanced image; β is the penalty term coefficient;
6) Using the result of step 5), iteratively training the current model with an RMSProp optimizer: after each batch of training, the current model calculates the corresponding loss value and the model parameters are updated; then returning to step 3) and selecting the next batch of enhanced images from the enhanced image set to train the updated current model, where within one round of training the batches of enhanced images do not repeat; one round of training ends after all the enhanced images in the enhanced image set have been used for training once;
when model training reaches the set upper limit T on the number of rounds, training ends and the neural network model with the watermark removed is finally obtained.
The invention has the characteristics and beneficial effects that:
1. Most current strategies for removing neural network model image watermarks assume an available data volume of the same scale as the training set; however, in real scenarios a malicious attacker can hardly acquire so much data that is independently and identically distributed with the original training data. Furthermore, some methods rely heavily on the setting of the learning rate and require an appropriate learning rate to be chosen in order to remove the watermark. The invention can remove the watermark with only a small amount of limited training data and is independent of the learning rate setting, making watermark removal realistic and feasible.
2. Targeting the principle of the image watermarking technique, the invention improves the model's ability to correctly discriminate the watermark by enhancing the model's robustness to occluded images and optimizing the high-level feature distribution of images, so that the watermark loses its ability to cause the model to misclassify, i.e., the model watermark is removed. The invention evaluates the security of the neural network model watermarking technique from the viewpoint of model robustness and clarifies possible security risks.
3. The invention focuses on the intellectual property protection of neural network models in the field of artificial intelligence security, and points out the defects and improvement directions of current watermarking techniques through robustness analysis and security evaluation of model watermarks, which helps subsequent watermarking techniques to keep improving and thus effectively protects the intellectual property of neural network models.
Detailed Description
The invention provides a method for removing an image watermark from a neural network model, and the method is further described in detail below in combination with specific embodiments.
The invention provides a method for removing a neural network model image watermark, which comprises the following steps:
1) Constructing a training data set, covering random rectangular noise at a random position of each image of the training data set to obtain a corresponding enhanced image, and forming an enhanced image set; the method comprises the following specific steps:
1-1) constructing a training data set;
Acquire an image data set and randomly select a certain proportion of its images, obtaining m images that form the training data set.
The invention has no special requirements on the image data set; common classical image data sets such as ImageNet, CIFAR-10 and MNIST can be used. To make clear that the invention needs only a limited amount of data rather than the whole training set, a limited proportion of the data is randomly selected from the image data set; at least 5% of the data is recommended (increasing the data volume improves the watermark-removal effect). This embodiment takes the CIFAR-10 data set commonly used in image recognition as an example: the original data set contains 60000 images in total, and 10% of the data (6000 images) is randomly selected as the training data set.
1-2) taking each image of the training data set obtained in step 1-1) as an original image and covering rectangular noise at a random position of the original image. The ratios of the rectangle's length and width to those of the whole image are each randomly selected within the range 0.1-0.3, and the intensities of the noise pixel values are randomly selected within the range 0-255. The rectangular area is filled with random noise while the other areas of the image remain unchanged, thereby obtaining a noise-based enhanced image corresponding to the original image.
1-3) setting the image enhancement multiple to n (n is generally in the range 5-20; this embodiment uses 10), i.e., repeating step 1-2) n times for each original image of the training data set, each time randomly selecting a different position of the image for the noise enhancement operation, obtaining n enhanced images corresponding to the original image; finally m×n enhanced images covered with random noise are obtained, forming the enhanced image set. In this embodiment, 60000 enhanced images covered with random noise are obtained in total to form the enhanced image set.
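A minimal NumPy sketch of this noise-based enhancement (steps 1-2 and 1-3) is given below. The function names are illustrative assumptions, while the 0.1-0.3 side-length ratios, the 0-255 noise intensities, and the enhancement multiple n follow the description above.

```python
import numpy as np

def add_random_rect_noise(image, min_ratio=0.1, max_ratio=0.3):
    """Cover one randomly placed rectangle of an (H, W, C) uint8 image with
    uniform random noise in [0, 255]; all other pixels are left unchanged."""
    h, w = image.shape[:2]
    rect_h = max(1, int(h * np.random.uniform(min_ratio, max_ratio)))  # rectangle height: 10%-30% of image height
    rect_w = max(1, int(w * np.random.uniform(min_ratio, max_ratio)))  # rectangle width: 10%-30% of image width
    top = np.random.randint(0, h - rect_h + 1)
    left = np.random.randint(0, w - rect_w + 1)
    enhanced = image.copy()
    enhanced[top:top + rect_h, left:left + rect_w] = np.random.randint(
        0, 256, size=(rect_h, rect_w) + image.shape[2:], dtype=np.uint8)
    return enhanced

def build_enhanced_set(original_images, n=10):
    """For each of the m original images, generate n noise-covered copies,
    giving m*n enhanced images; each copy keeps the index of its original."""
    return [(idx, add_random_rect_noise(img))
            for idx, img in enumerate(original_images)
            for _ in range(n)]
```

In the CIFAR-10 example above, original_images would be the 6000 randomly selected 32×32×3 images, so build_enhanced_set would yield the 60000 enhanced images of the enhanced image set.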
2) Setting an initial learning rate λ, where λ is suggested to be set to the default value 0.001; taking the original neural network model whose image watermark is to be removed as the current model; setting the number of enhanced images in one batch of the current model to k (k generally ranges from 32 to 512);
3) Randomly selecting a batch of enhanced images from the enhanced image set according to the training data set and the enhanced image set obtained in the step 1), and obtaining an original image corresponding to each enhanced image in the batch from the training data set;
then inputting each acquired original image and the k enhanced images of the batch into the current model, and the last convolutional layer of the model outputs the feature distribution corresponding to each input image (the dimension of the feature distribution is determined by the model's structural parameters);
the invention has no special requirements on the structure of the neural network model. The convolution layer of the neural network model can extract image features and identify watermark patterns, so that the purpose of removing the watermarks is achieved by optimizing the feature distribution of the convolution layer, and the feature distribution of the convolution layer of the final layer of the enhanced image and the final layer of the original image is calculated. This example takes the classical VGG-16 model as an example, which contains 13 convolutional layers, we record the feature distribution of the enhanced image and the original image at the 13 th convolutional layer.
And normalizing the feature distribution corresponding to each image output by the model by using a Sigmoid function to obtain the feature distribution of the normalized image.
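The sketch below shows one way to record the 13th (last) convolutional layer's output of a torchvision VGG-16 and normalize it with a Sigmoid. The forward-hook mechanism and the variable names are implementation assumptions of this example.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(num_classes=10)   # stands in for the watermarked model to be cleaned

# locate the last convolutional layer (the 13th conv layer of VGG-16)
last_conv = [m for m in model.features if isinstance(m, nn.Conv2d)][-1]

_captured = {}

def _hook(_module, _inp, out):
    # normalized feature distribution: Sigmoid of the last conv layer's output
    _captured["dist"] = torch.sigmoid(out)

last_conv.register_forward_hook(_hook)

def normalized_features(images):
    """Forward a batch and return (logits, normalized last-conv feature distribution)."""
    logits = model(images)
    return logits, _captured["dist"]
```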
4) According to the result of step 3), calculating the distance between the normalized feature distribution of each original image and the normalized feature distribution of each corresponding enhanced image in the batch using a distribution distance metric function, and penalizing the feature distribution distance between the original image and the enhanced images. The specific steps are as follows:
4-1) calculating the distance between the normalized characteristic distribution of each original image obtained in the step 3) and the normalized characteristic distribution of each enhanced image corresponding to the original image in the batch selected in the step 3) by using a distribution distance measurement function;
4-2) inputting each enhanced image of the batch into the current model, and then using a cross entropy loss function to calculate the loss value between the model output corresponding to the enhanced image and the class label of the enhanced image; the class label of the enhanced image is the same as the class label of its corresponding original image and is provided by the image data set.
In order to enhance the robustness of the neural network model to watermarked images, we want the feature distributions of the normal image and the enhanced image to be as close as possible. The method uses a widely used cross entropy function or JS divergence function to measure the distance between the two normalized feature distributions, and penalizes this distance through a custom loss function during model training. As the two feature distributions approach each other, the model's ability to correctly recognize the watermark pattern keeps improving, so that the model watermark becomes invalid.
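The two distance choices mentioned here could be implemented roughly as follows; flattening each feature map to a probability vector and adding a small epsilon for numerical stability are assumptions of this sketch rather than requirements stated in the text.

```python
import torch

def _to_prob(x, eps=1e-8):
    """Flatten a normalized feature map to a probability vector per image."""
    x = x.flatten(1)
    return x / (x.sum(dim=1, keepdim=True) + eps)

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between two normalized feature distributions (per image)."""
    p, q = _to_prob(p, eps), _to_prob(q, eps)
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm)

def cross_entropy_distance(p, q, eps=1e-8):
    """Cross entropy of the enhanced-image distribution q w.r.t. the original-image distribution p."""
    p, q = _to_prob(p, eps), _to_prob(q, eps)
    return -(p * (q + eps).log()).sum(dim=1)
```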
5) A loss function is calculated.
The loss function value loss of the current model is calculated as follows:

loss = Σ_{i=1}^{k} [ l_i + β·D(d_i, d_origin) ]

where l_i is the loss value between the current model's output for the ith enhanced image in the batch and the class label of that enhanced image, calculated in step 4); D is the distribution distance metric function, for which this example selects the cross entropy function or JS divergence function mentioned earlier; d_i denotes the normalized feature distribution of the ith enhanced image, and d_origin denotes the normalized feature distribution of the original image corresponding to the ith enhanced image. β is the penalty term coefficient indicating the strength of the penalty on the distribution distance; it is generally recommended to be in the range 0.005-0.02 and is set to 0.01 in this example.
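As a sketch of how the loss of step 5) could be assembled in code, reusing js_divergence from the previous sketch; the per-batch summation mirrors the formula above, and β defaults to the 0.01 used in this example.

```python
import torch.nn.functional as F

def watermark_removal_loss(logits, labels, d_enhanced, d_original, beta=0.01, distance=js_divergence):
    """loss = sum_i [ l_i + beta * D(d_i, d_origin) ] over one batch of k enhanced images."""
    l_i = F.cross_entropy(logits, labels, reduction="none")    # classification loss per enhanced image
    dist = distance(d_enhanced, d_original)                    # feature-distribution distance per image
    return (l_i + beta * dist).sum()
```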
6) Using the result of step 5), iteratively training the current model with an RMSProp optimizer: after each batch of training, the current model calculates the corresponding loss value and the model parameters are updated; then returning to step 3) and selecting the next batch of enhanced images from the enhanced image set to train the updated current model, where within one round of training the batches of enhanced images do not repeat; one round of training ends after all the enhanced images in the enhanced image set have been used for training once. When model training reaches the set upper limit T on the number of rounds (T is generally set between 40 and 60 rounds; this example uses 40 rounds), the neural network model with the watermark removed is finally obtained; for any image input into this model, the watermark has been thoroughly removed.
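Finally, a hedged end-to-end sketch of the training loop in step 6), built on the helper functions from the sketches above. The DataLoader that pairs each enhanced image with its original and class label, and the choice to detach the originals' feature distribution, are assumptions of this example rather than requirements of the method.

```python
import torch

optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)   # initial learning rate lambda = 0.001

def train_one_round(enhanced_loader, beta=0.01, device="cpu"):
    """One round of training: every batch of k enhanced images (paired with
    their originals and class labels) is used exactly once."""
    model.train()
    for enhanced, originals, labels in enhanced_loader:
        enhanced, originals, labels = enhanced.to(device), originals.to(device), labels.to(device)
        logits, d_enhanced = normalized_features(enhanced)      # features of the enhanced images
        with torch.no_grad():                                   # originals act as the target distribution
            _, d_original = normalized_features(originals)
        loss = watermark_removal_loss(logits, labels, d_enhanced, d_original, beta=beta)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

T = 40   # upper limit on the number of rounds, as in this example
# for _ in range(T):
#     train_one_round(enhanced_loader, beta=0.01)
```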

Claims (1)

1. A method for removing an image watermark from a neural network model, characterized by comprising the following steps:
1) Constructing a training data set, covering rectangular noise at random positions of each image of the training data set to obtain corresponding enhanced images, and forming an enhanced image set; the method comprises the following specific steps:
1-1) constructing a training data set;
acquiring an image data set, and randomly selecting m images from the image data set to form a training data set;
1-2) taking each image of the training data set obtained in step 1-1) as an original image, randomly selecting a position in the original image and covering it with rectangular noise, specifically: selecting a rectangle at a random position of the original image, where the ratios of the rectangle's length and width to those of the whole image are each randomly selected within the range 0.1-0.3; the rectangular area is filled with random noise whose pixel-value intensities are randomly selected within the range 0-255, and the other areas of the image are kept unchanged, thereby obtaining a noise-based enhanced image corresponding to the original image;
1-3) setting the image enhancement multiple to n, repeating step 1-2) n times for each original image of the training data set, each time randomly selecting a different position of the image and covering it with rectangular noise, to obtain n enhanced images corresponding to the original image; finally m×n enhanced images covered with random noise are obtained, forming the enhanced image set;
2) Setting an initial learning rate lambda, and taking a neural network model with the image watermark to be removed as a current model; setting the number of the enhanced images of one batch of the current model to be k;
3) Randomly selecting a batch of enhanced images from the enhanced image set according to the training data set and the enhanced image set obtained in the step 1), and obtaining an original image corresponding to each enhanced image in the batch from the training data set;
then inputting each acquired original image and the k enhanced images of the batch into the current model, and the last convolutional layer of the model outputs the feature distribution corresponding to each input image;
normalizing the feature distribution corresponding to each image output by the model to obtain the feature distribution normalized by the image;
4) Calculating the distance between the normalized feature distribution of each original image and the normalized feature distribution of each enhanced image corresponding to the original image in the batch by using a distribution distance measurement function according to the result of the step 3), and punishing the feature distribution distances of the original image and the enhanced images; the method comprises the following specific steps:
4-1) calculating the distance between the normalized characteristic distribution of each original image obtained in the step 3) and the normalized characteristic distribution of each enhanced image corresponding to the original image in the batch selected in the step 3) by using a distribution distance measurement function;
4-2) inputting each enhanced image of the batch into the current model, and then using a cross entropy loss function to calculate the loss value between the model output corresponding to the enhanced image and the class label of the enhanced image; the class label of the enhanced image is the same as the class label of the original image corresponding to the enhanced image;
5) Calculating a loss function;
the loss function value loss of the current model is calculated as follows:

loss = Σ_{i=1}^{k} [ l_i + β·D(d_i, d_origin) ]

where l_i is the loss value between the current model's output for the ith enhanced image in the batch and the class label of that enhanced image; D is the distribution distance metric function, d_i denotes the normalized feature distribution of the ith enhanced image, and d_origin denotes the normalized feature distribution of the original image corresponding to the ith enhanced image; β is the penalty term coefficient;
6) Using the result of step 5), iteratively training the current model with an RMSProp optimizer: after each batch of training, the current model calculates the corresponding loss value and the model parameters are updated; then returning to step 3) and selecting the next batch of enhanced images from the enhanced image set to train the updated current model, where within one round of training the batches of enhanced images do not repeat; one round of training ends after all the enhanced images in the enhanced image set have been used for training once;
when model training reaches the set upper limit T on the number of rounds, training ends and the neural network model with the watermark removed is finally obtained.
CN202010992917.XA 2020-09-21 2020-09-21 Neural network model image watermark removing method Active CN112150338B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010992917.XA CN112150338B (en) 2020-09-21 2020-09-21 Neural network model image watermark removing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010992917.XA CN112150338B (en) 2020-09-21 2020-09-21 Neural network model image watermark removing method

Publications (2)

Publication Number Publication Date
CN112150338A CN112150338A (en) 2020-12-29
CN112150338B true CN112150338B (en) 2023-12-05

Family

ID=73893478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010992917.XA Active CN112150338B (en) 2020-09-21 2020-09-21 Neural network model image watermark removing method

Country Status (1)

Country Link
CN (1) CN112150338B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297571B (en) * 2021-05-31 2022-06-07 浙江工业大学 Method and device for detecting backdoor attacks on graph-oriented neural network models
CN113222804B (en) * 2021-06-02 2022-03-15 景德镇陶瓷大学 Ceramic process-oriented up-sampling ceramic watermark model training method and embedding method
CN114782735B (en) * 2022-02-22 2024-04-26 北京航空航天大学杭州创新研究院 Dish identification method based on multi-region data enhancement
CN114862650B (en) * 2022-06-30 2022-09-23 南京信息工程大学 Neural network watermark embedding method and verification method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122809A (en) * 2017-04-24 2017-09-01 北京工业大学 Neural network feature learning method based on image self-encoding
CN108805789A (en) * 2018-05-29 2018-11-13 厦门市美亚柏科信息股份有限公司 Method, apparatus, device and readable medium for removing watermarks based on adversarial neural networks
CN110599387A (en) * 2019-08-08 2019-12-20 北京邮电大学 Method and device for automatically removing image watermark
CN111028163A (en) * 2019-11-28 2020-04-17 湖北工业大学 Convolution neural network-based combined image denoising and weak light enhancement method
CN110782385A (en) * 2019-12-31 2020-02-11 杭州知衣科技有限公司 Image watermark removing method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Li et al. A technical study on the recognition and classification of engineering drawing documents. Electronic Design Engineering. 2020, (No. 12) *
Visible watermark removal algorithm based on multiple matching; Zhang Mingming; Zhou Quan; Huyan Lang; Computer Engineering and Design (01); 176-182 *

Also Published As

Publication number Publication date
CN112150338A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112150338B (en) Neural network model image watermark removing method
Bayar et al. A deep learning approach to universal image manipulation detection using a new convolutional layer
CN108446700B (en) License plate attack generation method based on anti-attack
Chen et al. POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm
CN111753881A (en) Defense method for quantitatively identifying anti-attack based on concept sensitivity
CN111967592B (en) Method for generating countermeasure image machine identification based on separation of positive and negative disturbance
CN111754519B (en) Class activation mapping-based countermeasure method
CN113297572B (en) Deep learning sample-level anti-attack defense method and device based on neuron activation mode
CN110826581B (en) Animal number identification method, device, medium and electronic equipment
CN112182585B (en) Source code vulnerability detection method, system and storage medium
CN115860112B (en) Model inversion method-based countermeasure sample defense method and equipment
Wang et al. SmsNet: A new deep convolutional neural network model for adversarial example detection
CN112989361A (en) Model security detection method based on generation countermeasure network
CN112926661A (en) Method for enhancing image classification robustness
CN113034332B (en) Invisible watermark image and back door attack model construction and classification method and system
CN116071797B (en) Sparse face comparison countermeasure sample generation method based on self-encoder
CN116188439A (en) False face-changing image detection method and device based on identity recognition probability distribution
Stephen et al. Fingerprint image enhancement through particle swarm optimization
CN116484274A (en) Robust training method for neural network algorithm poisoning attack
CN112907503B (en) Penaeus vannamei Boone quality detection method based on self-adaptive convolutional neural network
CN114842242A (en) Robust countermeasure sample generation method based on generative model
CN115358952A (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN114693973A (en) Black box confrontation sample generation method based on Transformer model
Yoo et al. Defending against adversarial fingerprint attacks based on deep image prior
Bansal et al. Securing fingerprint images using a hybrid technique

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant