CN114254275A - Black-box deep learning model copyright protection method based on adversarial sample fingerprints - Google Patents
Black-box deep learning model copyright protection method based on adversarial sample fingerprints
- Publication number
- CN114254275A (application CN202111358058.XA)
- Authority
- CN
- China
- Prior art keywords
- model
- deep learning
- suspicious
- learning model
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
- G06F21/16—Program or content traceability, e.g. by watermarking
Abstract
The invention discloses a black-box deep learning model copyright protection method based on adversarial sample fingerprints, comprising the following steps: designing difference metrics for deep learning models, implementing an efficient seed-selection strategy and an adversarial sample fingerprint generation method, measuring the similarity of a suspicious model on this basis (only the output of the last layer of the model is needed; no white-box access is required), and finally judging whether the suspicious model constitutes infringement. The method is based on an intrinsic property of deep learning models (robustness), can automatically generate a fingerprint set for an original model, and is effective in various model-stealing scenarios; it is not limited to a particular data domain or model structure and thus has good universality and extensibility. Compared with traditional model-watermark embedding methods, the method does not need to intervene in the training process of the deep learning model, avoiding both the tedious and time-consuming parameter tuning and the accuracy loss caused by embedding a watermark, making copyright verification and protection of deep learning models simple and efficient.
Description
Technical Field
The invention relates to the field of security and privacy of deep learning models, and in particular to a copyright protection method for black-box deep learning models based on adversarial sample fingerprints.
Background
Deep learning has achieved great success in solving many practical problems, such as image recognition, speech recognition, and natural language processing. Training deep learning models, however, is not straightforward and typically requires substantial resources, including large data sets, expensive computing resources, and expert knowledge. Furthermore, the cost of training high-performance models grows rapidly as task complexity and model capacity increase. For example, training a BERT model on Wikipedia and book corpora (15 GB) costs approximately 1.6 million dollars. This gives a malicious adversary (a model thief) an incentive to steal the model and cover their trail, resulting in model copyright infringement and potential economic loss. It has been shown that stealing a model can be done very efficiently, e.g. by fine-tuning or pruning the original model; even when only the API of the original model is exposed, an attacker can still steal most of the model's functionality using model extraction techniques.
Recently proposed model watermarking techniques exploit the overfitting characteristic of deep learning models, embedding secret watermarks (such as signatures) into the model during training in order to protect its copyright. When the same or a similar watermark is extracted from a suspect model, model ownership may be verified. However, current watermarking techniques have two key drawbacks: 1) watermark embedding must intervene in the normal training process, which damages model performance; 2) an overfitted embedded watermark is easy for an attacker to remove, rendering the watermark invalid. Therefore, a new copyright protection method needs to be designed for deep learning models to cope with complex and variable attack scenarios.
Disclosure of Invention
The invention aims to provide a general black-box deep learning model copyright protection method based on adversarial sample fingerprints, overcoming the defects of existing deep learning model watermarking techniques.
The purpose of the invention is realized by the following technical scheme: a black-box deep learning model copyright protection method based on adversarial sample fingerprints, comprising the following steps:
Step 1: based on the deep learning model (original model) requiring copyright protection, selecting representative seeds from the training set using a confidence-priority strategy, and generating a unique adversarial sample fingerprint set with an adversarial attack method;
Step 2: performing fingerprint matching on a suspicious model with the same functionality: taking the adversarial sample fingerprints generated in step 1 as input, obtaining the black-box output of the suspicious model, and calculating the metric differences between the suspicious model and the original model;
Step 3: judging whether model stealing has occurred based on the metric differences: if the metric differences are smaller than the set thresholds, the suspicious model shares a similar decision boundary with the original model and is likely a derivative of it, so model stealing is judged to have occurred; otherwise, it is judged not to have occurred.
Further, in step 1, based on the probability vectors output by the original model on the training set, a 2-norm-based Gini coefficient is calculated, and the subset of samples with the largest Gini coefficients is selected as seeds, so as to better characterize the original model and improve the accuracy of the final judgment.
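As an illustrative sketch of this seed-selection strategy (the helper names and toy data are assumptions, not the patent's implementation), each training sample can be scored by the Gini impurity of its output probability vector, 1 − ‖p‖₂², and the k highest-scoring samples kept as seeds:

```python
import numpy as np

def gini_scores(probs):
    """Gini impurity of each output probability vector: 1 - ||p||_2^2.
    Higher values mean lower model confidence, i.e. samples closer to
    the decision boundary."""
    probs = np.asarray(probs, dtype=float)
    return 1.0 - np.sum(probs ** 2, axis=1)

def select_seeds(probs, k):
    """Indices of the k samples with the largest Gini coefficient."""
    return np.argsort(gini_scores(probs))[::-1][:k]

# Toy probability outputs of an original model on 4 training samples.
p = [[0.98, 0.01, 0.01],   # very confident -> low Gini
     [0.40, 0.35, 0.25],   # uncertain      -> high Gini
     [0.70, 0.20, 0.10],
     [0.34, 0.33, 0.33]]   # near-uniform   -> highest Gini
print(select_seeds(p, 2))  # -> [3 1]
```

Low-confidence samples lie near the decision boundary, which is why they characterize the original model better than confidently classified ones.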
Further, in step 1, for each seed sample x_i, a corresponding adversarial sample x′_i is generated using an untargeted projected gradient descent (PGD) attack, and the generated adversarial sample x′_i and its corresponding reference label y_i are saved, yielding the adversarial sample fingerprint set T = {(x′_1, y_1), (x′_2, y_2), …}.
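A minimal sketch of untargeted PGD follows; to stay self-contained it uses a toy linear softmax model with an analytic input gradient, whereas a real implementation would differentiate the protected deep model (e.g. via TensorFlow). The model, step size, and budget below are assumptions for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd_untargeted(x, y, W, eps=0.3, alpha=0.05, steps=20):
    """Untargeted PGD on a linear softmax model (logits = W @ x):
    ascend the cross-entropy loss of the reference label y, projecting
    every step back into the L-infinity ball of radius eps around x."""
    x0, x_adv = x.copy(), x.copy()
    for _ in range(steps):
        probs = softmax(W @ x_adv)
        onehot = np.zeros_like(probs)
        onehot[y] = 1.0
        grad = W.T @ (probs - onehot)               # d(loss)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)       # gradient-ascent step
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # project into eps-ball
    return x_adv

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))        # toy 3-class "original model"
x = rng.normal(size=5)             # seed sample
y = int(np.argmax(W @ x))          # its reference label
x_adv = pgd_untargeted(x, y, W)    # fingerprint candidate (x', y)
print(bool(np.max(np.abs(x_adv - x)) <= 0.3 + 1e-9))  # bounded by eps
```

The pair (x_adv, y) plays the role of one fingerprint (x′_i, y_i) in the set T.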
Further, in step 2,
a. Two distance metrics, RobD (Robustness Distance) and JSD (Jensen-Shannon Distance), are designed based on the robustness property of the deep learning model:
RobD(T, f, f̂) = (1/|T|) Σ_{(x′_i, y_i)∈T} |1[f(x′_i) = y_i] − 1[f̂(x′_i) = y_i]|
where f is the label-mapping function of the original model: for a given input x_i, the original model outputs the predicted label f(x_i); similarly, f̂ is the label-mapping function of the suspicious model: for the same input x_i, the suspicious model outputs the predicted label f̂(x_i). T = {(x′_1, y_1), (x′_2, y_2), …} is the adversarial sample fingerprint set generated from the original model, where x′_i is an adversarial sample and y_i is its reference label; 1[·] is the Boolean indicator function, returning 1 when the condition f(x′_i) = y_i holds and 0 otherwise.
JSD(T, f_L, f̂_L) = (1/|T|) Σ_{x′_i∈T} ((1/2)·KL(f_L(x′_i) ‖ m_i) + (1/2)·KL(f̂_L(x′_i) ‖ m_i)), where m_i = (f_L(x′_i) + f̂_L(x′_i))/2
where f_L is the output-probability-vector mapping function of the original model: for a given input x_i, the original model outputs the probability vector f_L(x_i); similarly, f̂_L is the output-probability-vector mapping function of the suspicious model: for the same input x_i, the suspicious model outputs the probability vector f̂_L(x_i); KL is the Kullback-Leibler divergence. Compared with the RobD metric, the JSD metric compares the difference between the probability distributions output by the original model and the suspicious model at a finer granularity.
b. The suspicious model is verified using the adversarial sample fingerprints generated in step 1, and the corresponding metric differences are calculated; the smaller the metric difference, the higher the similarity between the suspicious model and the original model, and the more likely that stealing has occurred.
c. The distance metrics can be extended based on other model attributes, such as fairness, to characterize the original model more comprehensively and provide a fuller basis for the final judgment of whether stealing has occurred.
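The two distance metrics of item a can be sketched as follows. This is a reconstruction from the textual definitions above (the patent's own formula images are not reproduced here), so treat the details as assumptions:

```python
import numpy as np

def robd(y_ref, pred_orig, pred_susp):
    """Robustness distance: mean absolute difference of the two models'
    label-match indicators over the fingerprint set."""
    a = (np.asarray(pred_orig) == np.asarray(y_ref)).astype(float)
    b = (np.asarray(pred_susp) == np.asarray(y_ref)).astype(float)
    return float(np.mean(np.abs(a - b)))

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence with a small epsilon for stability."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def jsd(probs_orig, probs_susp):
    """Mean Jensen-Shannon divergence between the two models' output
    probability vectors over the fingerprint set."""
    total = 0.0
    for p, q in zip(probs_orig, probs_susp):
        m = 0.5 * (np.asarray(p) + np.asarray(q))
        total += 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return total / len(probs_orig)

y_ref     = [0, 1, 2, 1]            # reference labels of 4 fingerprints
pred_orig = [0, 1, 2, 1]            # original model: matches all 4
pred_susp = [0, 1, 0, 1]            # suspect: disagrees on one
print(robd(y_ref, pred_orig, pred_susp))   # -> 0.25
print(jsd([[0.9, 0.1]], [[0.9, 0.1]]))     # identical outputs -> 0.0
```

Both metrics only require the last-layer outputs of the suspect model, matching the black-box setting described above.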
Further, in step 3, the metric differences obtained in step 2 are partitioned by thresholds; data analysis can be performed according to the actual application requirements, and the thresholds determined dynamically. For the RobD and JSD distance metrics, the following method can be adopted:
A group of reference models with the same structure (24 by default) is trained from random initial starting points using the training set of the original model. Based on the adversarial sample fingerprint set T, the lower limits of the 95% confidence intervals of the RobD and JSD metric values over this group of reference models are obtained by t-test, and denoted LB_RobD and LB_JSD. The thresholds τ_RobD and τ_JSD corresponding to the two metrics are computed as:
τ_RobD = LB_RobD · α
τ_JSD = LB_JSD · α
where α is a dynamic threshold coefficient, 0.9 by default, which can be adjusted according to actual application requirements.
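The threshold computation can be sketched as below. The one-sided 95% t critical value for 23 degrees of freedom is hardcoded to keep the example dependency-free, and the reference metric values are invented for illustration:

```python
import statistics

# One-sided 95% critical value of Student's t for df = 23 (24 reference
# models), hardcoded to keep the sketch dependency-free.
T_CRIT_95_DF23 = 1.714

def metric_threshold(ref_values, alpha=0.9, t_crit=T_CRIT_95_DF23):
    """tau = alpha * LB, where LB is the lower limit of the 95%
    confidence interval of the metric over the reference models."""
    n = len(ref_values)
    sem = statistics.stdev(ref_values) / n ** 0.5     # standard error
    lower_bound = statistics.mean(ref_values) - t_crit * sem
    return alpha * lower_bound

# Illustrative RobD values of 24 independently trained reference models:
# independent models disagree heavily with the original on fingerprints,
# so their RobD values are large.
ref_robd = [0.52, 0.48, 0.55, 0.50] * 6
tau_robd = metric_threshold(ref_robd)
print(round(tau_robd, 3))  # ~0.453
```

A suspect model whose RobD falls well below this threshold behaves far more like the original than any independently trained model does.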
Further, in step 3, a voting mechanism is adopted to make the final determination of whether model stealing has occurred: when all metric values of the suspicious model are smaller than their respective thresholds, model stealing is judged to have occurred; when all metric values of the suspicious model are larger than their respective thresholds, it is judged not to have occurred; in the remaining cases, where the metrics disagree, model stealing is judged to have possibly occurred, and follow-up analysis is required.
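The voting mechanism reduces to a small three-way decision function; the metric values and thresholds below are illustrative only:

```python
def vote(metric_values, thresholds):
    """Voting over the per-metric comparisons:
    'stolen'     -> every metric falls below its threshold;
    'not stolen' -> every metric lies above its threshold;
    'suspicious' -> the metrics disagree, follow-up analysis needed."""
    below = [m < t for m, t in zip(metric_values, thresholds)]
    if all(below):
        return "stolen"
    if not any(below):
        return "not stolen"
    return "suspicious"

# Illustrative (RobD, JSD) values against thresholds (tau_RobD, tau_JSD).
print(vote([0.01, 0.02], [0.45, 0.30]))  # -> stolen
print(vote([0.60, 0.50], [0.45, 0.30]))  # -> not stolen
print(vote([0.01, 0.50], [0.45, 0.30]))  # -> suspicious
```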
Further, in the event that the adversarial sample fingerprint set T is exposed, the effectiveness of the protection method can be restored by replacing the seeds.
Compared with existing deep learning model watermarking techniques, the method has the following advantages:
1) it does not intervene in the normal training of the model, so it causes no additional accuracy loss;
2) adversarial sample fingerprint generation and verification are efficient, with low computational cost;
3) it is highly flexible, requiring only the final-layer output of the suspicious model for comparison;
4) it is robust against various attack modes, such as model fine-tuning and pruning;
5) it has strong universality and extensibility: new difference metrics and fingerprint-set generation methods can be incorporated into the framework to provide a more comprehensive model characterization and improve the accuracy of the final judgment.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention;
FIG. 2 is a schematic diagram of the method's use of adversarial samples as model fingerprints;
FIG. 3 is a ROC curve (each metric computed independently) for model-stealing determination, using the CIFAR-10 dataset and the ResNet-20 model structure as an example;
FIG. 4 is a comparison of the inventive method with existing black-box watermarking methods.
Detailed Description
The invention will be further explained with reference to the drawings.
The basic architecture of the embodiment of the invention is shown in FIG. 1. Given an original model (Victim Model) and part of its training data set, the method can automatically select seeds, generate an adversarial sample fingerprint set (Fingerprints), calculate the metric differences between a suspicious model (Suspect Model) and the original model based on the output of the last model layer, and give a final judgment on whether the suspicious model is stolen. All steps are implemented as function APIs, based on the Python language and the TensorFlow deep learning framework. The method comprises the following four main function interfaces:
the method of seed selection: and selecting the seeds with high priority based on the original model and the training set.
Finger print Generation method: a set of challenge sample fingerprints is generated based on the selected seeds.
The meticmeasuerement method: and calculating index difference degrees (RobD, JSD and the like) of the suspicious model and the original model based on the confrontation sample fingerprint set.
The finalJudge process: and finally judging based on the index difference degree result and a voting mechanism.
By calling these APIs, an adversarial sample fingerprint set can be generated automatically for the original model, and the metric differences and final judgment computed efficiently. The RobD, JSD, and other metric differences between the suspicious model and the original model are calculated from the generated adversarial sample fingerprints, providing the basis for the final judgment. FIG. 2 illustrates the basic principle of using adversarial samples as fingerprints of the original model's decision boundary. Stolen model copies are derived from the original model, so they share similar decision boundaries with it; non-stolen models (i.e., independently trained models) are trained from scratch with different data or different starting points, and therefore overlap less with the original model's decision boundaries. The adversarial sample fingerprints are therefore well suited to characterizing the original model and can provide an accurate basis for the final judgment of whether model stealing has occurred.
Example: 1 original model is trained on the CIFAR-10 dataset with the ResNet-20 model structure, and 24 negative model samples (non-stolen models) are obtained using different training settings; based on the original model, 30 positive model samples (stolen models) are obtained using different model-stealing means (fine-tuning, pruning, etc.). Using the fingerprint set generated from the original model, the metrics are computed for all 54 samples and the ROC curve is plotted, as shown in FIG. 3, with AUC = 1, i.e. a perfect classifier. The results show that the method correctly identifies all stolen models without misjudgment, i.e. no non-stolen model is mistakenly identified as stolen. Accordingly, the invention can verify the copyright of a deep learning model.
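The AUC reported for FIG. 3 can be reproduced in miniature with the rank (Mann-Whitney) formulation of AUC; the scores below are invented for illustration, since the patent does not disclose the per-model metric values:

```python
def auc_from_scores(neg, pos):
    """AUC via the rank formulation: the probability that a randomly
    drawn positive (stolen) model scores above a randomly drawn
    negative (independently trained) one; ties count 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for n in neg for p in pos)
    return wins / (len(neg) * len(pos))

# Similarity scores (e.g. -RobD): stolen copies sit near 0, while
# independent models lie far below.  Values invented for illustration.
neg_scores = [-0.50, -0.48, -0.55]   # non-stolen reference models
pos_scores = [-0.01, -0.03, -0.00]   # stolen derivatives
print(auc_from_scores(neg_scores, pos_scores))  # -> 1.0
```

An AUC of 1.0 means every stolen model is separated from every non-stolen model, matching the perfect classification reported in the example.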
The applicant compared the method of the present invention with black-box watermarking methods, as shown in FIG. 4. The method clearly separates the stolen models (under five stealing methods) from the non-stolen models, with a larger average margin than the watermarking methods, improving the accuracy and robustness of the final judgment of model-stealing behavior. Moreover, unlike black-box watermarking, the method does not need to interfere with the model's training process and is thus more flexible.
The above-described embodiments are intended to illustrate rather than limit the invention; any modifications and variations of the present invention fall within the spirit of the invention and the scope of the appended claims.
Claims (9)
1. A black-box deep learning model copyright protection method based on adversarial sample fingerprints, characterized by comprising the following steps:
Step 1: based on the deep learning model (original model) requiring copyright protection, selecting representative seeds from the training set using a confidence-priority strategy, and generating a unique adversarial sample fingerprint set with an adversarial attack method;
Step 2: performing fingerprint matching on a suspicious model with the same functionality: taking the adversarial sample fingerprints generated in step 1 as input, obtaining the black-box output of the suspicious model, and calculating the metric differences between the suspicious model and the original model;
Step 3: judging whether model stealing has occurred based on the metric differences: if the metric differences are smaller than the set thresholds, the suspicious model shares a similar decision boundary with the original model and is likely a derivative of it, so model stealing is judged to have occurred; otherwise, it is judged not to have occurred.
2. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 1, wherein in step 1, based on the probability vectors output by the original model on the training set, a 2-norm-based Gini coefficient is calculated, and the subset of samples with the largest Gini coefficients is selected as seeds.
3. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 1, wherein in step 1, for each seed sample, a corresponding adversarial sample is generated using an untargeted projected gradient descent attack, and the generated adversarial sample and its corresponding reference label are saved, yielding the adversarial sample fingerprint set.
4. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 1, wherein in step 2, RobD and JSD distance metrics are designed based on the robustness property of the deep learning model; the suspicious model is verified using the adversarial sample fingerprints generated in step 1, and the corresponding metric differences are calculated, a smaller metric difference indicating a higher similarity between the suspicious model and the original model;
RobD(T, f, f̂) = (1/|T|) Σ_{(x′_i, y_i)∈T} |1[f(x′_i) = y_i] − 1[f̂(x′_i) = y_i]|
where f is the label-mapping function of the original model: for a given input x_i, the original model outputs the predicted label f(x_i); f̂ is the label-mapping function of the suspicious model: for the same input x_i, the suspicious model outputs the predicted label f̂(x_i); T = {(x′_1, y_1), (x′_2, y_2), …} is the adversarial sample fingerprint set generated from the original model, where x′_i is an adversarial sample and y_i is its reference label; 1[·] is the Boolean indicator function, returning 1 when the condition f(x′_i) = y_i holds and 0 otherwise;
JSD(T, f_L, f̂_L) = (1/|T|) Σ_{x′_i∈T} ((1/2)·KL(f_L(x′_i) ‖ m_i) + (1/2)·KL(f̂_L(x′_i) ‖ m_i)), where m_i = (f_L(x′_i) + f̂_L(x′_i))/2
where f_L is the output-probability-vector mapping function of the original model: for a given input x_i, the original model outputs the probability vector f_L(x_i); f̂_L is the output-probability-vector mapping function of the suspicious model: for the same input x_i, the suspicious model outputs the probability vector f̂_L(x_i); KL is the Kullback-Leibler divergence.
5. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 4, wherein the metrics can be extended based on other model attributes, so as to characterize the original model more comprehensively and provide a fuller basis for the final judgment of whether stealing has occurred.
6. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 1, wherein in step 3, the metric differences obtained in step 2 are partitioned by thresholds; data analysis can be performed according to the actual application requirements, and the thresholds determined dynamically.
7. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to claim 4, wherein in step 3, a group of reference models with the same structure is trained from random initial starting points using the training set of the original model; based on the adversarial sample fingerprint set T, the lower limits of the 95% confidence intervals of the RobD and JSD metric values over this group of reference models are obtained by t-test and denoted LB_RobD and LB_JSD; the thresholds τ_RobD and τ_JSD corresponding to the two metrics are computed as:
τ_RobD = LB_RobD · α
τ_JSD = LB_JSD · α
where α is a dynamic threshold coefficient that can be adjusted according to actual application requirements.
8. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to any one of claims 1-7, wherein in step 3, a voting mechanism is adopted for the final determination of whether model stealing has occurred: when all metric values of the suspicious model are smaller than their respective thresholds, model stealing is judged to have occurred; when all metric values are larger than their respective thresholds, it is judged not to have occurred; in the remaining cases, where the metrics disagree, model stealing is judged to have possibly occurred, and follow-up analysis is required.
9. The black-box deep learning model copyright protection method based on adversarial sample fingerprints according to any one of claims 1-7, wherein in the event that the adversarial sample fingerprint set T is exposed, the effectiveness of the protection method can be restored by replacing the seeds.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111358058.XA CN114254275B (en) | 2021-11-16 | 2021-11-16 | Black-box deep learning model copyright protection method based on adversarial sample fingerprints
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111358058.XA CN114254275B (en) | 2021-11-16 | 2021-11-16 | Black-box deep learning model copyright protection method based on adversarial sample fingerprints
Publications (2)
Publication Number | Publication Date |
---|---|
CN114254275A true CN114254275A (en) | 2022-03-29 |
CN114254275B CN114254275B (en) | 2024-05-28 |
Family
ID=80790966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111358058.XA Active CN114254275B (en) | 2021-11-16 | 2021-11-16 | Black box deep learning model copyright protection method based on antagonism sample fingerprint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114254275B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190370440A1 (en) * | 2018-06-04 | 2019-12-05 | International Business Machines Corporation | Protecting deep learning models using watermarking |
CN110768971A (en) * | 2019-10-16 | 2020-02-07 | 伍军 | Adversarial sample rapid early-warning method and system suitable for artificial intelligence systems
CN111291828A (en) * | 2020-03-03 | 2020-06-16 | 广州大学 | HRRP (high resolution ratio) counterattack method for sample black box based on deep learning |
CN112312541A (en) * | 2020-10-09 | 2021-02-02 | 清华大学 | Wireless positioning method and system |
US20210056404A1 (en) * | 2019-08-20 | 2021-02-25 | International Business Machines Corporation | Cohort Based Adversarial Attack Detection |
WO2021042665A1 (en) * | 2019-09-04 | 2021-03-11 | 笵成科技南京有限公司 | Dnn-based method for protecting passport against fuzzy attack |
US20210157912A1 (en) * | 2019-11-26 | 2021-05-27 | Harman International Industries, Incorporated | Defending machine learning systems from adversarial attacks |
CN113127857A (en) * | 2021-04-16 | 2021-07-16 | 湖南大学 | Deep learning model defense method for adversarial attack and deep learning model |
WO2021159898A1 (en) * | 2020-02-12 | 2021-08-19 | 深圳壹账通智能科技有限公司 | Privacy protection-based deep learning method, system and server, and storage medium |
CN113362217A (en) * | 2021-07-09 | 2021-09-07 | 浙江工业大学 | Deep learning model poisoning defense method based on model watermark |
- 2021-11-16 CN CN202111358058.XA patent/CN114254275B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190370440A1 (en) * | 2018-06-04 | 2019-12-05 | International Business Machines Corporation | Protecting deep learning models using watermarking |
US20210056404A1 (en) * | 2019-08-20 | 2021-02-25 | International Business Machines Corporation | Cohort Based Adversarial Attack Detection |
WO2021042665A1 (en) * | 2019-09-04 | 2021-03-11 | 笵成科技南京有限公司 | Dnn-based method for protecting passport against fuzzy attack |
CN110768971A (en) * | 2019-10-16 | 2020-02-07 | 伍军 | Adversarial sample rapid early-warning method and system suitable for artificial intelligence systems
US20210157912A1 (en) * | 2019-11-26 | 2021-05-27 | Harman International Industries, Incorporated | Defending machine learning systems from adversarial attacks |
WO2021159898A1 (en) * | 2020-02-12 | 2021-08-19 | 深圳壹账通智能科技有限公司 | Privacy protection-based deep learning method, system and server, and storage medium |
CN111291828A (en) * | 2020-03-03 | 2020-06-16 | 广州大学 | HRRP (high resolution ratio) counterattack method for sample black box based on deep learning |
CN112312541A (en) * | 2020-10-09 | 2021-02-02 | 清华大学 | Wireless positioning method and system |
CN113127857A (en) * | 2021-04-16 | 2021-07-16 | 湖南大学 | Deep learning model defense method for adversarial attack and deep learning model |
CN113362217A (en) * | 2021-07-09 | 2021-09-07 | 浙江工业大学 | Deep learning model poisoning defense method based on model watermark |
Non-Patent Citations (2)
Title |
---|
杭杰: "集成对抗性机器学习及其应用研究", 31 December 2020 (2020-12-31) * |
陈慧: "对抗性训练防御学习及其应用研究", 31 December 2019 (2019-12-31) * |
Also Published As
Publication number | Publication date |
---|---|
CN114254275B (en) | 2024-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AprilPyone et al. | Block-wise image transformation with secret key for adversarially robust defense | |
Agarwal et al. | Image transformation-based defense against adversarial perturbation on deep learning models | |
Wu et al. | A novel convolutional neural network for image steganalysis with shared normalization | |
Li et al. | Piracy resistant watermarks for deep neural networks | |
Han et al. | Content-based image authentication: current status, issues, and challenges | |
Kanwal et al. | Detection of digital image forgery using fast fourier transform and local features | |
Li et al. | Tamper detection and self-recovery of biometric images using salient region-based authentication watermarking scheme | |
JP7140317B2 (en) | Method for learning data embedding network that generates marked data by synthesizing original data and mark data, method for testing, and learning device using the same | |
Ye et al. | Detection defense against adversarial attacks with saliency map | |
Joshi et al. | A multiple reversible watermarking technique for fingerprint authentication | |
CN115168210A (en) | Robust watermark forgetting verification method based on confrontation samples in black box scene in federated learning | |
Vatsa et al. | Digital watermarking based secure multimodal biometric system | |
Liu et al. | Data protection in palmprint recognition via dynamic random invisible watermark embedding | |
Ouyang et al. | A semi-fragile watermarking tamper localization method based on QDFT and multi-view fusion | |
Ahmad et al. | A novel image tamper detection approach by blending forensic tools and optimized CNN: Sealion customized firefly algorithm | |
Inamdar et al. | Offline handwritten signature based blind biometric watermarking and authentication technique using biorthogonal wavelet transform | |
Stamm et al. | Anti-forensic attacks using generative adversarial networks | |
WO2023093346A1 (en) | Exogenous feature-based model ownership verification method and apparatus | |
Chen et al. | A study on the photo response non-uniformity noise pattern based image forensics in real-world applications | |
CN112907431A (en) | Steganalysis method for resisting steganography robustness | |
Boroumand et al. | Boosting steganalysis with explicit feature maps | |
CN110163163B (en) | Defense method and defense device for single face query frequency limited attack | |
CN114254275B (en) | 2024-05-28 | Black-box deep learning model copyright protection method based on adversarial sample fingerprints | |
Conotter | Active and passive multimedia forensics | |
CN114254274B (en) | White-box deep learning model copyright protection method based on neuron output |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||