CN112269988A - Dynamic defense method, system, medium, equipment and application of model extraction attack - Google Patents
- Publication number: CN112269988A
- Application number: CN202011030540.6A
- Authority: CN (China)
- Prior art keywords: model, information leakage, privacy, leakage degree, request
- Prior art date: 2020-09-27
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention belongs to the technical field of network security and discloses a dynamic defense method, system, medium, device and application for model extraction attacks: an intelligent model to be protected is deployed online using MLaaS; a differential privacy technique is introduced, a privacy budget is set and applied to the model; after receiving a request, the model generates a normal response and the model information leakage degree is initialized; noise generated with the differential privacy technique perturbs the model response to produce a noisy reply; the requests received by the model and the noisy replies it gives are monitored, and the noisy replies are compared with the training data set to calculate the information leakage degree of the model caused by receiving the request; an accumulated value of the model information leakage degree is calculated; the leakage degree is substituted into an adaptive allocation algorithm for the privacy budget; and the newly calculated privacy budget is fed back to the differential privacy technique. The invention can adaptively adjust the added noise according to the model information leakage degree, ensuring the security of the model while improving its performance.
Description
Technical Field
The invention belongs to the technical field of network security and particularly relates to a dynamic defense method, system, medium, device and application for model extraction attacks.
Background
At present, with the rapid development of artificial intelligence technology, more and more production scenarios depend on the predictions and judgments of intelligent models, giving rise to Machine Learning as a Service (MLaaS), through which users can apply state-of-the-art intelligent services, including data analysis, language processing and security detection, on a pay-per-query basis. Such intelligent services generally require the model provider to train and optimize parameters on large data sets in advance; these data sets are often highly specific or involve business secrets, and the optimized models are therefore highly privacy-sensitive, so MLaaS magnifies privacy problems even as it brings convenience to production. Model extraction attacks can successfully attack unprotected machine learning models, with striking effect and a very wide range of application. By disguising itself as a normal user sending query requests, this kind of attack can effectively infer the internal structure of the model and steal the deployed model, thereby avoiding paid use. Worse, if the stolen model is used for security detection, such as detecting malware or malicious traffic, an attacker can exploit the stolen model parameters to evade detection and cause greater harm. The privacy and security problems brought by intelligent models are therefore increasingly prominent, and protecting unprotected models is of great importance to network security.
At present, there are mainly two types of defense schemes against this attack: schemes based on model perturbation and schemes based on query-request monitoring. Model-perturbation schemes add perturbation when the model returns a result, so that an attacker cannot obtain accurate prediction results from which to infer the model. Currently, the only technique that protects the model from being stolen while keeping the overall accuracy of the perturbed results from dropping too much is differential privacy, which proves the effectiveness of its privacy protection through rigorous mathematical definitions and provides controllable noise with assessable performance loss, making it well suited to resisting model extraction attacks. However, existing schemes based on differential privacy protection lack an effective way to allocate the privacy budget, so their protection is very limited in practice. For example, a fixed privacy budget defends well at the beginning, but as the number of attack queries grows, an attacker can use the law of large numbers, statistics and similar knowledge to cancel out the influence of the noise and recover the true results; the defense effect of such a scheme eventually degrades noticeably, leaving the model insufficiently protected. Request-monitoring schemes monitor the requests received by the model in real time and stop providing prediction service once the amount of information leaked through queries reaches a preset range. However, such schemes do not analyze the correlation between queries and the stolen information, so they do not fundamentally remove the possibility of model extraction attacks, and their effect is limited by the monitoring coverage and feedback speed in a given scenario. For example, the common proxy-model scheme has large detection delay: when the target model is being queried at high speed, the degree to which the proxy model has been extracted cannot accurately reflect the degree to which the target model has been attacked. In addition, how to set reasonable query thresholds in different scenarios is also a key problem; for example, existing attacks can finish stealing a simple model before reaching the conventional detection threshold.
Through the above analysis, the problems and defects of the prior art are as follows:
(1) Existing schemes based on differential privacy protection lack an effective privacy budget allocation mode, so their protection effect is very limited in practical use.
(2) The prior art does not analyze the correlation between queries and the stolen information and cannot fundamentally eliminate the possibility of model extraction attacks; its effect is strongly limited by the monitoring coverage and feedback speed of the specific scenario, and setting reasonable query thresholds for different scenarios remains a key open problem.
The difficulty in solving the above problems and defects is as follows: when a machine learning model faces a model extraction attack, it cannot be defended effectively at the fundamental theoretical level, and this security risk also endangers the various AI applications built on such models, such as MLaaS. At present, no effective scheme can accurately reflect the extent to which a model has been extracted, and there is no principled method for perturbing the model's returned results, which greatly limits the security and usability of current model deployments. How to defend against model extraction attacks across the different usage scenarios of AI remains an open problem.
The significance of solving the above problems and defects is as follows: in recent years, the development of big data and cloud computing and innovations in machine learning algorithms have enabled artificial intelligence technology to be widely applied, generating great economic and social benefits. However, machine learning algorithms were not designed with security threats in mind, so they are highly vulnerable to malicious attacks such as model extraction, and the potential harm is especially large in key fields such as industry, medical care, transportation and surveillance: a maliciously attacked model causes property loss at best and threatens personal safety at worst. Improving the protection capability of models therefore helps AI technology defend against malicious attacks, enhances the robustness of AI products and services, minimizes risk, ensures the safe, controllable and reliable development of artificial intelligence, and promotes the adoption of intelligent services.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a dynamic defense method, a system, a medium, equipment and application of model extraction attack.
The invention is realized in such a way that a dynamic defense method for model extraction attack comprises the following steps:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
Further, a differential privacy technique is introduced, the privacy budget is set to ε = 1, and the budget is applied to the model.
Further, after the model receives a request, which may include attack queries, a normal response y is generated, and the model information leakage degree L at this time is set to 0.
Further, noise generated with the differential privacy technique is used to perturb the model response y, producing a noisy reply.
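The concrete noise formula is not reproduced in this text; purely as an illustration, the following is a minimal sketch assuming the standard Laplace mechanism of differential privacy is used to generate the noise. The function name and the sensitivity parameter are assumptions of the sketch, not elements of the invention:

```python
import numpy as np

def laplace_noisy_reply(y, epsilon, sensitivity=1.0):
    """Perturb a model response y with Laplace noise calibrated to the
    privacy budget epsilon: the smaller epsilon is, the larger the added
    noise and the stronger the protection (assumed Laplace mechanism)."""
    scale = sensitivity / epsilon  # standard Laplace-mechanism calibration
    return y + np.random.laplace(loc=0.0, scale=scale, size=np.shape(y))
```

For a classification service that returns confidence scores, y would be the score vector, and the perturbed vector would typically be re-normalized before being returned.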
further, the request received by the monitoring model and the given noisy reply are compared with the training data set to calculate the information leakage degree L of the model caused by receiving the requesty;
Calculating an accumulated value of model information leakage degree:
L = L + L_y;
the information leakage degree is then substituted into the adaptive allocation algorithm of the privacy budget, which yields a new privacy budget; here L_t is the maximum degree of model information leakage acceptable to the user, and p is a range parameter of the current deployment environment.
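The allocation formula itself appears only in fig. 6 and is not reproduced in this text, but the behavior it describes — the budget shrinks, and hence the added noise grows without bound, as L approaches L_t — can be illustrated with the following sketch. The concrete functional form, the function name and the eps_max default are assumptions of the sketch:

```python
def allocate_privacy_budget(L, L_t, p, eps_max=1.0):
    """Hypothetical adaptive allocation: the returned privacy budget
    decays toward 0 as the accumulated leakage L approaches the
    user-tolerable maximum L_t, so the noise scale (sensitivity/epsilon)
    grows without bound near the limit."""
    if L >= L_t:
        return 0.0  # tolerance exhausted: maximal perturbation / stop serving
    return eps_max * (1.0 - L / L_t) ** p  # p shapes how fast the budget decays
```

With p > 1 the budget stays close to eps_max while the leakage is small and collapses sharply near L_t, which matches the stated advantage over adding a fixed perturbation.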
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
It is another object of the present invention to provide a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
Another object of the present invention is to provide an information data processing terminal, which is used for implementing the dynamic defense method for model extraction attack.
Another object of the present invention is to provide a dynamic defense system against model extraction attack, which implements the dynamic defense method against model extraction attack, the dynamic defense system against model extraction attack comprising:
the intelligent model deployment module is used for deploying an intelligent model to be protected on line by using MLaaS;
the differential privacy application module is used for introducing a differential privacy technology, setting a privacy budget and applying the privacy budget to the model;
the normal response module is used for generating a normal response after the model receives the request and simultaneously setting the model information leakage degree at the moment;
the noisy reply module is used for generating noise by using a differential privacy technology to disturb the model response and generate a noisy reply;
the information leakage degree comparison module is used for monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
the accumulated value calculation module is used for calculating the accumulated value of the model information leakage degree;
the adaptive distribution module is used for substituting the information leakage degree into an adaptive distribution algorithm of the privacy budget;
the differential privacy input module is used for inputting the calculated new privacy budget to the differential privacy technology;
and the query request module is used for continuing dynamic defense after receiving a new query.
Another object of the present invention is to provide a network security terminal equipped with the dynamic defense system for model extraction attack.
Combining all the above technical schemes, the invention has the following advantages and positive effects: the method protects the machine learning model from being attacked and stolen by an adversary and improves the security of MLaaS. It adopts a dynamic differential privacy protection technique and introduces it into MLaaS to resist model extraction attacks, which improves the security of the model and overcomes the weak protection capability of differential privacy in the prior art. At the same time, the scheme has a wide application range and an outstanding protection effect.
The invention provides an adaptive allocation algorithm for the privacy budget that adjusts the added noise according to the model information leakage degree: the added noise is amplified ever more strongly, without bound, as the leakage approaches the limit the user can tolerate. Compared with adding a fixed perturbation, this improves model performance while ensuring model security.
The method is used for guiding the differential privacy technology to dynamically add noise to the artificial intelligence model and helping the artificial intelligence model to effectively resist model extraction attack. Meanwhile, the invention can maximize the model performance while protecting the model privacy, and has strong practical application value.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a dynamic defense method for model extraction attack according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a dynamic defense system for model extraction attack according to an embodiment of the present invention;
in fig. 2: 1. an intelligent model deployment module; 2. a differential privacy application module; 3. a normal response module; 4. a noisy reply module; 5. an information leakage degree comparison module; 6. an accumulated value calculation module; 7. an adaptive allocation module; 8. a differential privacy input module; 9. a query request module.
FIG. 3 is a comparison graph of defense effects of various schemes in a logistic regression model according to embodiments of the present invention.
Fig. 4 is a diagram comparing defense effects of various schemes in a neural network model according to embodiments of the present invention.
Fig. 5 is a graph of model performance effect under protection of the proposed scheme provided by the embodiment of the present invention.
Fig. 6 is a diagram of a privacy budget allocation function provided by an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a dynamic defense method, a system, a medium, equipment and an application of model extraction attack, and the invention is described in detail with reference to the attached drawings.
As shown in fig. 1, the dynamic defense method for model extraction attack provided by the present invention includes the following steps:
s101: deploying an intelligent model to be protected on line by using MLaaS;
s102: introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
s103: after the model receives a request (which may include attack queries), a normal response is generated, and the model information leakage degree at this time is set;
s104: generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
s105: monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
s106: calculating an accumulated value of model information leakage degree;
s107: substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
s108: inputting the calculated new privacy budget to a differential privacy technology;
s109: the above process continues after a new query is received. The noise added in the model reply changes adaptively along with the increase of the attack times, and finally, the model is effectively protected from information stealing.
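To make the flow of S101 to S109 concrete, the following is a minimal sketch of the whole defense loop under the same assumptions as the two sketches above (it reuses laplace_noisy_reply and allocate_privacy_budget); the query source, the per-query leakage estimator and all names here are illustrative, since the invention does not prescribe them:

```python
def dynamic_defense_loop(predict, next_query, estimate_leakage,
                         L_t=100.0, p=2.0, eps0=1.0):
    """Serve queries with adaptively perturbed replies (steps S103-S109).

    predict          -- prediction function of the protected model (S101)
    next_query       -- returns the next request, or None when idle (S103)
    estimate_leakage -- per-query leakage L_y obtained by comparing the
                        noisy reply with the training data set (S105)
    """
    L, eps = 0.0, eps0                         # S102: initial budget; leakage starts at 0
    while eps > 0.0:                           # stop once the budget is exhausted
        x = next_query()
        if x is None:
            break
        y = predict(x)                         # S103: normal response
        y_noisy = laplace_noisy_reply(y, eps)  # S104: noisy reply
        yield y_noisy                          # the reply is given before monitoring
        L += estimate_leakage(x, y_noisy)      # S105/S106: accumulate leakage
        eps = allocate_privacy_budget(L, L_t, p, eps_max=eps0)  # S107/S108
```

In a deployment, next_query would be bound to the MLaaS request queue; when the loop terminates the service refuses further predictions, so the noise added to replies adapts automatically as attack queries accumulate.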
Those skilled in the art can also implement the dynamic defense method for model extraction attack by using other steps, and the dynamic defense method for model extraction attack provided by the present invention in fig. 1 is only a specific embodiment.
As shown in fig. 2, the dynamic defense system for model extraction attack provided by the present invention includes:
the intelligent model deployment module 1 is used for deploying an intelligent model to be protected on line by using MLaaS;
the differential privacy application module 2 is used for introducing a differential privacy technology, setting a privacy budget and applying the privacy budget to the model;
the normal response module 3 is used for generating a normal response after the model receives a request (which may include attack queries), and setting the model information leakage degree at this time;
the noisy reply module 4 is used for generating noise by using a differential privacy technology to disturb the model response and generate a noisy reply;
the information leakage degree comparison module 5 is used for monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
an accumulated value calculation module 6, which is used for calculating the accumulated value of the model information leakage degree;
an adaptive allocation module 7, configured to substitute the information leakage degree into an adaptive allocation algorithm of the privacy budget;
the differential privacy input module 8 is used for inputting the calculated new privacy budget to the differential privacy technology;
and the query request module 9 is used for continuing dynamic defense after receiving a new query.
The dynamic defense method for the model extraction attack specifically comprises the following steps:
(1) and using MLaaS to deploy an intelligent model to be protected on line.
(2) A differential privacy technique is introduced, the privacy budget is set to ε = 1, and the budget is applied to the model. At this point the model has preliminary protection capability, but still cannot resist a large number of model extraction attacks.
(3) When the model receives a request (which may include attack queries), a normal response y is generated, and the model information leakage degree L is set to 0.
(5) The requests received by the model and the noisy replies given are monitored, and the noisy replies are compared with the training data set to calculate the information leakage degree L_y of the model caused by receiving the request.
(6) Calculating an accumulated value of model information leakage degree:
L = L + L_y;
(7) The information leakage degree is substituted into the adaptive allocation algorithm of the privacy budget, which yields a new privacy budget; here L_t is the maximum degree of model information leakage acceptable to the user, and p is a range parameter of the current deployment environment. The function of the specific allocation algorithm is plotted in fig. 6.
(9) The above process continues after a new query is received. The noise added in the model reply changes adaptively along with the increase of the attack times, and finally, the model is effectively protected from information stealing.
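Step (5) compares the noisy reply with the training data set but leaves the leakage metric unspecified; purely as an illustration, the following hypothetical estimator takes L_y as the agreement between the noisy reply and the label of the training point nearest to the query — the closer the perturbed answer is to the ground truth the model was fitted on, the more the reply is assumed to leak. It could be passed as estimate_leakage to the loop sketch above:

```python
import numpy as np

def estimate_leakage(x, y_noisy, train_X, train_y):
    """Hypothetical per-query leakage L_y in (0, 1]: similarity between
    the noisy reply and the label of the training point nearest to the
    query x (the invention does not specify the concrete metric)."""
    i = int(np.argmin(np.linalg.norm(train_X - x, axis=1)))  # nearest training point
    gap = np.abs(np.asarray(y_noisy, dtype=float) - train_y[i]).sum()
    return float(np.exp(-gap))  # 1.0 when the reply matches the label exactly
```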
The technical effects of the present invention will be described in detail with reference to experiments.
Before implementing the proposed scheme, the practical scenario to which the invention applies is constructed. Two common machine learning models of different types, acting in different scenarios, are deployed as MLaaS: Logistic Regression (LR) and a Neural Network (NN). Four data sets of different types, suited to different scenarios and with different feature dimensions, are selected, as shown in table 1.
TABLE 1
SocialAds concerns publishing advertisements in a social network, where a model must judge whether a customer is willing to buy the product; Titanic requires a model to determine whether passengers on the ship would be rescued; Emailspam requires the model to judge whether a mail is spam; and Mushrooms requires judging whether mushrooms are edible. A model extraction attack is then reproduced; its effect is evident, proving that the existence of this attack poses a huge threat to MLaaS. The dynamic defense method proposed by this scheme is then deployed for testing, and the state-of-the-art model defense technologies mentioned in the Background are added to the experiment for comparison; the results are shown in fig. 3 and fig. 4.
Meanwhile, the actual performance of the model protected by the proposed scheme is tested; the result is shown in fig. 5. The independent variable ε is the privacy budget parameter: the smaller its value, the larger the added noise and the stronger the protection of the model. The upper two graphs in fig. 5 show that the actual accuracy of the protected model can be kept in a good range (around 80%), and the lower two graphs show that the model's attack resistance remains good (around 60%).
As shown in fig. 3, logistic regression models (same model algorithm, different parameters, in line with practical situations) are trained on the 4 data sets of different types; the ordinate represents the degree to which the logistic regression model has been extracted, and the abscissa r represents the number of attack queries sent by the attacker. Compared with the two commonly used protection schemes RC and BDPL, the scheme MDP proposed by the invention protects the logistic regression model to the greatest extent, and the extraction degree of the model stays far below that under the two baseline schemes.
As shown in fig. 4, neural network models (same model algorithm, different parameters, in line with practical situations) are trained on the 4 data sets of different types; the ordinate represents the degree to which the neural network model has been extracted, and the abscissa r represents the number of attack queries sent by the attacker. Compared with the two commonly used protection schemes RC and BDPL, the scheme MDP proposed by the invention protects the neural network model to the greatest extent, and the extraction degree of the model stays far below that under the two baseline schemes.
As shown in fig. 5, the upper-left graph shows the logistic regression model deployed on the 4 data sets of different types, with the ordinate giving the accuracy of the model under MDP protection; the lower-left graph shows the same model, with the ordinate giving its defense performance under MDP. The upper-right and lower-right graphs show the corresponding accuracy and defense performance of the neural network model. The abscissa ε of all four graphs represents different parameter choices of the proposed MDP. The proposed MDP protects the model to the greatest extent while affecting its original accuracy minimally, and effectively resists model extraction attacks.
As shown in fig. 6, the allocation function of the privacy budget parameter ε used by the proposed MDP mechanism when protecting the model is plotted as the information leakage degree on the abscissa increases.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portion may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code provided, for example, on a carrier medium such as a diskette, CD- or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of the hardware circuits and software, e.g., firmware.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any modification, equivalent replacement, and improvement made by those skilled in the art within the technical scope of the present invention disclosed in the present invention should be covered within the scope of the present invention.
Claims (10)
1. A dynamic defense method for model extraction attack is characterized in that the dynamic defense method for the model extraction attack comprises the following steps:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
3. The dynamic defense method for the model extraction attack as claimed in claim 1, wherein after the model receives a request, which may include attack queries, a normal response y is generated, and the model information leakage degree L at this time is set to 0.
5. The method as claimed in claim 1, wherein the requests received by the model and the noisy replies given are monitored, and the noisy replies are compared with the training data set to calculate the information leakage degree L_y of the model caused by receiving the requests;
Calculating an accumulated value of model information leakage degree:
L = L + L_y;
the information leakage degree is then substituted into the adaptive allocation algorithm of the privacy budget, where L_t is the maximum degree of model information leakage acceptable to the user, and p is a range parameter of the current deployment environment.
6. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
7. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
deploying an intelligent model to be protected on line by using MLaaS;
introducing a differential privacy technology, setting a privacy budget, and applying the privacy budget to a model;
after receiving the request, the model generates a normal response and sets the model information leakage degree;
generating noise by using a differential privacy technology to disturb the model response and generate a response with noise;
monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
calculating an accumulated value of model information leakage degree;
substituting the information leakage degree into an adaptive allocation algorithm of the privacy budget;
and inputting the calculated new privacy budget to the differential privacy technology.
8. An information data processing terminal, characterized in that the information data processing terminal is used for realizing the dynamic defense method for the model extraction attack according to any one of claims 1 to 5.
9. A dynamic defense system against model extraction attack, which implements the dynamic defense method against model extraction attack according to any one of claims 1 to 5, characterized in that the dynamic defense system against model extraction attack comprises:
the intelligent model deployment module is used for deploying an intelligent model to be protected on line by using MLaaS;
the differential privacy application module is used for introducing a differential privacy technology, setting a privacy budget and applying the privacy budget to the model;
the normal response module is used for generating a normal response after the model receives the request and simultaneously setting the model information leakage degree at the moment;
the noisy reply module is used for generating noise by using a differential privacy technology to disturb the model response and generate a noisy reply;
the information leakage degree comparison module is used for monitoring the request received by the model and the given noisy reply, and comparing the noisy reply with the training data set to calculate the information leakage degree of the model caused by receiving the request;
the accumulated value calculation module is used for calculating the accumulated value of the model information leakage degree;
the adaptive distribution module is used for substituting the information leakage degree into an adaptive distribution algorithm of the privacy budget;
the differential privacy input module is used for inputting the calculated new privacy budget to the differential privacy technology;
and the query request module is used for continuing dynamic defense after receiving a new query.
10. A network security terminal, wherein the network security terminal mounts the dynamic defense system for the model extraction attack according to claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011030540.6A CN112269988B (en) | 2020-09-27 | 2020-09-27 | Dynamic defense method, system, medium, equipment and application of model extraction attack |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112269988A true CN112269988A (en) | 2021-01-26 |
CN112269988B CN112269988B (en) | 2022-10-04 |
Family
ID=74348630
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011030540.6A Active CN112269988B (en) | 2020-09-27 | 2020-09-27 | Dynamic defense method, system, medium, equipment and application of model extraction attack |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112269988B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190227980A1 (en) * | 2018-01-22 | 2019-07-25 | Google Llc | Training User-Level Differentially Private Machine-Learned Models |
CN108763954A (en) * | 2018-05-17 | 2018-11-06 | 西安电子科技大学 | Linear regression model (LRM) multidimensional difference of Gaussian method for secret protection, information safety system |
CN109934004A (en) * | 2019-03-14 | 2019-06-25 | 中国科学技术大学 | The method of privacy is protected in a kind of machine learning service system |
Non-Patent Citations (3)
Title |
---|
F. Tramèr et al.: "Stealing machine learning models via prediction APIs", Proc. 25th USENIX Security Symposium *
Liu Ruixuan et al.: "Privacy attacks and defenses in machine learning", Journal of Software *
Li Xiaoguang et al.: "A survey of differential privacy", Journal of Cyber Security *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112818400A (en) * | 2021-02-18 | 2021-05-18 | 支付宝(杭州)信息技术有限公司 | Biological identification method, device and equipment based on privacy protection |
CN112818400B (en) * | 2021-02-18 | 2022-05-03 | 支付宝(杭州)信息技术有限公司 | Biological identification method, device and equipment based on privacy protection |
Also Published As
Publication number | Publication date |
---|---|
CN112269988B (en) | 2022-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2019210493B2 (en) | Anomaly detection to identify coordinated group attacks in computer networks | |
US10929533B2 (en) | System and method of identifying malicious files using a learning model trained on a malicious file | |
US20180189697A1 (en) | Methods and apparatus for processing threat metrics to determine a risk of loss due to the compromise of an organization asset | |
US10104112B2 (en) | Rating threat submitter | |
Yan et al. | Monitoring-based differential privacy mechanism against query flooding-based model extraction attack | |
Rahim et al. | Detecting the Phishing Attack Using Collaborative Approach and Secure Login through Dynamic Virtual Passwords. | |
An et al. | A Novel Differential Game Model‐Based Intrusion Response Strategy in Fog Computing | |
Mishra et al. | Securing virtual machines from anomalies using program-behavior analysis in cloud environment | |
CN105141573A (en) | Security protection method and security protection system based on WEB access compliance auditing | |
CN112269988B (en) | Dynamic defense method, system, medium, equipment and application of model extraction attack | |
Ahmad et al. | Classification of internet security attacks | |
CN117749446A (en) | Attack object tracing method, device, equipment and medium | |
CN111177692B (en) | Terminal credibility level evaluation method, device, equipment and storage medium | |
CN116527317A (en) | Access control method, system and electronic equipment | |
Khemaissia et al. | Network countermeasure selection under blockchain based privacy preserving | |
Chen et al. | An autonomic detection and protection system for denial of service attack | |
Kshetri et al. | algoXSSF: Detection and analysis of cross-site request forgery (XSRF) and cross-site scripting (XSS) attacks via Machine learning algorithms | |
EP3588351A1 (en) | System and method of identifying malicious files using a learning model trained on a malicious file | |
CN111914998A (en) | Training method and device for server attack information generator | |
Al Mamun et al. | Advanced Persistent Threat Detection: A Particle Swarm Optimization Approach | |
Ge et al. | Defense Strategy Selection Method for Stackelberg Security Game Based on Incomplete Information | |
US12069081B1 (en) | Security systems and methods for detecting malleable command and control | |
Paste et al. | Malware: Detection, Classification and Protection | |
CN112269987B (en) | Intelligent model information leakage degree evaluation method, system, medium and equipment | |
CN111818017A (en) | Railway network security prediction method and system and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |