CN112800471B - Adversarial domain-adaptive differential privacy protection method in multi-source domain migration - Google Patents
Adversarial domain-adaptive differential privacy protection method in multi-source domain migration
- Publication number
- Publication number: CN112800471B (application number CN202110201597.6A)
- Authority
- CN
- China
- Prior art keywords
- network
- feature extraction
- domain
- gradient
- sample set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Bioethics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an adversarial domain-adaptive differential privacy protection method in multi-source domain migration. The method randomly samples a multi-source domain data set and a target domain data set according to a set sampling probability; respectively inputs the obtained source domain sample sets and target domain sample set into a feature extraction network or a target task network for feature extraction and classification; and normalizes the weights obtained after iteration. The feature extraction network parameters are updated based on a set gradient reversal layer. When the feature extraction network and target task network parameters are respectively updated, the gradients are clipped with a set clipping threshold, Gaussian noise is added, and the corresponding privacy budget is calculated. Once the privacy budget reaches the budget threshold, the feature extraction network and the target task network are combined into a differential privacy classification model, ensuring privacy security in multi-source domain migration.
Description
Technical Field
The invention relates to the technical fields of artificial intelligence and privacy protection, and in particular to an adversarial domain-adaptive differential privacy protection method in multi-source domain migration.
Background
In many practical applications, such as medical image classification and e-commerce review sentiment analysis, a deep neural network can be fully trained only when enough large-scale labeled data is available, so that an accurate model can be built for effective analysis. In many cases, however, labeled data is very sparse. Directly migrating a model trained on one domain often causes significant performance degradation because the data distributions of the domains differ. To alleviate this problem, the main current practice is to employ domain adaptation to minimize the discrepancy between the data distributions of the source domain and the target domain. However, naively applying single-source domain adaptation may lead to a sub-optimal solution; researchers have therefore proposed multi-source domain adaptation, which extends single-source domain adaptation.
The technical difficulty in multi-source domain migration lies in feature alignment of the multi-source domain data. Existing methods are mainly based on deep adversarial techniques: features extracted by a feature extraction network from samples drawn from the source domains and the target domain are fed into a discriminator network, and the feature extraction network is then trained by gradient reversal. After multiple iterations, the features extracted by the feature extraction network confuse the discriminator network, the features of the multi-source domains and the target domain are aligned, more transferable representations are learned, and the target domain data can then be classified. However, if a malicious user exists, that user can infer private information about the training data from the inputs and outputs of the model (for example, by a model extraction attack). Because both multi-source domain and target domain data are involved, privacy security is another key concern. Privacy protection in existing domain adaptation techniques mainly addresses single-source domain migration and does not consider the privacy protection problem in multi-source domain migration.
Disclosure of Invention
The invention aims to provide an adversarial domain-adaptive differential privacy protection method in multi-source domain migration that ensures privacy security during multi-source domain migration.
To achieve the above object, the present invention provides an adversarial domain-adaptive differential privacy protection method in multi-source domain migration, comprising the following steps:
respectively inputting the obtained source domain sample sets and target domain sample set into a feature extraction network or a target task network for feature extraction and updating, and normalizing the weights obtained after iteration is completed;
updating the feature extraction network parameters based on a set gradient reversal layer;
according to the privacy leakage risk, respectively clipping the gradients of the feature extraction network and the target task network with a set clipping threshold, and calculating the corresponding privacy budgets after adding Gaussian noise;
and combining the feature extraction network and the target task network once the privacy budget reaches a budget threshold to obtain a differential privacy classification model.
The step of respectively inputting the obtained source domain sample sets and target domain sample set into the feature extraction network or the target task network for feature extraction and updating, and normalizing the weights obtained after iteration is completed, comprises:
inputting the source domain sample sets and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into a domain classification network for adversarial learning and parameter updating, until iteration over the source domain sample sets and the target domain sample set corresponding to the plurality of source domains is completed;
inputting the source domain sample sets into the feature extraction network for feature extraction, and inputting the obtained feature vectors into a target task network for classification and parameter updating, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed;
and calculating the weight corresponding to each source domain according to a total loss function, and normalizing the weights.
The step of inputting the source domain sample sets and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into the domain classification network for adversarial learning and parameter updating, until iteration over the source domain sample sets and the target domain sample set corresponding to the plurality of source domains is completed, comprises:
inputting the source domain sample sets and the target domain sample set into the feature extraction network for feature extraction, and inputting the obtained feature vectors into the domain classification network to obtain domain prediction labels;
calculating the domain classification network loss between the domain prediction labels and the domain ground-truth labels with a negative log-likelihood function;
and optimizing and updating the domain classification network parameters according to the domain classification network loss function, until iteration over the source domain sample sets and the target domain sample set corresponding to the plurality of source domains is completed.
The step of inputting the source domain sample sets into the feature extraction network for feature extraction, and inputting the obtained feature vectors into the target task network for classification and parameter updating, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed, comprises:
inputting the source domain sample sets into the feature extraction network for feature extraction, and inputting the obtained feature vectors into the target task network to obtain predicted class labels;
calculating the target task network loss between the predicted class labels and the ground-truth class labels with a negative log-likelihood function;
and optimizing and updating the target task network parameters according to the target task network loss function, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed.
The step of updating the feature extraction network parameters based on a set gradient reversal layer comprises:
optimizing the feature extraction network parameters with the total loss function, and taking the partial derivatives with respect to the optimized feature extraction network parameters to obtain the corresponding loss gradients;
and updating the feature extraction network parameters according to the loss gradients.
Before the obtained source domain sample sets and target domain sample set are respectively input into the feature extraction network or the target task network for feature extraction and updating, and the weights obtained after iteration is completed are normalized, the method further comprises:
acquiring a multi-source domain data set, a target domain data set and initialized network parameters, and randomly sampling each source domain data set and the target domain data set according to a set sampling probability to obtain the source domain sample sets and the target domain sample set.
After the feature extraction network and the target task network are combined to obtain the differential privacy classification model once the privacy budget reaches the budget threshold, the method further comprises:
storing and releasing the differential privacy classification model.
In the above scheme, a multi-source domain data set, a target domain data set and initialized network parameters are obtained; each source domain data set and the target domain data set are randomly sampled according to a set sampling probability to obtain source domain sample sets and a target domain sample set; the sample sets are respectively input into the feature extraction network or the target task network for feature extraction and updating, and the weights obtained after iteration is completed are normalized; the feature extraction network parameters are updated based on the set gradient reversal layer; according to the privacy leakage risk, the gradients of the feature extraction network and the target task network are respectively clipped with the set clipping threshold, and the corresponding privacy budgets are calculated after Gaussian noise is added; and the feature extraction network and the target task network are combined once the privacy budget reaches the budget threshold to obtain the differential privacy classification model, which is stored and released, ensuring privacy security in multi-source domain migration.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the steps of the adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to the present invention.
Fig. 2 is a schematic flow chart of the adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or to elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to explain the invention, and are not to be construed as limiting the invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 and fig. 2, the present invention provides an adversarial domain-adaptive differential privacy protection method in multi-source domain migration, comprising the following steps:
S101: respectively input the obtained source domain sample sets and target domain sample set into the feature extraction network or the target task network for feature extraction and updating, and normalize the weights obtained after iteration is completed.
Specifically, a multi-source domain data set, a target domain data set and initialized network parameters are first obtained, and each source domain data set and the target domain data set are randomly sampled according to a set sampling probability to obtain the source domain sample sets and the target domain sample set.
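The sampling step above can be sketched as follows. The function name and the m/M inclusion probability are illustrative assumptions (Poisson-style subsampling of expected size m, as commonly paired with differentially private training), not the patent's literal procedure.

```python
import random

def sample_batch(dataset, m, seed=None):
    """Draw a sample set of expected size m from `dataset` by including each
    record independently with probability m / len(dataset)."""
    rng = random.Random(seed)
    p = m / len(dataset)
    return [x for x in dataset if rng.random() < p]

# Stand-ins for the i-th source domain data set and the target domain data set.
source_domain = list(range(1000))
target_domain = list(range(500))
S_t = sample_batch(source_domain, m=64, seed=0)  # source domain sample set
T_t = sample_batch(target_domain, m=64, seed=1)  # target domain sample set
```

Because inclusion is independent per record, the realized sample-set size varies around m from one iteration to the next.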
Then, the source domain sample set and the target domain sample set are input into the feature extraction network for feature extraction and parameter updating, until iteration over the source domain sample sets and the target domain sample set corresponding to the plurality of source domains is completed. Specifically: input the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and input the obtained feature vectors into the domain classification network to obtain domain prediction labels; calculate the domain classification network loss between the domain prediction labels and the domain ground-truth labels with the negative log-likelihood function; optimize the domain classification network parameters according to the domain classification network loss function, and, after differentiating the loss function with respect to the domain classification network parameters, update those parameters with the obtained sample gradients, until iteration over the source domain sample sets and the target domain sample set corresponding to the plurality of source domains is completed.
Next, the source domain sample set is input into the feature extraction network for feature extraction, and the obtained feature vectors are input into the target task network for optimization and parameter updating, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed. Specifically: input the source domain sample set into the feature extraction network for feature extraction, and input the obtained feature vectors into the target task network to obtain predicted class labels; calculate the target task network loss between the predicted class labels and the ground-truth class labels with the negative log-likelihood function; optimize the target task network parameters according to the target task network loss function, differentiate with respect to the optimized parameters, and update the target task network parameters with the obtained source domain sample gradients, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed.
Finally, the weight corresponding to each source domain is calculated according to the total loss function, and the weights are normalized.
For example:
Input the multi-source domain data sets $D_S$, the target domain data set $T$ and the initialized network parameters; for the $i$-th source domain data set $D_{s_i}$ and the target domain data set $T$, sample randomly with probability $m/M$ to obtain a source domain sample set $S_t^i$ and a target domain sample set $T_t$.
Step 1.1, input the samples $x_j$ of the source domain sample set $S_t^i$ and the samples of the target domain sample set $T_t$ into the feature extraction network E, which consists of 3 convolutional layers and 2 pooling layers; each sample passes through E to yield a feature vector.
Step 1.2, input the feature vectors into the domain classification network D, which consists of 1 pooling layer and 3 fully connected layers; D outputs the domain prediction label of each sample, and the domain classification network loss between the domain prediction label and the domain ground-truth label of the sample is calculated with the negative log-likelihood function.
Step 1.3, optimize the domain classification network parameters $\theta_D$ according to the domain classification network loss function $\mathcal{L}_D$; taking the partial derivative of $\mathcal{L}_D$ with respect to $\theta_D$ yields the gradient for each sample:
$g_D^t(x_j) = \nabla_{\theta_D} \mathcal{L}_D(\theta_D, x_j)$
where $\mathcal{L}_D$ is the loss function of the domain classification network, $i$ denotes the $i$-th source domain, $x_j$ denotes the $j$-th sample of the $t$-th iteration sample set $S_t^i$, $g_D^t(x_j)$ is the gradient of the domain classification network for sample $x_j$ at the $t$-th iteration, and $\theta_D$ is the domain classification network D parameter.
Step 1.4, update the domain classification network D parameters according to the sample gradients: $\theta_D \leftarrow \theta_D - \eta \cdot \frac{1}{m} \sum_j g_D^t(x_j)$, where $\eta$ is the learning rate and $m$ is the size of the sample set.
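The negative log-likelihood loss and per-sample gradient update of steps 1.2-1.4 can be illustrated with a minimal pure-Python sketch. The networks themselves are omitted; the softmax/NLL gradient softmax(z) - one_hot(y) is standard, and all names and numbers here are illustrative.

```python
import math

def softmax(z):
    m = max(z)  # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def nll_loss(logits, label):
    """Negative log-likelihood of the true domain label under softmax(logits)."""
    return -math.log(softmax(logits)[label])

def nll_grad(logits, label):
    """Gradient of the NLL w.r.t. the logits: softmax(logits) - one_hot(label)."""
    p = softmax(logits)
    return [p[c] - (1.0 if c == label else 0.0) for c in range(len(logits))]

# One sample: domain-classifier logits over (source domains + target domain).
logits = [2.0, 0.5, -1.0]
label = 0                      # ground-truth domain index
loss = nll_loss(logits, label)
grad = nll_grad(logits, label)

# One SGD step on the logits (stands in for the parameter update of step 1.4).
eta = 0.1
logits = [w - eta * g for w, g in zip(logits, grad)]
```

After the step the loss on this sample decreases, which is the behavior steps 1.3-1.4 rely on.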
Step 1.5, input the samples $x_u$ of the source domain sample set $S_t^i$ into the feature extraction network E and input the obtained feature vectors into the target task network Y, which consists of 1 pooling layer, 1 convolutional layer and 3 fully connected layers; Y outputs the predicted class label of each source domain sample, and the target task network loss between the predicted class label and the ground-truth class label of the source domain sample is calculated with the negative log-likelihood function.
Step 1.6, optimize the target task network parameters $\theta_Y$ according to the loss function $\mathcal{L}_Y$ of the target task network Y; taking the partial derivative of $\mathcal{L}_Y$ with respect to $\theta_Y$ yields the source domain sample gradient:
$g_Y^t(x_u) = \nabla_{\theta_Y} \mathcal{L}_Y(\theta_Y, x_u)$
where $\mathcal{L}_Y$ is the loss function of the target task network, $i$ denotes the $i$-th source domain, $x_u$ denotes the $u$-th sample of the $t$-th iteration sample set $S_t^i$, and $g_Y^t(x_u)$ is the gradient of the target task network for sample $x_u$ at the $t$-th iteration.
Step 1.7, update the parameters $\theta_Y$ of the target task network Y according to the source domain sample gradients: $\theta_Y \leftarrow \theta_Y - \eta \cdot \frac{1}{m} \sum_u g_Y^t(x_u)$.
Step 2, when there are k source domains, repeat steps 1.1 to 1.7 k times and calculate the weight $w_i$ corresponding to each source domain according to the total loss function of the whole model, where $\mathcal{L}_i$ is the loss value of the adversarial domain adaptation total loss function for the $i$-th source domain and the target domain.
Step 3, normalize the weight of each source domain obtained in step 2; the normalization formula is:
$w_i = \frac{\exp(-\gamma \mathcal{L}_i)}{\sum_{j=1}^{k} \exp(-\gamma \mathcal{L}_j)}$
where $\gamma$ is a hyper-parameter.
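The exact normalization formula is not legible in this extraction; one common choice, shown here purely as an assumed sketch, is a softmax over the per-source-domain losses with temperature γ, so that source domains with smaller adversarial-adaptation loss receive larger weights. All names and values are illustrative.

```python
import math

def normalize_weights(losses, gamma=1.0):
    """Softmax-style normalization: w_i = exp(-gamma*L_i) / sum_j exp(-gamma*L_j).
    `losses` holds the total adversarial-adaptation loss of each source domain."""
    e = [math.exp(-gamma * L) for L in losses]
    s = sum(e)
    return [v / s for v in e]

losses = [0.8, 1.5, 0.4]            # illustrative per-source-domain loss values
w = normalize_weights(losses, gamma=2.0)
```

The weights sum to 1, and γ controls how sharply the best-aligned source domain dominates.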
S102: update the feature extraction network parameters based on the set gradient reversal layer.
Specifically, the feature extraction network parameters are optimized with the total loss function, and the partial derivatives with respect to the optimized parameters are taken to obtain the corresponding loss gradients; the feature extraction network parameters are then updated according to the loss gradients. The specific process is as follows:
step 4, setting a gradient reverse transmission layer (GRL) for updating the gradient of the domain classification network D according to the mode of the reverse transmission layer to extract the parameters of the network EThe gradient reverse transport layer (GRL) is defined as follows:
R(x)=x
where R (x) is the gradient inversion layer and I is the identity matrix.
Step 4.1, optimize the feature extraction network parameters $\theta_E$ according to the total loss function of the whole model; taking the derivative with respect to $\theta_E$ yields the loss gradient of each sample:
$g_E^t(x_j) = \nabla_{\theta_E} \big( w_i \, \mathcal{L}_Y(\theta_Y, x_j) - \lambda \, \mathcal{L}_D(\theta_D, x_j) \big)$
where $i$ denotes the $i$-th source domain, $x_j$ denotes the $j$-th sample of the $t$-th iteration sample set $S_t^i$, $\mathcal{L}_Y$ is the loss function of the target task network, $\mathcal{L}_D$ is the loss function of the domain classification network, $\lambda$ is a hyper-parameter, $w_i$ is the normalized source domain weight, and $g_E^t(x_j)$ is the gradient of the feature extraction network for sample $x_j$ at the $t$-th iteration.
Step 4.2, update the feature extraction network E parameters according to the sample loss gradients: $\theta_E \leftarrow \theta_E - \eta \cdot \frac{1}{m} \sum_j g_E^t(x_j)$, where $\eta$ is the learning rate and $m$ is the size of the sample set.
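Steps 4-4.2 can be illustrated with a minimal gradient-reversal sketch: the GRL is the identity in the forward pass and multiplies the incoming gradient by -λ in the backward pass, so the feature extraction network ascends the domain-classification loss that the domain classifier descends. All names and numbers below are illustrative.

```python
def grl_forward(x):
    """Forward pass of the gradient reversal layer: R(x) = x."""
    return x

def grl_backward(grad, lam=1.0):
    """Backward pass: the gradient is multiplied by -lambda before it
    reaches the feature extraction network."""
    return [-lam * g for g in grad]

# Feature-extractor update with a reversed domain-classification gradient.
theta_E = [0.5, -0.2]   # illustrative feature-extractor parameters
g_domain = [0.3, -0.1]  # gradient of the domain loss w.r.t. theta_E
eta = 0.1
g_rev = grl_backward(g_domain, lam=1.0)
theta_E = [w - eta * g for w, g in zip(theta_E, g_rev)]  # ascends the domain loss
```

Descending the reversed gradient is what drives the extracted features toward confusing the domain classifier.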
S103: according to the privacy leakage risk, clip the gradients of the feature extraction network and the target task network with the set clipping threshold, and calculate the corresponding privacy budgets after adding Gaussian noise.
Specifically, according to the privacy leakage risk of the released model, the gradients of the feature extraction network and the target task network are respectively clipped with the set clipping threshold, Gaussian perturbation is added to the resulting gradients, the corresponding parameters are respectively updated, and the corresponding privacy budget is calculated, including:
clipping the gradient of the target task network with the set clipping threshold according to the privacy leakage risk of the released model, and adding Gaussian noise to the clipped first gradient to obtain a first Gaussian gradient; updating the target task network parameters according to the first Gaussian gradient, and calculating the privacy budget corresponding to the current iteration count;
clipping the gradient of the feature extraction network with the set clipping threshold according to the privacy leakage risk of the released model, and adding Gaussian noise to the clipped second gradient to obtain a second Gaussian gradient; and updating the feature extraction network parameters according to the second Gaussian gradient, and calculating the privacy budget corresponding to the current iteration count.
The detailed process is as follows:
step 5, according to the privacy disclosure risk of the release model, the gradient of the target task network Y is divided, a division threshold value C is set firstly for gradient cutting, and the cutting is carried out according to l of the gradient2Ratio of paradigm to segmentation threshold CThe larger one was chosen for the cut compared to 1, resulting in a first gradient:
wherein the moleculeIs the t-th iteration domain classification network sampleOf the gradient of (c).
Step 6, adding Gaussian noise disturbance to the cut first gradient to obtain a first Gaussian gradient:
wherein, N (0, sigma)2C2I) Gaussian distributed noise, C is the segmentation threshold, σ is the noise specification of gaussian noise, and I is the identity matrix.
Step 7, updating the parameters of the target task network YAnd calculating the iteration privacy budget of the current iteration times tWhereinWhere η is the learning rate and Δ ε is the privacy consumption per pass.
Step 8, clip the gradient of the feature extraction network E according to the privacy leakage risk of the released model. The clipping threshold C is set first; each gradient is divided by the ratio of its $\ell_2$ norm to the clipping threshold C or by 1, whichever is larger, yielding the second gradient:
$\bar{g}_E^t(x_j) = g_E^t(x_j) \Big/ \max\!\left(1, \frac{\| g_E^t(x_j) \|_2}{C}\right)$
where the numerator $g_E^t(x_j)$ is the gradient of the feature extraction network for sample $x_j$ at the $t$-th iteration.
Step 9, add Gaussian noise to the second gradient clipped in step 8 to obtain the second Gaussian gradient:
$\tilde{g}_E^t = \frac{1}{m} \left( \sum_j \bar{g}_E^t(x_j) + \mathcal{N}(0, \sigma^2 C^2 I) \right)$
where $\mathcal{N}(0, \sigma^2 C^2 I)$ is Gaussian-distributed noise, C is the clipping threshold, $\sigma$ is the noise scale of the Gaussian noise, and I is the identity matrix.
Step 10, update the feature extraction network E parameters, $\theta_E \leftarrow \theta_E - \eta \, \tilde{g}_E^t$, and calculate the cumulative privacy budget after the current iteration t, $\varepsilon_t = \varepsilon_{t-1} + \Delta\varepsilon$, where $\eta$ is the learning rate and $\Delta\varepsilon$ is the privacy consumption of each iteration.
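Steps 5-10 follow the familiar clip-then-noise pattern of differentially private SGD. Below is a minimal pure-Python sketch under that assumption; all names, values and the simple additive budget accounting are illustrative, not the patent's exact accountant.

```python
import math
import random

def clip_gradient(g, C):
    """Clip g to l2-norm at most C: g / max(1, ||g||_2 / C)."""
    norm = math.sqrt(sum(v * v for v in g))
    return [v / max(1.0, norm / C) for v in g]

def noisy_average(grads, C, sigma, rng):
    """Sum the clipped per-sample gradients, add per-coordinate
    N(0, sigma^2 C^2) noise, and average over the sample-set size m."""
    m = len(grads)
    total = [sum(col) for col in zip(*(clip_gradient(g, C) for g in grads))]
    return [(v + rng.gauss(0.0, sigma * C)) / m for v in total]

rng = random.Random(0)
C, sigma, eta = 1.0, 1.1, 0.1
grads = [[3.0, 4.0], [0.3, -0.4]]   # per-sample gradients (the first needs clipping)
g_tilde = noisy_average(grads, C, sigma, rng)

theta = [0.0, 0.0]                  # illustrative network parameters
theta = [w - eta * g for w, g in zip(theta, g_tilde)]

# Running privacy budget: epsilon_t = epsilon_{t-1} + delta_epsilon per iteration,
# iterating until the next step would exceed the budget threshold.
delta_eps, budget = 0.05, 1.0
eps, t = 0.0, 0
while eps + delta_eps <= budget:
    eps += delta_eps
    t += 1
```

Clipping bounds each sample's influence (the sensitivity) to C, which is what makes the added N(0, σ²C²) noise yield a differential privacy guarantee.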
S104: combine the feature extraction network and the target task network once the privacy budget reaches the budget threshold to obtain the differential privacy classification model.
Specifically, the iterations continue until the privacy budget threshold is consumed; the multi-source differential-privacy domain-adaptive model then converges, and the finally trained differential privacy classification model, composed of the feature extraction network E and the target task network Y, is released to users.
Compared with the prior art, the invention has the following features:
1. The sensitive data that needs protection in adversarial-domain-adaptation-based multi-source domain migration training is analyzed, and, according to the privacy leakage risk of the finally released model, it is determined which network parameter updates in the training process need to be perturbed by the differential privacy technique.
2. According to the privacy analysis, the gradients used for the specific network parameter updates are first clipped and then perturbed with Gaussian noise. This prevents gradient explosion, accelerates model convergence, and keeps the amount of added noise controllable, balancing the utility and privacy of the finally released model.
In summary, a multi-source domain data set, a target domain data set and initialized network parameters are obtained; each source domain data set and the target domain data set are randomly sampled according to a set sampling probability to obtain source domain sample sets and a target domain sample set; the sample sets are respectively input into the feature extraction network or the target task network for feature extraction and updating, and the weights obtained after iteration is completed are normalized; the feature extraction network parameters are updated based on the set gradient reversal layer; according to the privacy leakage risk, when the feature extraction network and the target task network are respectively updated, the gradients are first clipped with the set clipping threshold, Gaussian noise is then added, and the corresponding privacy budget is calculated; and the feature extraction network and the target task network are combined once the privacy budget reaches the budget threshold to obtain the differential privacy classification model, which is stored and released, ensuring privacy security in multi-source domain migration.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (7)
1. An adversarial domain-adaptive differential privacy protection method in multi-source domain migration, characterized by comprising the following steps:
respectively inputting the obtained source domain sample sets and target domain sample set into a feature extraction network or a target task network for feature extraction and updating, and normalizing the weights obtained after iteration is completed;
updating the feature extraction network parameters based on a set gradient reversal layer;
according to the privacy leakage risk, respectively clipping the gradients of the feature extraction network and the target task network with a set clipping threshold, and calculating the corresponding privacy budgets after adding Gaussian noise;
combining the feature extraction network and the target task network once the privacy budget reaches a budget threshold to obtain a differential privacy classification model;
according to the privacy disclosure risk, the method comprises the following steps of respectively carrying out gradient segmentation on the feature extraction network and the target task network by using a set segmentation threshold value, and calculating a corresponding privacy budget after adding Gaussian noise, wherein the method comprises the following steps:
according to the privacy disclosure risk of the release model, the gradient of the target task network Y is divided, a division threshold value C is set for gradient cutting, and the cutting is carried out according to l of the gradient2Ratio of paradigm to segmentation threshold CThe larger one was chosen for the cut compared to 1, resulting in a first gradient:
adding Gaussian noise disturbance to the cut first gradient to obtain a first Gaussian gradient:
wherein, N (0, sigma)2C2I) Gaussian distributed noise, C is a segmentation threshold, sigma is the noise specification of the Gaussian noise, and I is a unit matrix;
updating parameters of target task network YAnd calculating the iteration privacy budget of the current iteration times tWhereinWhere η is the learning rate and Δ ε is the privacy consumption per pass;
according to the privacy disclosure risk of the release model, the gradient of the feature extraction network E is divided, a division threshold value C is set for gradient cutting, and the cutting is carried out according to l of the gradient2Ratio of paradigm to segmentation threshold CThe larger one was chosen for cleavage compared to 1, resulting in a second gradient:
wherein, the molecule isIs the t-th iteration characteristic extraction network sampleA gradient of (a);
and Gaussian noise perturbation is added to the clipped second gradient to obtain a second Gaussian gradient;
wherein N(0, σ²C²I) is Gaussian-distributed noise, C is the segmentation threshold, σ is the noise scale of the Gaussian noise, and I is the identity matrix.
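The clipping-and-perturbation steps above follow the familiar differentially private gradient descent recipe (clip to l2 norm C, add Gaussian noise with per-coordinate standard deviation σC, account a per-iteration budget). A minimal numpy sketch, offered only as an illustration of that recipe; the function names `clip_and_perturb` and `cumulative_budget` are not from the patent:

```python
import numpy as np

def clip_and_perturb(grad, C, sigma, rng):
    """Clip a gradient to l2 norm at most C, then add Gaussian noise N(0, sigma^2 C^2 I)."""
    clipped = grad / max(1.0, np.linalg.norm(grad) / C)   # divide by the larger of 1 and ||g||_2 / C
    noise = rng.normal(0.0, sigma * C, size=grad.shape)   # per-coordinate std sigma * C
    return clipped + noise

def cumulative_budget(t, delta_eps):
    """Cumulative privacy budget eps_t = t * delta_eps after t iterations."""
    return t * delta_eps

rng = np.random.default_rng(0)
g = np.array([3.0, 4.0])                           # ||g||_2 = 5
noisy = clip_and_perturb(g, C=1.0, sigma=0.5, rng=rng)
```

With σ = 0 the function reduces to pure clipping, which makes the clipping behavior easy to check: a gradient of norm 5 is scaled down to norm exactly C, while a gradient already inside the ball is left untouched.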
2. The method for countermeasure domain self-adaptive differential privacy protection in multi-source domain migration according to claim 1, wherein the step of respectively inputting the obtained source domain sample set and target domain sample set into the feature extraction network or the target task network for feature extraction and updating, and normalizing the weights obtained after iteration is completed, comprises:
inputting the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and then inputting the extracted features into a domain classification network for countermeasure learning and parameter updating, until iteration over the source domain sample sets and target domain sample sets corresponding to the plurality of source domains is completed;
inputting the source domain sample set into the feature extraction network for feature extraction, and inputting the obtained feature vectors into a target task network for classification and parameter updating until the source domain sample sets corresponding to a plurality of source domains are iterated;
and calculating the weight corresponding to each source domain according to a total loss function, and carrying out normalization processing on the weight.
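The final step of claim 2 computes a weight per source domain from the total loss function and normalizes those weights. The patent text does not spell out the normalization formula; the sketch below assumes plain sum normalization (weights scaled to sum to 1), which is one common choice, and the name `normalize_weights` is illustrative:

```python
def normalize_weights(weights):
    """Normalize per-source-domain weights so they sum to 1.
    Plain sum normalization; the patent's exact scheme may differ."""
    total = sum(weights)
    return [w / total for w in weights]

# e.g. weights derived from the total loss of three source domains
alphas = normalize_weights([2.0, 1.0, 1.0])
```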
3. The method for countermeasure domain self-adaptive differential privacy protection in multi-source domain migration according to claim 2, wherein the step of inputting the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and then inputting the extracted features into the domain classification network for countermeasure learning and parameter updating until iteration over the source domain sample sets and target domain sample sets corresponding to the plurality of source domains is completed, comprises the steps of:
inputting the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and inputting the obtained feature vectors into a domain classification network to obtain a domain prediction label;
calculating a domain classification network loss function between the domain prediction label and the domain real label by using a negative log-likelihood function;
and optimizing and updating the domain classification network parameters according to the domain classification network loss function until the source domain sample set and the target domain sample set corresponding to a plurality of source domains are iterated.
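Claim 3 scores the domain classifier with a negative log-likelihood loss between the predicted and true domain labels. As a self-contained sketch (assuming the classifier outputs a probability distribution over domains; the function name is illustrative):

```python
import math

def nll_loss(pred_probs, true_label):
    """Negative log-likelihood of the true domain label under the
    predicted probability distribution over domains."""
    return -math.log(pred_probs[true_label])

# a domain classifier putting probability 0.9 on the correct domain
loss = nll_loss([0.9, 0.05, 0.05], 0)
```

The loss is 0 when the classifier is certain of the correct domain and grows without bound as the predicted probability of the true domain approaches 0, which is what drives the adversarial signal back to the feature extractor.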
4. The method for countermeasure domain self-adaptive differential privacy protection in multi-source domain migration according to claim 2, wherein the step of inputting the source domain sample set into the feature extraction network for feature extraction and inputting the obtained feature vectors into the target task network for classification and parameter updating, until iteration over the source domain sample sets corresponding to the plurality of source domains is completed, comprises the steps of:
inputting the source domain sample set into the feature extraction network for feature extraction, and inputting the obtained feature vector into the target task network to obtain a prediction class label;
calculating a target task network loss function between the prediction class label and the real class label by using a negative log-likelihood function;
and optimizing and updating the target task network parameters according to the target task network loss function until the source domain sample sets corresponding to the plurality of source domains are iterated.
5. The method for adaptive differential privacy protection for countermeasure domains in multi-source domain migration according to claim 2, wherein updating the feature extraction network parameters based on a set gradient back-propagation layer comprises:
optimizing the feature extraction network parameters by using the total loss function, and solving the partial derivatives of the optimized feature extraction network parameters to obtain corresponding loss gradients;
and updating the feature extraction network parameters according to the loss gradient.
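In adversarial domain adaptation, the "gradient back-propagation layer" of claim 5 commonly plays the role of a gradient reversal layer: the domain-classification loss gradient is negated before it reaches the feature extractor, so the extractor learns domain-invariant features. That reading is an assumption here, since the claim does not define the layer; a minimal sketch under it:

```python
def reverse_gradient(domain_grads, lam=1.0):
    """Gradient reversal: negate (and optionally scale by lam) the domain-loss
    gradient flowing back into the feature extraction network."""
    return [-lam * g for g in domain_grads]

def sgd_update(params, grads, lr=0.01):
    """Plain gradient-descent update of the feature extraction network parameters."""
    return [p - lr * g for p, g in zip(params, grads)]

# the extractor ascends the domain loss (via the reversed gradient)
# while descending the target task loss
new_params = sgd_update([1.0, -1.0], reverse_gradient([10.0, -10.0]))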
6. The method for adaptive differential privacy protection for the countermeasure domain in multi-source domain migration according to claim 1, wherein before the acquired source domain sample set and target domain sample set are respectively input to a feature extraction network or a target task network for feature extraction and update, and normalization processing is performed on weights obtained after iteration completion, the method further comprises:
acquiring a multi-source domain data set, a target domain data set and initialization network parameters, and respectively randomly sampling any one of the multi-source domain data set and the target domain data set according to a set sampling probability to obtain a source domain sample set and a target domain sample set.
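Sampling each data set "according to a set sampling probability", as in claim 6 above, is typically Bernoulli (Poisson) subsampling: each record enters the batch independently with probability q, the sampling scheme under which subsampled-Gaussian privacy accounting is usually stated. A sketch under that assumption (the function name is illustrative):

```python
import random

def subsample(dataset, q, rng):
    """Select each record independently with sampling probability q
    (Bernoulli/Poisson subsampling)."""
    return [x for x in dataset if rng.random() < q]

rng = random.Random(42)
batch = subsample(list(range(10)), 0.5, rng)
```

With q = 1 every record is kept and with q = 0 the batch is empty; in between, the expected batch size is q times the data set size.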
7. The method of claim 1, further comprising, after combining the feature extraction network and the target task network until the privacy budget reaches the budget threshold to obtain the differential privacy classification model:
and storing and releasing the differential privacy classification model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110201597.6A CN112800471B (en) | 2021-02-23 | 2021-02-23 | Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110201597.6A CN112800471B (en) | 2021-02-23 | 2021-02-23 | Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112800471A CN112800471A (en) | 2021-05-14 |
CN112800471B true CN112800471B (en) | 2022-04-22 |
Family
ID=75815353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110201597.6A Active CN112800471B (en) | 2021-02-23 | 2021-02-23 | Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112800471B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114285545B (en) * | 2021-12-24 | 2024-05-17 | 成都三零嘉微电子有限公司 | Side channel attack method and system based on convolutional neural network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2469598A1 (en) * | 2004-06-01 | 2005-12-01 | Daniel W. Onischuk | Computerized voting system |
CN109919934A (en) * | 2019-03-11 | 2019-06-21 | 重庆邮电大学 | Liquid crystal display panel defect detection method based on multi-source domain deep transfer learning |
CN110602099A (en) * | 2019-09-16 | 2019-12-20 | 广西师范大学 | Privacy protection method based on verifiable symmetric searchable encryption |
CN110647765A (en) * | 2019-09-19 | 2020-01-03 | 济南大学 | Privacy protection method and system based on knowledge migration under collaborative learning framework |
CN110969243A (en) * | 2019-11-29 | 2020-04-07 | 支付宝(杭州)信息技术有限公司 | Method and device for training countermeasure generation network for preventing privacy leakage |
CN111091193A (en) * | 2019-10-31 | 2020-05-01 | 武汉大学 | Domain-adapted privacy protection method based on differential privacy and oriented to deep neural network |
CN112203282A (en) * | 2020-08-28 | 2021-01-08 | 中国科学院信息工程研究所 | 5G Internet of things intrusion detection method and system based on federal transfer learning |
Non-Patent Citations (3)
Title |
---|
Context-aware Privacy Preservation in a Hierarchical Fog Computing System; Bruce Gu et al.; published online at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8761455; 2019-07-15; pp. 1-6 *
Key pattern mining method for data streams under differential privacy (差分隐私的数据流关键模式挖掘方法); Wang Jinyan et al.; Journal of Software (软件学报); 2019-04-24; Vol. 30, No. 3, pp. 648-666 *
Multi-source domain adaptation with adaptive transfer of model parameters (模型参数自适应迁移的多源域适应); Yu Huanhuan et al.; Computer Technology and Automation (计算机技术与自动化); 2020-03-14; Vol. 38, No. 4, pp. 89-90 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fan et al. | Learning what data to learn | |
CN110048827B (en) | Class template attack method based on deep learning convolutional neural network | |
CN111126386A (en) | Sequence field adaptation method based on counterstudy in scene text recognition | |
CN111242157A (en) | Unsupervised domain self-adaption method combining deep attention feature and conditional opposition | |
Wang et al. | Distilling knowledge from an ensemble of convolutional neural networks for seismic fault detection | |
Chen et al. | Generalized Correntropy based deep learning in presence of non-Gaussian noises | |
CN109740057B (en) | Knowledge extraction-based enhanced neural network and information recommendation method | |
CN112884045B (en) | Classification method of random edge deletion embedded model based on multiple visual angles | |
CN113222068B (en) | Remote sensing image multi-label classification method based on adjacency matrix guidance label embedding | |
CN113469186A (en) | Cross-domain migration image segmentation method based on small amount of point labels | |
CN112765415A (en) | Link prediction method based on relational content joint embedding convolution neural network | |
CN114417427A (en) | Deep learning-oriented data sensitivity attribute desensitization system and method | |
CN112800471B (en) | Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration | |
CN117201122A (en) | Unsupervised attribute network anomaly detection method and system based on view level graph comparison learning | |
Fan et al. | Fast model update for iot traffic anomaly detection with machine unlearning | |
CN105809200B (en) | Method and device for autonomously extracting image semantic information in bioauthentication mode | |
Xu et al. | An efficient channel-level pruning for CNNs without fine-tuning | |
CN115952343A (en) | Social robot detection method based on multi-relation graph convolutional network | |
CN111401155B (en) | Image recognition method of residual error neural network based on implicit Euler jump connection | |
Yao | Exploration of Membership Inference Attack on Convolutional Neural Networks and Its Defenses | |
CN113361652A (en) | Individual income prediction oriented depolarization method and device | |
Yadav et al. | Modified adaptive inertia weight particle swarm optimisation for data clustering | |
CN112766336A (en) | Method for improving verifiable defense performance of model under maximum random smoothness | |
Cao et al. | Dual-drive opposition-based non-inertial particle swarm optimization for deep learning in IoTs | |
CN110210988B (en) | Symbolic social network embedding method based on deep hash |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||