CN112800471A - Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration - Google Patents

Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration

Info

Publication number
CN112800471A
CN112800471A
Authority
CN
China
Prior art keywords
domain
network
feature extraction
sample set
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110201597.6A
Other languages
Chinese (zh)
Other versions
CN112800471B (en)
Inventor
王利娥
钟子力
李先贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN202110201597.6A priority Critical patent/CN112800471B/en
Publication of CN112800471A publication Critical patent/CN112800471A/en
Application granted granted Critical
Publication of CN112800471B publication Critical patent/CN112800471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes

Abstract

The invention discloses an adversarial domain-adaptive differential privacy protection method in multi-source domain migration. The multi-source domain data sets and the target domain data set are randomly sampled with a set sampling probability, and the resulting source domain sample sets and target domain sample set are fed into a feature extraction network and a target task network for feature extraction and classification; the weights obtained after each iteration are normalized. The feature extraction network parameters are updated through a set gradient reversal layer. When the parameters of the feature extraction network and of the target task network are updated, the gradients are clipped with a set clipping threshold, Gaussian noise is added, and the corresponding privacy budget is calculated. Once the privacy budget reaches the budget threshold, the feature extraction network and the target task network are combined into a differential privacy classification model, ensuring privacy security in multi-source domain migration.

Description

Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration
Technical Field
The invention relates to the technical field of artificial intelligence and privacy protection, and in particular to an adversarial domain-adaptive differential privacy protection method in multi-source domain migration.
Background
In many practical applications, such as medical image classification and e-commerce review sentiment analysis, a deep neural network can be trained adequately only when enough large-scale labeled data are available, so that an accurate model can be built for effective analysis. In many cases, however, labeled data are very sparse. Directly migrating a model trained on another domain often causes a significant performance drop because the data distributions of the domains differ; to alleviate this problem, the mainstream practice is to use domain adaptation to minimize the effect of the distribution discrepancy between the source domain and the target domain. Relying on a single source domain, however, may lead to a sub-optimal solution, so researchers have proposed multi-source domain adaptation, which extends single-source domain adaptation.
The technical difficulty in multi-source domain migration lies in aligning the features of the multi-source domain data. Existing methods are mainly based on deep adversarial techniques: features extracted by a feature extraction network from samples drawn from the source domains and the target domain are fed into a discriminator network, and the feature extraction network is trained with reversed gradients over many iterations until the features it extracts confuse the discriminator. The features of the multiple source domains and the target domain are thereby aligned, more transferable representations are learned, and the target domain data can then be classified. If a malicious user exists, however, the user can infer the private information of the data used for training from the inputs and outputs of the model (for example, through a model extraction attack). Because both multi-source domain and target domain data are involved, privacy security is another key concern. Privacy protection in existing domain adaptation techniques mainly addresses single-source domain migration and does not consider the privacy protection problem in multi-source domain migration.
Disclosure of Invention
The invention aims to provide an adversarial domain-adaptive differential privacy protection method in multi-source domain migration that ensures privacy security during multi-source domain migration.
To achieve the above object, the present invention provides an adversarial domain-adaptive differential privacy protection method in multi-source domain migration, comprising the following steps:
inputting the obtained source domain sample sets and target domain sample set into a feature extraction network and a target task network for feature extraction and updating, and normalizing the weights obtained after the iterations are completed;
updating the feature extraction network parameters based on a set gradient reversal layer;
according to the privacy leakage risk, clipping the gradients of the feature extraction network and of the target task network with a set clipping threshold, adding Gaussian noise, and calculating the corresponding privacy budgets;
and combining the feature extraction network and the target task network once the privacy budget reaches a budget threshold to obtain a differential privacy classification model.
Inputting the obtained source domain sample sets and target domain sample set into the feature extraction network and the target task network for feature extraction and updating, and normalizing the weights obtained after the iterations are completed, comprises:
inputting a source domain sample set and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into a domain classification network for adversarial learning and parameter updating, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated;
inputting the source domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the target task network for classification and parameter updating, until the source domain sample sets corresponding to all source domains have been iterated;
and calculating the weight corresponding to each source domain according to a total loss function and normalizing the weights.
Inputting a source domain sample set and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into the domain classification network for adversarial learning and parameter updating, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated, comprises:
inputting the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the domain classification network to obtain domain prediction labels;
calculating the domain classification network loss between the domain prediction labels and the true domain labels with a negative log-likelihood function;
and optimizing and updating the domain classification network parameters according to the domain classification network loss function, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated.
Inputting the source domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the target task network for classification and parameter updating, until the source domain sample sets corresponding to all source domains have been iterated, comprises:
inputting the source domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the target task network to obtain prediction class labels;
calculating the target task network loss between the prediction class labels and the true class labels with a negative log-likelihood function;
and optimizing and updating the target task network parameters according to the target task network loss function, until the source domain sample sets corresponding to all source domains have been iterated.
Updating the feature extraction network parameters based on a set gradient reversal layer comprises:
optimizing the feature extraction network parameters with the total loss function, and taking the partial derivatives with respect to the optimized feature extraction network parameters to obtain the corresponding loss gradients;
and updating the feature extraction network parameters according to the loss gradients.
Before the obtained source domain sample sets and target domain sample set are input into the feature extraction network and the target task network for feature extraction and updating, and the weights obtained after the iterations are normalized, the method further comprises:
acquiring the multi-source domain data sets, the target domain data set and the initialized network parameters, and randomly sampling each source domain data set and the target domain data set with a set sampling probability to obtain the source domain sample sets and the target domain sample set.
After the feature extraction network and the target task network are combined into a differential privacy classification model once the privacy budget reaches the budget threshold, the method further comprises:
and storing and releasing the differential privacy classification model.
The method acquires the multi-source domain data sets, the target domain data set and the initialized network parameters, randomly samples each source domain data set and the target domain data set with a set sampling probability to obtain the source domain sample sets and the target domain sample set, inputs them into the feature extraction network and the target task network for feature extraction and updating, and normalizes the weights obtained after the iterations are completed; it updates the feature extraction network parameters based on a set gradient reversal layer; according to the privacy leakage risk, it clips the gradients of the feature extraction network and of the target task network with a set clipping threshold, adds Gaussian noise, and calculates the corresponding privacy budgets; and, once the privacy budget reaches the budget threshold, it combines the feature extraction network and the target task network into a differential privacy classification model, which is stored and released, thereby ensuring privacy security in multi-source domain migration.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic step diagram of the adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to the present invention.
Fig. 2 is a schematic flow chart of the adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 and fig. 2, the present invention provides an adversarial domain-adaptive differential privacy protection method in multi-source domain migration, comprising the following steps:
s101, respectively inputting the obtained source domain sample set and the obtained target domain sample set into a feature extraction network or a target task network for feature extraction and updating, and carrying out normalization processing on the weight obtained after iteration is completed.
Specifically, the multi-source domain data sets, the target domain data set and the initialized network parameters are first obtained, and each source domain data set and the target domain data set are randomly sampled with a set sampling probability to obtain the source domain sample sets and the target domain sample set.
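As an illustrative sketch only (not part of the patent), the random sampling with a set probability can be written as follows in Python; the dataset contents, the value of m and the helper name sample_batch are assumptions.

```python
import random

def sample_batch(dataset, sample_prob):
    """Draw a random sample set: each record is kept independently with
    probability sample_prob (typically m/M, batch size over dataset size)."""
    return [record for record in dataset if random.random() < sample_prob]

# Hypothetical data: k = 3 source domains and one target domain.
source_domains = [list(range(1000)) for _ in range(3)]
target_domain = list(range(800))

m = 64  # assumed expected sample-set size
source_sample_sets = [sample_batch(D_si, m / len(D_si)) for D_si in source_domains]
target_sample_set = sample_batch(target_domain, m / len(target_domain))
```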
Then, the source domain sample set and the target domain sample set are input into the feature extraction network for feature extraction and parameter updating, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated. Specifically: the source domain sample set and the target domain sample set are input into the feature extraction network for feature extraction, and the obtained feature vectors are fed into a domain classification network to obtain domain prediction labels; the domain classification network loss between the domain prediction labels and the true domain labels is calculated with a negative log-likelihood function; the domain classification network parameters are optimized according to this loss function, the loss is differentiated with respect to the domain classification network parameters, and the resulting per-sample gradients are used to update the domain classification network parameters, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated.
Next, the source domain sample set is input into the feature extraction network for feature extraction, and the obtained feature vectors are fed into the target task network for optimization and parameter updating, until the source domain sample sets corresponding to all source domains have been iterated. Specifically: the source domain sample set is input into the feature extraction network for feature extraction, and the obtained feature vectors are fed into the target task network to obtain prediction class labels; the target task network loss between the prediction class labels and the true class labels is calculated with a negative log-likelihood function; the target task network parameters are optimized according to this loss function, the loss is differentiated with respect to the target task network parameters, and the resulting source domain sample gradients are used to update the target task network parameters, until the source domain sample sets corresponding to all source domains have been iterated.
Finally, the weight corresponding to each source domain is calculated according to a total loss function, and the weights are normalized.
For example:
Combining the multi-source domain data sets D_S, the target domain data set T and the initialized network parameters, the i-th source domain data set D_{s_i} and the target domain data set T are each randomly sampled with sampling probability m/M, yielding a source domain sample set S_t^i and a target domain sample set T_t.
Step 1.1, the samples in the source domain sample set S_t^i and the samples in the target domain sample set T_t are input into the feature extraction network E, which consists of 3 convolutional layers and 2 pooling layers; passing a sample through the feature extraction network E yields its feature vector.
Step 1.2, the feature vectors are input into a domain classification network D, which consists of 1 pooling layer and 3 fully connected layers; D outputs the domain prediction label of each sample, and the domain classification network loss between the domain prediction label and the true domain label of the sample is calculated with the negative log-likelihood function.
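A minimal PyTorch sketch of the two networks described in steps 1.1 and 1.2 (E with 3 convolutional and 2 pooling layers, D with 1 pooling and 3 fully connected layers), together with the negative log-likelihood domain loss; the channel counts, kernel sizes, 32x32 input resolution and two-way domain labels are assumptions, not values given by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """E: 3 conv layers + 2 pooling layers (channel sizes are assumptions)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        return F.relu(self.conv3(x))            # feature map, e.g. 128 x 8 x 8

class DomainClassifier(nn.Module):
    """D: 1 pooling layer + 3 fully connected layers; 2 outputs (source/target)."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                nn.Linear(64, 32), nn.ReLU(),
                                nn.Linear(32, 2))

    def forward(self, feat):
        h = self.pool(feat).flatten(1)
        return F.log_softmax(self.fc(h), dim=1)

# Negative log-likelihood domain loss between predicted and true domain labels.
E, D = FeatureExtractor(), DomainClassifier()
x = torch.randn(8, 3, 32, 32)                    # hypothetical mixed batch
domain_labels = torch.randint(0, 2, (8,))        # 0 = source, 1 = target
loss_D = F.nll_loss(D(E(x)), domain_labels)
```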
Step 1.3, the domain classification network parameters θ_D are optimized according to the domain classification loss function L_D; taking the partial derivative of L_D with respect to θ_D gives the gradient of each sample:
g_D^t(x_j^i) = ∇_{θ_D} L_D(x_j^i)
where L_D is the loss function of the domain classification network, i denotes the i-th source domain, x_j^i denotes the j-th sample of the t-th iteration sample set S_t^i, g_D^t(x_j^i) is the gradient of sample x_j^i for the domain classification network at the t-th iteration, and θ_D is the parameter of the domain classification network D.
Step 1.4, the parameters θ_D of the domain classification network D are updated according to the sample gradients:
θ_D ← θ_D - (η / 2m) · Σ_j g_D^t(x_j^i)
where η is the learning rate and 2m is the size of the combined sample set (the source domain sample set S_t^i together with the target domain sample set T_t).
Step 1.5, the samples of the source domain sample set S_t^i are input into the feature extraction network E, and the obtained feature vectors are input into the target task network Y, which consists of 1 pooling layer, 1 convolutional layer and 3 fully connected layers; Y outputs the prediction class labels of the source domain samples, and the target task network loss between the prediction class labels and the true class labels of the source domain samples is calculated with the negative log-likelihood function.
Step 1.6, the parameters θ_Y of the target task network Y are optimized according to the loss function L_Y of the target task network; taking the partial derivative of L_Y with respect to θ_Y gives the source domain sample gradients:
g_Y^t(x_u^i) = ∇_{θ_Y} L_Y(x_u^i)
where L_Y is the loss function of the target task network, i denotes the i-th source domain, x_u^i denotes the u-th sample of the t-th iteration source domain sample set S_t^i, and g_Y^t(x_u^i) is the gradient of sample x_u^i for the target task network at the t-th iteration.
Step 1.7, the parameters θ_Y of the target task network Y are updated according to the source domain sample gradients:
θ_Y ← θ_Y - (η / m) · Σ_u g_Y^t(x_u^i)
where η is the learning rate and m is the size of the source domain sample set S_t^i.
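Putting steps 1.1 to 1.7 together, a hedged sketch of the inner loop for one source domain, reusing the FeatureExtractor and DomainClassifier modules sketched above: D is updated on mixed source/target batches with the domain NLL loss, and Y on source batches with the class NLL loss. The TaskClassifier layer sizes, the SGD learning rate, and the use of detached features (so that E is only updated later through the gradient reversal layer) are assumptions.

```python
class TaskClassifier(nn.Module):
    """Y: 1 pooling layer, 1 conv layer and 3 fully connected layers (sizes assumed)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(128, 128, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                nn.Linear(64, 32), nn.ReLU(),
                                nn.Linear(32, num_classes))

    def forward(self, feat):
        h = self.pool(F.relu(self.conv(feat))).flatten(1)
        return F.log_softmax(self.fc(h), dim=1)

Y = TaskClassifier()
opt_D = torch.optim.SGD(D.parameters(), lr=0.01)   # learning rate eta is assumed
opt_Y = torch.optim.SGD(Y.parameters(), lr=0.01)

def inner_loop(source_loader, target_loader):
    """One pass of steps 1.1-1.7 for a single source domain (sketch only)."""
    for (xs, ys), (xt, _) in zip(source_loader, target_loader):
        # steps 1.1-1.4: domain classification on the mixed source/target batch
        x_mix = torch.cat([xs, xt])
        d_true = torch.cat([torch.zeros(len(xs), dtype=torch.long),
                            torch.ones(len(xt), dtype=torch.long)])
        loss_D = F.nll_loss(D(E(x_mix).detach()), d_true)   # E is frozen here
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # steps 1.5-1.7: supervised classification on the source batch only
        loss_Y = F.nll_loss(Y(E(xs).detach()), ys)
        opt_Y.zero_grad(); loss_Y.backward(); opt_Y.step()
```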
Step 2, when there are k source domains, steps 1.1 to 1.7 are repeated k times, and the weight w_i corresponding to each source domain is calculated according to the total loss function of the whole model, the input to the weight being the loss value of the adversarial domain adaptation total loss function for the i-th source domain and the target domain.
Step 3, the weight of each source domain obtained in step 2 is normalized; the normalization formula (given as an image in the original) uses a hyperparameter γ.
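The weight formula itself appears in the patent only as an image; the following sketch assumes a softmax-style normalization over the per-domain loss values controlled by the hyperparameter γ, which is consistent with the surrounding description but not confirmed by the source.

```python
import math

def normalize_domain_weights(domain_losses, gamma=1.0):
    """Turn k per-domain loss values into normalized weights w_i.

    Assumption: a lower total adversarial loss yields a larger weight,
    via a softmax over -gamma * loss (the exact patent formula is an image)."""
    exps = [math.exp(-gamma * loss) for loss in domain_losses]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical loss values for k = 3 source domains.
print(normalize_domain_weights([0.8, 1.2, 0.5], gamma=2.0))
```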
And S102, updating the characteristic extraction network parameters based on the set gradient back transmission layer.
Specifically, the total loss function is used for optimizing the feature extraction network parameters, and the bias derivatives of the optimized feature extraction network parameters are solved to obtain corresponding loss gradients; and updating the feature extraction network parameters according to the loss gradient. The specific process is as follows:
step 4, setting a gradient reverse transmission layer (GRL) for updating the gradient of the domain classification network D according to the mode of the reverse transmission layer to extract the parameters of the network E
Figure BDA0002948040950000071
The gradient reverse transport layer (GRL) is defined as follows:
R(x)=x
Figure BDA0002948040950000072
where R (x) is the gradient inversion layer and I is the identity matrix.
Step 4.1, the parameters θ_E of the feature extraction network E are optimized according to the total loss function of the whole model; taking the partial derivative with respect to θ_E gives the loss gradient of each sample:
g_E^t(x_j^i) = ∇_{θ_E} L_total(x_j^i)
where i denotes the i-th source domain, x_j^i denotes the j-th sample of the t-th iteration sample set S_t^i, the total loss L_total combines the loss function L_Y of the target task network and the loss function L_D of the domain classification network, λ is a hyperparameter, w_i is the normalized weight of the source domain, and g_E^t(x_j^i) is the gradient of sample x_j^i for the feature extraction network at the t-th iteration.
Step 4.2, the parameters θ_E of the feature extraction network E are updated according to the loss gradients of the samples:
θ_E ← θ_E - (η / m) · Σ_j g_E^t(x_j^i)
where η is the learning rate and m is the size of the sample set.
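Building on the sketches above, one possible realization of steps 4.1 and 4.2: the feature extraction network E is updated with a total loss that combines the task loss and the λ- and w_i-weighted domain loss, with the domain loss reaching E through the gradient reversal layer. The exact combination, the value of λ and the optimizer settings are assumptions.

```python
opt_E = torch.optim.SGD(E.parameters(), lr=0.01)    # learning rate eta is assumed
lam = 0.1                                           # hyperparameter lambda (assumed)

def update_feature_extractor(xs, ys, xt, w_i):
    """Sketch of steps 4.1-4.2 for one source domain with normalized weight w_i."""
    feat_s, feat_t = E(xs), E(xt)

    # task loss on source features (flows into E directly)
    loss_Y = F.nll_loss(Y(feat_s), ys)

    # domain loss on mixed features, routed through the gradient reversal layer,
    # so its gradient reaches E with reversed sign
    feat_mix = torch.cat([feat_s, feat_t])
    d_true = torch.cat([torch.zeros(len(xs), dtype=torch.long),
                        torch.ones(len(xt), dtype=torch.long)])
    loss_D = F.nll_loss(D(grl(feat_mix)), d_true)

    total = loss_Y + lam * w_i * loss_D     # assumed form of the total loss
    opt_E.zero_grad()
    total.backward()
    opt_E.step()                            # only E is stepped here; D and Y are
                                            # updated in the earlier inner loop
    return total.item()
```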
S103, according to the privacy leakage risk, the gradients of the feature extraction network and of the target task network are clipped with a set clipping threshold, Gaussian noise is added, and the corresponding privacy budgets are calculated.
Specifically, according to the privacy leakage risk of the released model, the gradients of the feature extraction network and of the target task network are clipped with a set clipping threshold, Gaussian perturbation is added to the obtained gradients, the corresponding parameters are updated, and the corresponding privacy budgets are calculated, as follows:
according to the privacy leakage risk of the released model, the gradient of the target task network is clipped with the set clipping threshold, and Gaussian noise is added to the clipped first gradient to obtain a first Gaussian gradient; the target task network parameters are updated according to the first Gaussian gradient, and the privacy budget corresponding to the current iteration is calculated;
according to the privacy leakage risk of the released model, the gradient of the feature extraction network is clipped with the set clipping threshold, and Gaussian noise is added to the clipped second gradient to obtain a second Gaussian gradient; the feature extraction network parameters are updated according to the second Gaussian gradient, and the privacy budget corresponding to the current iteration is calculated.
The detailed process comprises the following steps:
step 5, according to the privacy disclosure risk of the release model, the gradient of the target task network Y is divided, a division threshold value C is set firstly for gradient cutting, and the cutting is carried out according to l of the gradient2Ratio of paradigm to segmentation threshold C
Figure BDA0002948040950000081
The larger one was chosen for the cut compared to 1, resulting in a first gradient:
Figure BDA0002948040950000082
wherein the molecule
Figure BDA0002948040950000083
Is the t-th iteration domain classification network sample
Figure BDA0002948040950000084
Of the gradient of (c).
Step 6, Gaussian noise perturbation is added to the clipped first gradient to obtain the first Gaussian gradient:
ĝ_Y^t = Σ_u g'_Y^t(x_u^i) + N(0, σ^2 C^2 I)
where N(0, σ^2 C^2 I) is Gaussian-distributed noise, C is the clipping threshold, σ is the noise scale of the Gaussian noise, and I is the identity matrix.
Step 7, the parameters θ_Y of the target task network Y are updated, and the iteration privacy budget of the current iteration t is calculated:
θ_Y ← θ_Y - (η / m) · ĝ_Y^t,  ε_t = ε_{t-1} + Δε
where η is the learning rate and Δε is the privacy consumption of each iteration.
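Steps 5 to 7 follow the usual DP-SGD recipe: clip each per-sample gradient to l_2 norm at most C, add N(0, σ^2 C^2 I) noise to the sum, scale by the learning rate over the batch size, and add Δε to the running budget. The sketch below assumes the per-sample gradients have already been computed elsewhere, and the concrete values of C, σ, the learning rate and the budget increments are placeholders.

```python
import torch

def clip_gradient(g, C):
    """Clip one per-sample gradient vector to l2 norm at most C."""
    return g / max(1.0, g.norm(2).item() / C)

def dp_step(params, per_sample_grads, C, sigma, lr):
    """One differentially private update for a flat parameter vector.

    per_sample_grads: list of gradient vectors, one per sample (assumed to
    have been computed elsewhere, e.g. with per-sample autograd)."""
    m = len(per_sample_grads)
    clipped = [clip_gradient(g, C) for g in per_sample_grads]
    noisy_sum = sum(clipped) + torch.normal(0.0, sigma * C, size=params.shape)
    params.data -= lr * noisy_sum / m
    return params

# Hypothetical usage with a toy parameter vector and fake per-sample gradients.
theta = torch.zeros(5)
grads = [torch.randn(5) for _ in range(8)]
theta = dp_step(theta, grads, C=1.0, sigma=1.1, lr=0.05)

epsilon, delta_eps = 0.0, 0.01       # assumed per-iteration privacy consumption
epsilon += delta_eps                 # step 7: accumulate the iteration's cost
```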
Step 8, according to the privacy leakage risk of the released model, the gradient of the feature extraction network E is clipped. A clipping threshold C is set first; the ratio of the l_2 norm of the gradient to the clipping threshold C is compared with 1, and the larger of the two is used as the divisor, giving the second gradient:
g'_E^t(x_j^i) = g_E^t(x_j^i) / max(1, ||g_E^t(x_j^i)||_2 / C)
where g_E^t(x_j^i) in the numerator is the gradient of sample x_j^i for the feature extraction network at the t-th iteration.
Step 9, Gaussian noise perturbation is added to the second gradient clipped in step 8 to obtain the second Gaussian gradient:
ĝ_E^t = Σ_j g'_E^t(x_j^i) + N(0, σ^2 C^2 I)
where N(0, σ^2 C^2 I) is Gaussian-distributed noise, C is the clipping threshold, σ is the noise scale of the Gaussian noise, and I is the identity matrix.
Step 10, the parameters θ_E of the feature extraction network E are updated, and the iteration privacy budget of the current iteration t is calculated:
θ_E ← θ_E - (η / m) · ĝ_E^t,  ε_t = ε_{t-1} + Δε
where η is the learning rate and Δε is the privacy consumption of each iteration.
S104, once the privacy budget reaches the budget threshold, the feature extraction network and the target task network are combined to obtain the differential privacy classification model.
Specifically, the iterations continue until the privacy budget threshold is reached and the budget is used up; the multi-source differential privacy domain-adaptive model is obtained at convergence, and the finally trained differential privacy classification model, composed of the feature extraction network E and the target task network Y, is released to users.
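A hedged sketch of how the outer loop of S101 to S104 might be organized: iterate until the accumulated privacy budget reaches the budget threshold, then combine E and Y into the released classifier. The train_one_iteration callback, the budget values and the file name are placeholders, not names from the patent.

```python
import torch
import torch.nn as nn

def train_private_model(E, Y, budget_threshold, delta_eps, train_one_iteration):
    """Run private training until the privacy budget is exhausted (sketch).

    train_one_iteration is assumed to perform S101-S103: sampling, adversarial
    updates of D and Y, and the clipped, noised updates of E and Y."""
    epsilon = 0.0
    while epsilon < budget_threshold:
        train_one_iteration()
        epsilon += delta_eps                 # budget spent by this iteration

    # S104: combine feature extraction network E and target task network Y
    model = nn.Sequential(E, Y)
    torch.save(model.state_dict(), "dp_multi_source_da_classifier.pt")  # placeholder path
    return model
```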
Compared with the prior art, the invention has the following characteristics:
1. The sensitive data that need protection in multi-source domain migration based on adversarial domain-adaptive model training are analyzed, and according to the privacy leakage risk of the finally released model, it is determined which network parameter updates in the training process need to be perturbed with the differential privacy technique.
2. According to the privacy analysis, the gradients of the specific network parameter updates are first clipped and Gaussian noise is then added. This prevents gradient explosion, accelerates model convergence, and keeps the amount of added noise controllable, thereby balancing the utility and the privacy of the finally released model.
The method acquires the multi-source domain data sets, the target domain data set and the initialized network parameters, randomly samples each source domain data set and the target domain data set with a set sampling probability to obtain the source domain sample sets and the target domain sample set, inputs them into the feature extraction network and the target task network for feature extraction and updating, and normalizes the weights obtained after the iterations are completed; it updates the feature extraction network parameters based on a set gradient reversal layer; according to the privacy leakage risk, when updating the feature extraction network and the target task network it first clips the gradients with a set clipping threshold, then adds Gaussian noise, and calculates the corresponding privacy budgets; and, once the privacy budget reaches the budget threshold, it combines the feature extraction network and the target task network into a differential privacy classification model, which is stored and released, thereby ensuring privacy security in multi-source domain migration.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. An adversarial domain-adaptive differential privacy protection method in multi-source domain migration, characterized by comprising the following steps:
inputting the obtained source domain sample sets and target domain sample set into a feature extraction network and a target task network for feature extraction and updating, and normalizing the weights obtained after the iterations are completed;
updating the feature extraction network parameters based on a set gradient reversal layer;
according to the privacy leakage risk, clipping the gradients of the feature extraction network and of the target task network with a set clipping threshold, adding Gaussian noise, and calculating the corresponding privacy budgets;
and combining the feature extraction network and the target task network once the privacy budget reaches a budget threshold to obtain a differential privacy classification model.
2. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 1, wherein inputting the obtained source domain sample sets and target domain sample set into the feature extraction network and the target task network for feature extraction and updating, and normalizing the weights obtained after the iterations are completed, comprises:
inputting a source domain sample set and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into a domain classification network for adversarial learning and parameter updating, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated;
inputting the source domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the target task network for classification and parameter updating, until the source domain sample sets corresponding to all source domains have been iterated;
and calculating the weight corresponding to each source domain according to a total loss function and normalizing the weights.
3. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 2, wherein inputting a source domain sample set and the target domain sample set into the feature extraction network for feature extraction, then feeding the features into the domain classification network for adversarial learning and parameter updating, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated, comprises:
inputting the source domain sample set and the target domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the domain classification network to obtain domain prediction labels;
calculating the domain classification network loss between the domain prediction labels and the true domain labels with a negative log-likelihood function;
and optimizing and updating the domain classification network parameters according to the domain classification network loss function, until the source domain sample sets and target domain sample sets corresponding to all source domains have been iterated.
4. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 2, wherein inputting the source domain sample set into the feature extraction network for feature extraction and feeding the obtained feature vectors into the target task network for classification and parameter updating, until the source domain sample sets corresponding to all source domains have been iterated, comprises:
inputting the source domain sample set into the feature extraction network for feature extraction, and feeding the obtained feature vectors into the target task network to obtain prediction class labels;
calculating the target task network loss between the prediction class labels and the true class labels with a negative log-likelihood function;
and optimizing and updating the target task network parameters according to the target task network loss function, until the source domain sample sets corresponding to all source domains have been iterated.
5. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 2, wherein updating the feature extraction network parameters based on a set gradient reversal layer comprises:
optimizing the feature extraction network parameters with the total loss function, and taking the partial derivatives with respect to the optimized feature extraction network parameters to obtain the corresponding loss gradients;
and updating the feature extraction network parameters according to the loss gradients.
6. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 1, wherein, before the obtained source domain sample sets and target domain sample set are input into the feature extraction network and the target task network for feature extraction and updating and the weights obtained after the iterations are normalized, the method further comprises:
acquiring the multi-source domain data sets, the target domain data set and the initialized network parameters, and randomly sampling each source domain data set and the target domain data set with a set sampling probability to obtain the source domain sample sets and the target domain sample set.
7. The adversarial domain-adaptive differential privacy protection method in multi-source domain migration according to claim 1, wherein, after the feature extraction network and the target task network are combined into a differential privacy classification model once the privacy budget reaches the budget threshold, the method further comprises:
and storing and releasing the differential privacy classification model.
CN202110201597.6A 2021-02-23 2021-02-23 Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration Active CN112800471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110201597.6A CN112800471B (en) 2021-02-23 2021-02-23 Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110201597.6A CN112800471B (en) 2021-02-23 2021-02-23 Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration

Publications (2)

Publication Number Publication Date
CN112800471A true CN112800471A (en) 2021-05-14
CN112800471B CN112800471B (en) 2022-04-22

Family

ID=75815353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110201597.6A Active CN112800471B (en) 2021-02-23 2021-02-23 Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration

Country Status (1)

Country Link
CN (1) CN112800471B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114285545A (en) * 2021-12-24 2022-04-05 成都三零嘉微电子有限公司 Side channel attack method and system based on convolutional neural network
CN114285545B (en) * 2021-12-24 2024-05-17 成都三零嘉微电子有限公司 Side channel attack method and system based on convolutional neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2469598A1 (en) * 2004-06-01 2005-12-01 Daniel W. Onischuk Computerized voting system
CN109919934A (en) * 2019-03-11 2019-06-21 重庆邮电大学 A kind of liquid crystal display panel defect inspection method based on the study of multi-source domain depth migration
CN110602099A (en) * 2019-09-16 2019-12-20 广西师范大学 Privacy protection method based on verifiable symmetric searchable encryption
CN110647765A (en) * 2019-09-19 2020-01-03 济南大学 Privacy protection method and system based on knowledge migration under collaborative learning framework
CN110969243A (en) * 2019-11-29 2020-04-07 支付宝(杭州)信息技术有限公司 Method and device for training countermeasure generation network for preventing privacy leakage
CN111091193A (en) * 2019-10-31 2020-05-01 武汉大学 Domain-adapted privacy protection method based on differential privacy and oriented to deep neural network
CN112203282A (en) * 2020-08-28 2021-01-08 中国科学院信息工程研究所 5G Internet of things intrusion detection method and system based on federal transfer learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2469598A1 (en) * 2004-06-01 2005-12-01 Daniel W. Onischuk Computerized voting system
CN109919934A (en) * 2019-03-11 2019-06-21 重庆邮电大学 A kind of liquid crystal display panel defect inspection method based on the study of multi-source domain depth migration
CN110602099A (en) * 2019-09-16 2019-12-20 广西师范大学 Privacy protection method based on verifiable symmetric searchable encryption
CN110647765A (en) * 2019-09-19 2020-01-03 济南大学 Privacy protection method and system based on knowledge migration under collaborative learning framework
CN111091193A (en) * 2019-10-31 2020-05-01 武汉大学 Domain-adapted privacy protection method based on differential privacy and oriented to deep neural network
CN110969243A (en) * 2019-11-29 2020-04-07 支付宝(杭州)信息技术有限公司 Method and device for training countermeasure generation network for preventing privacy leakage
CN112203282A (en) * 2020-08-28 2021-01-08 中国科学院信息工程研究所 5G Internet of things intrusion detection method and system based on federal transfer learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BRUCE GU et al.: "Context-aware Privacy Preservation in a Hierarchical Fog Computing System", published online at: HTTPS://IEEEXPLORE.IEEE.ORG/STAMP/STAMP.JSP?TP=&ARNUMBER=8761455 *
YU Huanhuan et al.: "Multi-source domain adaptation with adaptive transfer of model parameters", Computing Technology and Automation *
WANG Jinyan et al.: "Key pattern mining over data streams with differential privacy", Journal of Software *


Also Published As

Publication number Publication date
CN112800471B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
Fan et al. Learning what data to learn
CN110048827B (en) Class template attack method based on deep learning convolutional neural network
CN111126386A (en) Sequence field adaptation method based on counterstudy in scene text recognition
CN110929848B (en) Training and tracking method based on multi-challenge perception learning model
CN111242157A (en) Unsupervised domain self-adaption method combining deep attention feature and conditional opposition
Wang et al. Distilling knowledge from an ensemble of convolutional neural networks for seismic fault detection
CN113128701A (en) Sample sparsity-oriented federal learning method and system
CN113222068B (en) Remote sensing image multi-label classification method based on adjacency matrix guidance label embedding
CN111460157A (en) Cyclic convolution multitask learning method for multi-field text classification
CN113469186A (en) Cross-domain migration image segmentation method based on small amount of point labels
CN114417427A (en) Deep learning-oriented data sensitivity attribute desensitization system and method
CN113283524A (en) Anti-attack based deep neural network approximate model analysis method
Qin et al. Making deep neural networks robust to label noise: Cross-training with a novel loss function
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
Metawa et al. Internet of things enabled financial crisis prediction in enterprises using optimal feature subset selection-based classification model
Fan et al. Fast model update for iot traffic anomaly detection with machine unlearning
Kumar et al. A unified framework for optimization-based graph coarsening
CN114036308A (en) Knowledge graph representation method based on graph attention neural network
CN112800471B (en) Countermeasure domain self-adaptive differential privacy protection method in multi-source domain migration
Royston et al. A linguistic decision tree approach to predicting storm surge
Liang et al. Differentially private federated learning with Laplacian smoothing
Xu et al. An efficient channel-level pruning for CNNs without fine-tuning
CN111401155B (en) Image recognition method of residual error neural network based on implicit Euler jump connection
CN115952343A (en) Social robot detection method based on multi-relation graph convolutional network
CN113516199B (en) Image data generation method based on differential privacy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant