CN111144500A - Differential privacy deep learning classification method based on analytic Gaussian mechanism

Info

Publication number: CN111144500A
Application number: CN201911388912.XA
Authority: CN (China)
Prior art keywords: privacy, training, loss function, training data, classification
Priority/filing date: 2019-12-30
Publication date: 2020-05-12
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 朱笑岩, 赵智城, 马建峰
Current and original assignee: Xidian University

Classifications

    • G06F18/24: Pattern recognition; classification techniques
    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045: Neural networks; architectures, e.g. interconnection topology; combinations of networks
    • G06N3/084: Neural networks; learning methods; backpropagation, e.g. using gradient descent

Abstract

The invention discloses a differential privacy deep learning classification method based on an analytic Gaussian mechanism, which mainly solves the problem that computing the total privacy budget is complex in the prior art. The scheme is as follows: obtain classification data, and perturb the input layer with the analytic Gaussian mechanism so that the input layer satisfies differential privacy; construct hidden layers after the input layer, activate them with an activation function, and limit the activation results with local response normalization; select the cross-entropy function as the loss function and perturb it with analytic Gaussian noise so that it satisfies differential privacy, forming a differential privacy deep learning classification model; train the differential privacy deep learning classification model with stochastic gradient descent to obtain a trained classification model; and input the test data set into the trained classification model to obtain the classification result. The overall privacy budget of the model does not grow with the training process and can be allocated before model training, so its calculation is simple, and the method can be used for information security.

Description

Differential privacy deep learning classification method based on analytic Gaussian mechanism
Technical Field
The invention belongs to the technical field of computers, and further relates to a differential privacy deep learning classification method based on an analytic Gaussian mechanism, which can be used for information security.
Background
In recent years, deep learning based on artificial neural networks has developed rapidly, and in the foreseeable future the development of artificial intelligence will exceed expectations. However, the massive amount of training data required for deep learning raises significant privacy concerns. Protecting the privacy of deep learning models safeguards sensitive information while further promoting the large-scale application of deep learning, and therefore has very important theoretical and practical significance.
Differential privacy is a relatively novel and practical privacy protection technology, mainly used in scenarios where private information is collected and released. Apple's iOS system and Google's Chrome browser both use differential privacy techniques for user data collection and statistics. Machine learning algorithms based on differential privacy introduce random perturbation into the machine learning model, so that the model obtains differential privacy protection at the cost of reduced accuracy and resists membership inference attacks and gradient inversion attacks.
In 2016, Abadi M et al., in the paper "Deep Learning with Differential Privacy" published at the ACM SIGSAC Conference on Computer and Communications Security, trained deep learning classification models using a differentially private stochastic gradient descent method, and tracked the overall privacy budget of the model with the moments accountant method. This approach requires an additional computation procedure, and the budget is only obtained after the classification model has been trained, so the privacy budget calculation is complex.
Summary of the invention
The invention aims to provide a differential privacy deep learning classification method based on an analytic Gaussian mechanism which, addressing the shortcomings of the prior art, simplifies the calculation of the overall privacy budget of the classification model.
The technical scheme of the invention is to use a differentially private analytic Gaussian noise mechanism to perturb the data input stage and the loss function computation stage of a basic deep learning classification model, so that the model as a whole satisfies the differential privacy guarantee; the differential privacy budget, i.e., the degree of privacy protection of the deep learning classification model, can therefore be determined before the model is trained. The implementation steps are as follows:
(1) acquire a classification data set from an open website, perform normalization preprocessing on it, and divide the preprocessed data into a training data set and a test data set at a ratio of 6:1;
(2) randomly perturb the training data using the analytic Gaussian mechanism to obtain a perturbed input layer that satisfies the differential privacy guarantee;
(3) sequentially connect n hidden layers after the perturbed input layer, where each hidden layer uses the ReLU activation function for nonlinear activation and uses local response normalization to limit the activation results to [0,1], with n ≥ 1;
(4) select the cross-entropy function as the loss function, expand the loss function with a Taylor series, and perturb the polynomial coefficients of the expanded loss function with analytic Gaussian noise so that the loss function satisfies differential privacy;
(5) train the differential privacy deep learning classification model formed by steps (1) to (4) with the stochastic gradient descent algorithm to obtain a trained classification model;
(6) input the test data set into the trained classification model for testing to obtain the classification result.
The invention has the following advantages:
1) Because the overall privacy budget does not grow with the model training process, it can be allocated before model training, which makes its calculation simpler.
2) Compared with the classical differential privacy Gaussian mechanism, the differentially private analytic Gaussian mechanism used by the invention achieves the same degree of differential privacy protection with a smaller perturbation.
Description of the drawings:
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a simulation diagram of classification accuracy under different privacy budgets according to the present invention.
Detailed Description
Embodiments and effects of the present invention will be described in further detail below with reference to the accompanying drawings.
The method is based on a basic deep learning classification model: the input layer and the loss function of the model are perturbed with the differentially private analytic Gaussian mechanism, and through the composition of the two perturbations the classification model satisfies differential privacy as a whole, that is, the desired system is approximated through functions of bounded sensitivity.
Referring to fig. 1, the implementation steps of this example are as follows:
step one, a training data set and a testing data set are obtained.
1.1) Acquire an original classification data set and perform normalization preprocessing on it:
Let $D$ denote a data set containing $n$ classification records $x_1, x_2, \ldots, x_i, \ldots, x_n$. Each record contains $d+1$ features, respectively $A_1, A_2, \ldots, A_j, \ldots, A_d, B$, where $A_j$ denotes an attribute feature of the classification data and $B$ denotes the class label; $x_i$ denotes the $i$-th record, $x_i = (x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots, x_{id}, y_i)$, where $x_{ij}$ denotes the $j$-th attribute of the $i$-th record, $y_i$ denotes the label of the $i$-th record, $i \in [1, n]$, $j \in [1, d]$, and $d$ denotes the number of attributes.
1.2) Normalize the classification data $x_i$ to obtain the normalized classification data:
$$\bar{x}_{ij} = \frac{x_{ij} - \alpha_j}{\beta_j - \alpha_j},$$
so that $\bar{x}_{ij} \in [0, 1]$, where $\alpha_j$ denotes the minimum and $\beta_j$ the maximum of the $j$-th classification data attribute. The label $y_i$ is converted into a vector (one-hot encoding), denoted $y_i = \{y_{i1}, \ldots, y_{iM}\}$, where $M$ denotes the number of classes.
1.3) Divide the normalized classification data into a training data set and a test data set at a ratio of 6:1.
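As a concrete illustration, the following is a minimal Python sketch of the preprocessing in steps 1.1) to 1.3) (Python is the language used in the experiments below). The per-attribute min-max constants $\alpha_j$, $\beta_j$ and the 6:1 split follow the text; the function names, the small constant guarding against zero range, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def preprocess(X, y, num_classes):
    """Min-max normalize attributes to [0,1] and one-hot encode labels."""
    alpha = X.min(axis=0)                    # alpha_j: per-attribute minimum
    beta = X.max(axis=0)                     # beta_j: per-attribute maximum
    X_bar = (X - alpha) / (beta - alpha + 1e-12)
    Y = np.eye(num_classes)[y]               # y_i -> {y_i1, ..., y_iM}
    return X_bar, Y

def split_6_to_1(X, Y, seed=0):
    """Split the preprocessed data into training and test sets at a 6:1 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = len(X) * 6 // 7                    # 6 parts training, 1 part test
    return X[idx[:cut]], Y[idx[:cut]], X[idx[cut:]], Y[idx[cut:]]
```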
Step two, randomly perturb the training data using the analytic Gaussian mechanism to obtain a perturbed input layer that satisfies the differential privacy guarantee.
This example is based on a deep learning classification model composed of an input layer, hidden layers and a loss function, where the input layer and the hidden layers consist of a number of neurons, and the affine transformation of each neuron can be expressed as:
$$h = W^{T}x + b,$$
where $W$ denotes the weight parameters, $b$ denotes the static bias, and $T$ denotes the transpose.
the specific implementation of this step is as follows:
2.1) Compute the $L_2$ sensitivity $\Delta_2 h_L$ of the input layer $h_0$:
$$\Delta_2 h_L = \max_{L, L'} \Bigl\| \sum_{x_i \in L} x_i - \sum_{x_i' \in L'} x_i' \Bigr\|_2 = \max_{x_n,\, x_n'} \sqrt{\sum_{j=1}^{d} \bigl( x_{nj} - x'_{nj} \bigr)^2},$$
where $L$ denotes a training batch, $L'$ denotes a training batch differing from $L$ in only one training record, $x_n$ denotes the record of $L$ that differs from $L'$, $x_n'$ denotes the record of $L'$ that differs from $L$, $x_{nj}$ denotes the $j$-th feature of $x_n$, $x'_{nj}$ denotes the $j$-th feature of $x_n'$, $d$ denotes the number of features of the training data, $x_i$ denotes a training record in $L$, $x_i'$ denotes a training record in $L'$, and $\|\cdot\|_2$ denotes the $L_2$ norm.
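For intuition: with all features normalized to $[0,1]$ as in step one, the worst case of the formula above is a pair of records differing by 1 in every feature, giving $\Delta_2 h_L \le \sqrt{d}$. A tiny numeric check of this bound (an illustrative sketch, not part of the patent):

```python
import numpy as np

d = 784                                   # e.g. 28 x 28 input features
x_n = np.zeros(d)                         # differing record in batch L
x_n_prime = np.ones(d)                    # differing record in batch L'
sens = np.linalg.norm(x_n - x_n_prime)    # L2 distance of the differing pair
print(sens, np.sqrt(d))                   # both print 28.0: the sqrt(d) bound is tight
```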
2.2) Compute the perturbation Gaussian noise parameter $\sigma_1$ using the analytic Gaussian mechanism:
According to the differential privacy composition theorem, in order for $h_0$ to satisfy $(\varepsilon_1, \delta_1)$-differential privacy, each neuron in $h_0$ must satisfy $(\varepsilon_1/|h_0|,\ \delta_1/|h_0|)$-differential privacy, where $|h_0|$ denotes the number of neurons in $h_0$, $\varepsilon_1$ and $\delta_1$ denote the pre-allocated first pair of privacy budgets, and $(\varepsilon_1, \delta_1)$-differential privacy means differential privacy with budgets $\varepsilon_1$ and $\delta_1$ respectively.
Given the sensitivity $\Delta_2 h_L$ and the pre-allocated first pair of privacy budgets $\varepsilon_1$ and $\delta_1$, the perturbation Gaussian noise parameter $\sigma_1$ is then computed through the analytic Gaussian mechanism with $\varepsilon = \varepsilon_1/|h_0|$, $\delta = \delta_1/|h_0|$ and $\Delta = \Delta_2 h_L$:
2.2.1) Compute the cut-off point
$$\delta_0 = \Phi(0) - e^{\varepsilon}\, \Phi\bigl(-\sqrt{2\varepsilon}\bigr),$$
where $\Phi(t) = \tfrac{1}{2}\bigl(1 + \operatorname{erf}(t/\sqrt{2})\bigr)$ denotes the standard Gaussian cumulative distribution function, evaluated at $t = 0$ or $t = -\sqrt{2\varepsilon}$.
2.2.2) Compare $\delta$ with the cut-off point $\delta_0$:
If $\delta \ge \delta_0$, define
$$B^{+}_{\varepsilon}(v) = \Phi\bigl(\sqrt{\varepsilon v}\bigr) - e^{\varepsilon}\, \Phi\bigl(-\sqrt{\varepsilon (v+2)}\bigr),$$
compute $v^{*} = \sup\{ v \ge 0 : B^{+}_{\varepsilon}(v) \le \delta \}$, and obtain
$$\alpha = \sqrt{1 + v^{*}/2} - \sqrt{v^{*}/2}.$$
If $\delta < \delta_0$, define
$$B^{-}_{\varepsilon}(u) = \Phi\bigl(-\sqrt{\varepsilon u}\bigr) - e^{\varepsilon}\, \Phi\bigl(-\sqrt{\varepsilon (u+2)}\bigr),$$
compute $u^{*} = \inf\{ u \ge 0 : B^{-}_{\varepsilon}(u) \le \delta \}$, and obtain
$$\alpha = \sqrt{1 + u^{*}/2} + \sqrt{u^{*}/2}.$$
2.2.3) From the above result, compute the perturbation Gaussian noise parameter:
$$\sigma = \frac{\alpha \Delta}{\sqrt{2\varepsilon}}.$$
This calculation is denoted as $\sigma_1 = \mathrm{AGM}\bigl(\Delta_2 h_L,\ \varepsilon_1/|h_0|,\ \delta_1/|h_0|\bigr)$.
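To make step 2.2) concrete, the following is a minimal Python sketch of the $\mathrm{AGM}(\Delta, \varepsilon, \delta)$ calibration above, following Algorithm 1 of Balle and Wang's analytic Gaussian mechanism, which the notation here appears to match; the use of SciPy and the root-search bracket $[0, 5000]$ are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def analytic_gaussian_sigma(delta_f, eps, delta):
    """Calibrate sigma = AGM(delta_f, eps, delta) for the analytic Gaussian mechanism."""
    # Cut-off point: delta0 = Phi(0) - e^eps * Phi(-sqrt(2*eps))
    delta0 = norm.cdf(0.0) - np.exp(eps) * norm.cdf(-np.sqrt(2.0 * eps))
    if delta >= delta0:
        # Solve B+(v) = delta; B+ increases from delta0 toward 1, so a root exists
        b_plus = lambda v: (norm.cdf(np.sqrt(eps * v))
                            - np.exp(eps) * norm.cdf(-np.sqrt(eps * (v + 2.0))) - delta)
        v_star = brentq(b_plus, 0.0, 5000.0)
        alpha = np.sqrt(1.0 + v_star / 2.0) - np.sqrt(v_star / 2.0)
    else:
        # Solve B-(u) = delta; B- decreases from delta0 toward 0
        b_minus = lambda u: (norm.cdf(-np.sqrt(eps * u))
                             - np.exp(eps) * norm.cdf(-np.sqrt(eps * (u + 2.0))) - delta)
        u_star = brentq(b_minus, 0.0, 5000.0)
        alpha = np.sqrt(1.0 + u_star / 2.0) + np.sqrt(u_star / 2.0)
    return alpha * delta_f / np.sqrt(2.0 * eps)
```

For example, `analytic_gaussian_sigma(sens, eps1 / h0_size, delta1 / h0_size)` would reproduce $\sigma_1 = \mathrm{AGM}(\Delta_2 h_L,\ \varepsilon_1/|h_0|,\ \delta_1/|h_0|)$ for hypothetical budgets `eps1`, `delta1` and input-layer size `h0_size`.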
2.3) At the input layer $h_0$, apply Gaussian perturbation noise $\frac{1}{|L|}\mathcal{N}\bigl(0, \sigma_1^{2}\bigr)$ to all input features of each training record in the training batch $L$, so that $h_0$ satisfies $(\varepsilon_1, \delta_1)$-differential privacy, where $|L|$ denotes the number of training records in the training batch $L$ and $\mathcal{N}$ denotes the Gaussian distribution probability density function.
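Continuing the sketch, the input-layer perturbation of step 2.3) might look as follows; the $1/|L|$ scaling mirrors the noise term above, and the values in the usage comment are hypothetical.

```python
def perturb_input_batch(X_batch, sigma1, rng):
    """Add (1/|L|) * N(0, sigma1^2) noise to every feature of every record in batch L."""
    noise = rng.normal(0.0, sigma1, size=X_batch.shape) / len(X_batch)
    return X_batch + noise

# Example usage with hypothetical budgets eps1, delta1 and input-layer size h0_size:
# sigma1 = analytic_gaussian_sigma(sens, eps1 / h0_size, delta1 / h0_size)
# X_noisy = perturb_input_batch(X_train[:600], sigma1, np.random.default_rng(0))
```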
Step three, sequentially construct several hidden layers and apply local response normalization, limiting the results to [0,1].
On the basis of the input layer $h_0$ that satisfies differential privacy, the subsequent hidden layers are constructed; the rectified linear unit ReLU is used as the activation function of each hidden layer, and the result is limited to $[0,1]$ through local response normalization (LRN).
Each hidden layer is either a fully connected layer or a convolutional layer.
The specific implementation of this step is as follows:
3.1) In a fully connected layer, given a training record $x_i$ in $L$, the value $h_{ik}$ of a single perturbed neuron is normalized by local response normalization as:
$$\bar{h}_{ik} = \frac{h_{ik} - \chi}{\bar{\chi} - \chi},$$
so that $\bar{h}_{ik}$ is limited to $[0, 1]$, where $\bar{\chi}$ and $\chi$ denote the maximum and minimum values, respectively, among all perturbed neurons, $L$ is the training batch, and $W$ is the connection weight matrix.
3.2) In a convolutional layer, the perturbed pixel value $a^{k}_{ij}$ at position $(i, j)$ of the $k$-th feature map is normalized by local response normalization as:
$$b^{k}_{ij} = a^{k}_{ij} \Bigl/ \Bigl( q + \alpha \sum_{m=\max(0,\, k-l/2)}^{\min(N-1,\, k+l/2)} \bigl( a^{m}_{ij} \bigr)^{2} \Bigr),$$
so that $b^{k}_{ij}$ is limited to $[0, 1]$, where $N$ is the total number of feature maps and $q$, $l$, $\alpha$ are hyperparameters taking different values.
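A minimal sketch of the two normalizations in steps 3.1) and 3.2), under the reconstruction above; the exact cross-map LRN form (in particular the absence of an exponent) is recovered from the three hyperparameters $q$, $l$, $\alpha$ named in the text, and the default values in the signature are illustrative assumptions.

```python
import numpy as np

def lrn_fully_connected(h):
    """Min-max local response normalization over all perturbed neurons -> [0,1]."""
    chi_min, chi_max = h.min(), h.max()
    return (h - chi_min) / (chi_max - chi_min + 1e-12)

def lrn_conv(a, q=2.0, l=5, alpha=1e-4):
    """Cross-map normalization of perturbed feature maps a with shape (N, H, W)."""
    N = a.shape[0]
    b = np.empty_like(a)
    for k in range(N):
        lo, hi = max(0, k - l // 2), min(N - 1, k + l // 2)
        b[k] = a[k] / (q + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0))
    return b
```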
Step four, expand the loss function with a Taylor series and apply analytic Gaussian mechanism perturbation to the polynomial coefficients of the expanded loss function, so that the loss function satisfies differential privacy.
4.1) Select the loss function:
The cross-entropy function is selected as the loss function of the classification model. For a classification model with $M$ output variables, the loss on a single record is expressed as:
$$\mathcal{L}(\theta, x_i) = -\sum_{l=1}^{M} \bigl[ y_{il} \log \hat{y}_{il} + (1 - y_{il}) \log (1 - \hat{y}_{il}) \bigr],$$
where $h_{ik}$ denotes the state obtained for a training record $x_i$ at the last normalized hidden layer $\eta_k$, $\hat{y}_{il}$ denotes the computed output variable corresponding to output result $l$, $y_{il}$ denotes the true output variable corresponding to output result $l$, $W_{l(k)}$ denotes the connection weight matrix, and $\theta$ denotes the classification model parameters.
4.2) Perform a Taylor expansion of the loss function and keep the first three terms to obtain the expanded loss function $\hat{\mathcal{L}}(\theta, x_i)$:
$$\hat{\mathcal{L}}(\theta, x_i) = \sum_{l=1}^{M} \sum_{r=0}^{2} \frac{f_{1l}^{(r)}(0) + f_{2l}^{(r)}(0)}{r!}\, z_{il}^{\,r},$$
where $z_{il} = h_{i(k)} W_{l(k)}$, $f_{1l}(z) = y_{il} \log\bigl(1 + e^{-z}\bigr)$ and $f_{2l}(z) = (1 - y_{il}) \log\bigl(1 + e^{z}\bigr)$.
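For reference, the retained coefficients can be written out explicitly; the following worked expansion is derived from the definitions of $f_{1l}$ and $f_{2l}$ above, not quoted from the patent.

```latex
% Derivatives at z = 0:
%   f_{1l}(0) = y_{il}\log 2,       f_{1l}'(0) = -y_{il}/2,       f_{1l}''(0) = y_{il}/4
%   f_{2l}(0) = (1-y_{il})\log 2,   f_{2l}'(0) = (1-y_{il})/2,    f_{2l}''(0) = (1-y_{il})/4
% Summing the three retained terms for each output l:
\hat{\mathcal{L}}(\theta, x_i)
  = \sum_{l=1}^{M}\Bigl[\log 2
  + \bigl(\tfrac{1}{2} - y_{il}\bigr)\, z_{il}
  + \tfrac{1}{8}\, z_{il}^{2}\Bigr],
\qquad z_{il} = h_{i(k)} W_{l(k)}.
```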
4.3) Compute the sensitivity of the expanded loss function $\hat{\mathcal{L}}$:
Since the noise perturbation of step one alone cannot make the classification model satisfy differential privacy as a whole, a second perturbation must be composed with it. In order for the loss function to satisfy $(\varepsilon_2, \delta_2)$-differential privacy, Gaussian noise is applied through the expanded loss function $\hat{\mathcal{L}}$, where $\varepsilon_2$ and $\delta_2$ are the pre-allocated second pair of privacy budgets.
In the expression of the expanded loss function $\hat{\mathcal{L}}$, $h_{ik} \in [0,1]$ and $y_{il} \in [0,1]$, from which the $L_2$ sensitivity $\Delta_2'$ of its polynomial coefficients is calculated:
$$\Delta_2' = 2\max_{x_i}\sum_{l=1}^{M}\Bigl(\sum_{k}\bigl|\tfrac{1}{2}-y_{il}\bigr|\,h_{ik} + \tfrac{1}{8}\sum_{k,k'} h_{ik}h_{ik'}\Bigr) \le M\Bigl(|\eta_{(k)}| + \tfrac{1}{4}|\eta_{(k)}|^{2}\Bigr),$$
where $|\eta_{(k)}|$ denotes the number of neurons in the hidden layer $\eta_k$.
4.4) Apply analytic Gaussian mechanism noise to the expanded loss function $\hat{\mathcal{L}}$ so that it satisfies differential privacy:
From the second pair of privacy budgets $\varepsilon_2$ and $\delta_2$ and the sensitivity $\Delta_2'$ of $\hat{\mathcal{L}}$, the perturbation Gaussian noise parameter is computed through the analytic Gaussian mechanism:
$$\sigma_2 = \mathrm{AGM}\bigl(\Delta_2',\ \varepsilon_2/M,\ \delta_2/M\bigr).$$
Gaussian perturbation noise distributed as $\mathcal{N}(0, \sigma_2^{2})$ is applied to the coefficients of the expanded loss function $\hat{\mathcal{L}}$, so that the loss function as a whole satisfies $(\varepsilon_2, \delta_2)$-differential privacy. According to the differential privacy composition theorem and the differential privacy post-processing theorem, the classification model as a whole satisfies $(\varepsilon_1 + \varepsilon_2,\ \delta_1 + \delta_2)$-differential privacy.
Steps one to four together form the differential privacy deep learning classification model.
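Under the reconstruction above, perturbing the polynomial coefficients might be sketched as follows; the coefficient layout (one linear coefficient per hidden neuron and output plus a quadratic coefficient matrix, following the worked expansion) is an assumption rather than the patent's exact data structure.

```python
def perturb_loss_coefficients(H, Y, sigma2, rng):
    """Perturb the Taylor-polynomial coefficients of the expanded cross-entropy loss.

    H: (batch, n_hidden) normalized last-hidden-layer states h_ik in [0,1]
    Y: (batch, M) one-hot labels y_il
    Returns noisy per-output linear coefficients and quadratic coefficients.
    """
    lin = (0.5 - Y).T @ H          # linear terms: sum_i (1/2 - y_il) h_ik, shape (M, n_hidden)
    quad = (H.T @ H) / 8.0         # quadratic terms: (1/8) sum_i h_ik h_ik'
    lin_noisy = lin + rng.normal(0.0, sigma2, size=lin.shape)
    quad_noisy = quad + rng.normal(0.0, sigma2, size=quad.shape)
    return lin_noisy, quad_noisy
```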
Step five, train the differential privacy deep learning classification model using the stochastic gradient descent algorithm.
5.1) Using the training data set, randomly select a training batch, compute the gradient of the classification model's loss function with the backpropagation algorithm, and update the weights in the direction opposite to the gradient;
5.2) Repeat 5.1), performing random training-batch selection and weight updates multiple times until the classification accuracy is stable, to obtain the trained classification model; the weight update is expressed as:
$$\theta_{t+1} = \theta_t - \eta_t \cdot \frac{1}{|L|} \sum_{x_i \in L} \nabla_{\theta}\, \bar{\mathcal{L}}(\theta_t, x_i),$$
where $\bar{\mathcal{L}}$ denotes the overall loss function after perturbation, $\eta_t$ denotes the learning rate at step $t$, $\theta_t$ denotes the model weights at step $t$, and $L$ denotes the training batch.
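A compact sketch of the training loop in step five under the assumptions above; `grad_fn` stands in for backpropagation through the perturbed Taylor loss and is hypothetical, as are the default hyperparameter values.

```python
import numpy as np

def train(theta, X_train, Y_train, grad_fn, lr=0.1, batch_size=600, steps=1000, seed=0):
    """Plain SGD on the perturbed loss: theta <- theta - lr * mean batch gradient."""
    rng = np.random.default_rng(seed)
    for t in range(steps):
        idx = rng.choice(len(X_train), size=batch_size, replace=False)  # random batch L
        grad = grad_fn(theta, X_train[idx], Y_train[idx])  # backprop through noisy loss
        theta = theta - lr * grad
    return theta
```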
Step six, input the test data set into the trained classification model to obtain the classification result.
The effects of the invention are further illustrated below through a simulation experiment:
1. Experimental operating conditions
The language used in the experiments is Python, and the operating environment is a computer with a 3.2 GHz CPU and 16 GB of memory.
The classification model of the invention is provided with two convolutional layers, with 32 and 64 feature maps respectively. The rectified linear unit ReLU is used as the activation function, and the cross-entropy function is used as the loss function.
The MNIST picture data set consists of 60,000 training examples and 10,000 test examples. Each example is a 28 x 28 grayscale image of a handwritten digit (0-9).
2. Contents and results of the experiments
Under the above conditions, a simulation experiment of handwritten digit classification was carried out on the MNIST picture data set using the method of the invention; the result is shown in FIG. 2.
As can be seen from FIG. 2, as the privacy budget decreases, the privacy protection effect of the model strengthens and the perturbation of the model increases, so the classification accuracy gradually decreases. When the privacy budget is larger, the classification accuracy of the privacy-protected model is close to that of the reference model without privacy protection.

Claims (6)

1. A differential privacy deep learning classification method based on an analytic Gaussian mechanism, characterized by comprising the following steps:
(1) acquiring classification data from an open website, performing normalization preprocessing on the classification data, and dividing the preprocessed data into a training data set and a test data set at a ratio of 6:1;
(2) randomly perturbing the training data using the analytic Gaussian mechanism to obtain a perturbed input layer that satisfies the differential privacy guarantee;
(3) sequentially connecting n hidden layers after the perturbed input layer, wherein each hidden layer uses the ReLU activation function for nonlinear activation and uses local response normalization to limit the activation results to [0,1], with n ≥ 1;
(4) selecting the cross-entropy function as the loss function, expanding the loss function with a Taylor series, and perturbing the polynomial coefficients of the expanded loss function with analytic Gaussian noise so that the loss function satisfies differential privacy;
(5) training the differential privacy deep learning classification model formed by steps (1) to (4) with the stochastic gradient descent algorithm to obtain a trained classification model;
(6) inputting the test data set into the trained classification model for testing to obtain the classification result.
2. The method according to claim 1, wherein in (2) the training data is randomly perturbed using the analytic Gaussian mechanism to obtain a perturbed input layer satisfying the differential privacy guarantee, implemented as follows:
(2a) compute the $L_2$ sensitivity $\Delta_2$ of the input layer $h_0$:
$$\Delta_2 = \max_{L, L'} \Bigl\| \sum_{x_i \in L} x_i - \sum_{x_i' \in L'} x_i' \Bigr\|_2 = \max_{x_n,\, x_n'} \sqrt{\sum_{j=1}^{d} \bigl( x_{nj} - x'_{nj} \bigr)^2},$$
where $L$ denotes a training batch, $L'$ denotes a training batch differing from $L$ in only one training record, $x_n$ denotes the record of $L$ that differs from $L'$, $x_n'$ denotes the record of $L'$ that differs from $L$, $x_{nj}$ denotes the $j$-th feature of $x_n$, $x'_{nj}$ denotes the $j$-th feature of $x_n'$, $d$ denotes the number of features of the training data, $x_i$ denotes a training record in $L$, $x_i'$ denotes a training record in $L'$, and $\|\cdot\|_2$ denotes the $L_2$ norm;
(2b) guarantee that the input layer $h_0$ satisfies differential privacy using the analytic Gaussian mechanism:
according to the known pre-allocated first pair of privacy budgets $\varepsilon_1$ and $\delta_1$, distribute the privacy budget evenly over the input layer $h_0$ and compute the perturbation Gaussian noise parameter through the analytic Gaussian mechanism, $\sigma_1 = \mathrm{AGM}\bigl(\Delta_2,\ \varepsilon_1/|h_0|,\ \delta_1/|h_0|\bigr)$, where $|h_0|$ denotes the number of neurons in the input layer $h_0$ and AGM denotes the analytic Gaussian mechanism;
apply Gaussian noise distributed as $\frac{1}{|L|}\mathcal{N}\bigl(0, \sigma_1^{2}\bigr)$ to all input features of each training record in a training batch $L$, so that the input layer $h_0$ satisfies differential privacy, where $|L|$ denotes the number of training records in the training batch $L$ and $\mathcal{N}$ denotes the Gaussian distribution probability density function.
3. The method of claim 1, wherein the activation function of (3) is expressed as follows:
g(x)=max(0,x)。
4. The method of claim 1, wherein in (3) local response normalization is used to limit the result of the activation function to [0,1], as follows:
(3a) in a fully connected layer, given a training record $x_i$ in $L$, the value $h_{ik}$ of a single perturbed neuron is normalized by local response normalization as:
$$\bar{h}_{ik} = \frac{h_{ik} - \chi}{\bar{\chi} - \chi},$$
so that $\bar{h}_{ik}$ is limited to $[0,1]$, where $\bar{\chi}$ and $\chi$ are respectively the maximum and minimum values among all perturbed neurons, $L$ is a training batch, and $W$ is the connection weight matrix;
(3b) in a convolutional layer, the perturbed pixel value $a^{k}_{ij}$ at position $(i,j)$ of the $k$-th feature map is normalized by local response normalization as:
$$b^{k}_{ij} = a^{k}_{ij} \Bigl/ \Bigl( q + \alpha \sum_{m=\max(0,\, k-l/2)}^{\min(N-1,\, k+l/2)} \bigl( a^{m}_{ij} \bigr)^{2} \Bigr),$$
so that $b^{k}_{ij}$ is limited to $[0,1]$, where $N$ is the total number of feature maps and $q$, $l$, $\alpha$ are hyperparameters.
5. The method according to claim 1, characterized in that (4) is implemented as follows:
(4a) the cross-entropy function is selected as the loss function of the classification model; for a classification model with $M$ output variables, the loss on a single record is expressed as:
$$\mathcal{L}(\theta, x_i) = -\sum_{l=1}^{M}\bigl[ y_{il}\log \hat{y}_{il} + (1-y_{il})\log(1-\hat{y}_{il}) \bigr],$$
where $h_{ik}$ denotes the state obtained for a training record $x_i$ at the last normalized hidden layer $\eta_k$, $\hat{y}_{il}$ denotes the computed output variable corresponding to output result $l$, $y_{il}$ denotes the true output variable corresponding to output result $l$, $W_{l(k)}$ denotes the connection weight matrix, and $\theta$ denotes the classification model parameters;
(4b) the single loss function is expanded as a Taylor series and its first three terms are taken, giving the expanded loss function:
$$\hat{\mathcal{L}}(\theta, x_i) = \sum_{l=1}^{M}\sum_{r=0}^{2} \frac{f_{1l}^{(r)}(0)+f_{2l}^{(r)}(0)}{r!}\, z_{il}^{\,r},$$
where $z_{il} = h_{i(k)}W_{l(k)}$, $f_{1l}(z) = y_{il}\log(1+e^{-z})$ and $f_{2l}(z) = (1-y_{il})\log(1+e^{z})$;
(4c) in the expression of the expanded loss function $\hat{\mathcal{L}}$, $h_{ik}\in[0,1]$ and $y_{il}\in[0,1]$, from which the sensitivity $\Delta_2'$ of the polynomial coefficients of $\hat{\mathcal{L}}$ is calculated:
$$\Delta_2' = 2\max_{x_i}\sum_{l=1}^{M}\Bigl(\sum_{k}\bigl|\tfrac{1}{2}-y_{il}\bigr|\,h_{ik} + \tfrac{1}{8}\sum_{k,k'} h_{ik}h_{ik'}\Bigr) \le M\Bigl(|\eta_{(k)}| + \tfrac{1}{4}|\eta_{(k)}|^{2}\Bigr);$$
(4d) according to the known pre-allocated second pair of privacy budgets $\varepsilon_2$ and $\delta_2$ and the sensitivity $\Delta_2'$ of $\hat{\mathcal{L}}$, the perturbation Gaussian noise parameter is computed through the analytic Gaussian mechanism, $\sigma_2 = \mathrm{AGM}\bigl(\Delta_2',\ \varepsilon_2/M,\ \delta_2/M\bigr)$, and Gaussian noise distributed as $\mathcal{N}(0, \sigma_2^{2})$ is applied to all polynomial coefficients of the loss function $\hat{\mathcal{L}}$, ensuring that differential privacy is satisfied, where $\mathcal{N}$ denotes the Gaussian distribution probability density function, $M$ denotes the number of classes, and AGM denotes the analytic Gaussian mechanism.
6. The method of claim 1, wherein in (5) the differential privacy deep learning classification model is trained using the stochastic gradient descent algorithm, implemented as follows:
(5a) using the training data set, randomly select a training batch, compute the gradient of the classification model's loss function with the backpropagation technique, and update the weights in the opposite direction;
(5b) repeat (5a) for multiple updates until the classification accuracy is stable, ending the training and obtaining the classification model parameters of the optimal solution, where the update is expressed as:
$$\theta_{t+1} = \theta_t - \eta_t \cdot \frac{1}{|L|}\sum_{x_i\in L}\nabla_{\theta}\,\bar{\mathcal{L}}(\theta_t, x_i),$$
where $\bar{\mathcal{L}}$ denotes the overall loss function after perturbation, $\eta_t$ denotes the learning rate at step $t$, $\theta_t$ denotes the model parameters at step $t$, and $L$ denotes the training batch.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613231A (en) * 2020-12-17 2021-04-06 大连理工大学 Track training data perturbation mechanism with balanced privacy in machine learning
CN112613231B (en) * 2020-12-17 2022-09-20 大连理工大学 Track training data perturbation mechanism with balanced privacy in machine learning
CN112949230A (en) * 2021-02-26 2021-06-11 山东英信计算机技术有限公司 Nonlinear circuit macro model extraction method, system and medium
CN116805082A (en) * 2023-08-23 2023-09-26 南京大学 Splitting learning method for protecting private data of client
CN116805082B (en) * 2023-08-23 2023-11-03 南京大学 Splitting learning method for protecting private data of client


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-05-12