CN111242290B - Lightweight privacy-preserving generative adversarial network system - Google Patents

Lightweight privacy-preserving generative adversarial network system

Info

Publication number
CN111242290B
CN111242290B
Authority
CN
China
Prior art keywords
protocol
secure
layer
model
data
Prior art date
Legal status
Active
Application number
CN202010062453.2A
Other languages
Chinese (zh)
Other versions
CN111242290A (en)
Inventor
杨旸 (Yang Yang)
穆轲 (Mu Ke)
郭文忠 (Guo Wenzhong)
刘西蒙 (Liu Ximeng)
程红举 (Cheng Hongju)
刘耿耿 (Liu Genggeng)
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202010062453.2A priority Critical patent/CN111242290B/en
Publication of CN111242290A publication Critical patent/CN111242290A/en
Application granted granted Critical
Publication of CN111242290B publication Critical patent/CN111242290B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218: Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to a lightweight privacy-preserving generative adversarial network (LP-GAN) system. The entities comprise a data provider DS, a service provider SP, a first edge server S1, and a second edge server S2; the software comprises an LP-GAN secure computing framework, which consists of a secure generative model SG and a secure discriminative model SD. The invention ensures the dual privacy of the user's data and model.

Description

Lightweight privacy-preserving generative adversarial network system
Technical Field
The invention relates to the field of machine learning, and in particular to a lightweight privacy-preserving generative adversarial network system.
Background
Deep learning has achieved important breakthroughs in a variety of computer vision tasks, such as image recognition, image translation, and medical image diagnosis. Deep learning algorithms typically require a large number of training samples to improve model accuracy. However, owing to privacy concerns and legal restrictions, many data-sensitive areas (e.g., medicine, the military) lack sufficient training data, and this scarcity makes building a model more difficult.
The generative adversarial network (GAN) can learn the distribution of real data (e.g., images, text, video) and produce synthetic samples that are difficult to distinguish from real ones; it surpasses many traditional frameworks in generating synthetic images in particular. GAN has therefore found wide application in computer vision, and it offers a way to alleviate the scarcity of training data. In 2014, Goodfellow et al. proposed the concept of the GAN, which trains a generative model through an adversarial process. Radford et al. then combined it with a convolutional neural network to propose the deep convolutional GAN (DCGAN), which can produce high-quality fake image samples with statistical properties similar to real images. Yang et al. proposed the layered recursive GAN (LR-GAN), which uses different generators for the foreground and the background to obtain a clearer image. Denton et al. proposed the Laplacian pyramid GAN (LAPGAN), which generates images iteratively from coarse to fine, further improving the quality of GAN-synthesized images.
Despite its significant advantages in generating synthetic images, a GAN still consumes substantial computational and memory resources during the model training phase. To address this, many users outsource their data and the intensive computation to a cloud platform. However, the traditional centralized cloud architecture can introduce unpredictable delays when processing and transmitting large amounts of data, which makes this approach unsuitable for time-sensitive applications. Edge computing is therefore employed to optimize the performance of cloud computing: it performs computation at the edge of the network, near the user's data source, reducing latency and bandwidth consumption between the user and the data center. Security and privacy nevertheless remain major problems when edge computing handles GAN tasks: outsourced data is often unencrypted, and third-party edge servers are not fully trusted, so much sensitive image data (e.g., facial images, medical images) is at risk of eavesdropping and misuse. Furthermore, the distribution of the samples a GAN generates is similar to that of its training samples, which may lead to implicit privacy leakage; Arjovsky et al. showed that training samples can be recovered by repeatedly sampling from the model. Both the training data and the model of a GAN therefore need privacy protection.
Homomorphic encryption (HE) is a privacy-preserving computing technique that supports flexible computation over encrypted data. Xie et al. proposed CryptoNets, which performs neural network prediction on homomorphically encrypted data. Zhang et al. proposed a privacy-preserving deep computation model based on homomorphic encryption that offloads complex computation tasks to a cloud platform. Hesamifard et al. proposed CryptoDL, which performs training and prediction in convolutional neural networks using leveled homomorphic encryption. However, existing frameworks based on homomorphic encryption have high time complexity and large storage consumption, and are unsuitable for practical applications.
Another efficient privacy-protection technique is differential privacy (DP), which protects data by adding random noise. In 2017, Abadi et al. proposed a gradient-clipping scheme based on differential privacy that protects data privacy during machine learning training. Shokri et al. proposed a distributed training scheme for deep neural networks based on differential privacy. Xie et al. proposed a differentially private WGAN model (DPGAN) that achieves differential privacy by adding noise to the gradients during training. Although differential privacy performs better than homomorphic encryption, its algorithms must add random noise drawn from a Laplacian or Gaussian distribution, so schemes based on differential privacy introduce larger errors when the privacy requirement is high; methods based on differential privacy must therefore trade accuracy against privacy.
Disclosure of Invention
In view of this, the present invention provides a lightweight privacy-preserving generative adversarial network system that ensures the dual privacy of the user's data and model.
The invention adopts the following scheme: a lightweight privacy-preserving generative adversarial network system whose entities comprise a data provider DS, a service provider SP, a first edge server S1, and a second edge server S2, and whose software comprises the LP-GAN secure computing framework; the LP-GAN secure computing framework comprises a secure generative model SG and a secure discriminative model SD;
the data provider DS owns the training image data and provides the initial parameters for training the DCGAN neural network; to protect the privacy of the image data I and the model parameters P, the DS locally splits them at random into two sets of secret-shared data, I′, I″ and P′, P″, and sends them to the first edge server S1 and the second edge server S2, respectively; the DS is also responsible for generating the random values required by the interactive security protocols;
said first edge server S1 and second edge server S2 are responsible for executing the interactive secure computing protocols; S1 and S2 take the secret shares I′, P′ and I″, P″ obtained from the data provider as input and run the privacy-preserving training and generation process; S1 and S2 then return the trained privacy-preserving parameters P′, P″ and the generated images O′, O″, respectively, to the service provider SP;
the service provider SP receives the secret shares produced by S1 and S2 and recovers the plaintext DCGAN training parameters P and the plaintext generated image O.
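To make the data flow concrete, the following is a minimal sketch (an illustration, not the patent's implementation; the modulus RING and the per-pixel layout are assumptions) of the additive splitting the DS performs and the addition-only recovery the SP performs:

```python
import secrets

RING = 2**64  # assumed modulus; the patent leaves the ring implicit

def split(value: int) -> tuple[int, int]:
    """Split value into two random shares: value = (s1 + s2) mod RING."""
    s1 = secrets.randbelow(RING)
    s2 = (value - s1) % RING
    return s1, s2

def recover(s1: int, s2: int) -> int:
    """SP-side recovery: the plaintext is just the sum of the shares."""
    return (s1 + s2) % RING

# DS splits each pixel of image I; S1 receives I', S2 receives I''.
I = [103, 7, 255]
shares = [split(x) for x in I]
I1 = [s for s, _ in shares]   # sent to edge server S1
I2 = [s for _, s in shares]   # sent to edge server S2
assert [recover(a, b) for a, b in zip(I1, I2)] == I
```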
Further, the secure generative model SG comprises deconvolution layers, batch normalization layers, ReLU layers, and a fully connected layer; the secure discriminative model SD comprises convolution layers, batch normalization layers, LeakyReLU layers, and a fully connected layer. Each layer has a secure computing protocol, as follows:
In the convolutional layer, let x_ij be the element in row i and column j of the input matrix X of the convolution computation, and let ω and b be the weights and bias of a convolution kernel of size n × n. To protect the privacy of the training data, X is first split at random into two secret shares X′ and X″ with X = X′ + X″, so that each x_ij is split into two random values x′_ij and x″_ij; the weights and bias are likewise split into two sets of random secret shares (ω′_lm, ω″_lm) and (b′, b″), where ω_lm = ω′_lm + ω″_lm and b = b′ + b″. The secure convolution protocol SecConv is computed as follows: S1 and S2 jointly compute (a′_lm, a″_lm) ← SecMul(ω_lm, x_{i+l,j+m}); then S1 computes c′_ij and S2 computes c″_ij:

$$c'_{ij} = \sum_{l=0}^{n-1}\sum_{m=0}^{n-1} a'_{lm} + b', \qquad c''_{ij} = \sum_{l=0}^{n-1}\sum_{m=0}^{n-1} a''_{lm} + b''$$
In the deconvolution layer, the secure deconvolution protocol SecDeconv is computed in the same way as SecConv; the difference is that the input of the deconvolution computation is padded with zeros so that the output matrix has the required size;
In the batch normalization layer, the secure batch normalization protocol SecBN is used. The input is x_i, with x_i = x′_i + x″_i and batch size m, and the protocol flow is as follows. S1 and S2 compute their shares of the batch mean locally:

$$\mu'_B = \frac{1}{m}\sum_{i=1}^{m} x'_i, \qquad \mu''_B = \frac{1}{m}\sum_{i=1}^{m} x''_i$$

S1 and S2 jointly compute the batch variance: (a′_i, a″_i) ← SSq(x_i − μ_B),

$$\sigma_B^{2\prime} = \frac{1}{m}\sum_{i=1}^{m} a'_i, \qquad \sigma_B^{2\prime\prime} = \frac{1}{m}\sum_{i=1}^{m} a''_i$$

S1 and S2 then jointly compute the normalized value

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}$$

First they call the SISqrt protocol to compute 1/√(σ²_B + ε), where ε is a constant that ensures numerical stability; then, writing t_i = x_i − μ_B for the deviation, they call the secure multiplication protocol SecMul to compute x̂_i = SecMul(t_i, 1/√(σ²_B + ε)). The scaling and shifting parameters γ and β are assumed to be publicly known global parameters; they are fixed during forward propagation and updated during backward propagation. S1 and S2 compute their shares of the normalized output

$$y_i = \gamma \hat{x}_i + \beta$$

locally;
In the ReLU layer, the SR protocol is used. Its flow is: S1 and S2 compute SR(x′_ij) and SR(x″_ij) respectively, where

$$SR(x_{ij}) = \begin{cases} x_{ij}, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ 0, & \text{otherwise} \end{cases}$$

is evaluated share-wise with the secure comparison protocol SecCmp;
In the LeakyReLU layer, the SLR protocol is used. Its flow is: S1 and S2 compute SLR(x′_ij) and SLR(x″_ij) respectively, where

$$SLR(x_{ij}) = \begin{cases} x_{ij}, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ \alpha x_{ij}, & \text{otherwise} \end{cases}$$

and α ∈ (0, 1) is a non-zero constant;
In the fully connected layer, the secure fully connected protocol SecFC is used. Let the input of the fully connected layer be the output x_k of the k-th neuron of the previous layer, let ω_ik be the connection weight between the i-th neuron of the current layer and the k-th neuron of the previous layer, and let b_i be the bias of the i-th neuron of the current layer. The SecFC protocol flow is: S1 and S2 jointly compute (a′_ik, a″_ik) ← SecMul(ω_ik, x_k), then f′_i = Σ_k a′_ik + b′_i and f″_i = Σ_k a″_ik + b″_i.
Throughout, the superscripts ′ and ″ denote the two shares obtained by splitting the corresponding unprimed value.
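As an illustration of how these layer protocols compose, here is a toy sketch of the SecConv data flow for one output neuron; sec_mul is a centrally simulated stand-in for the real Beaver-triple SecMul, and all values are plain Python integers without ring reduction, for readability only:

```python
import random

def sec_mul(u_shares, v_shares):
    """Toy stand-in for SecMul: returns fresh additive shares of u*v
    (simulated centrally here; the real protocol never reconstructs u or v)."""
    u, v = sum(u_shares), sum(v_shares)
    r = random.randint(-10**6, 10**6)
    return r, u * v - r

def sec_conv(x1, x2, w1, w2, b1, b2):
    """Shares of one output neuron c_ij = sum_{l,m} w_lm * x_{i+l,j+m} + b."""
    c1, c2 = b1, b2
    for l in range(len(w1)):
        for m in range(len(w1[0])):
            a1, a2 = sec_mul((w1[l][m], w2[l][m]), (x1[l][m], x2[l][m]))
            c1, c2 = c1 + a1, c2 + a2
    return c1, c2

# quick check against the plaintext convolution for a 2x2 window
X = [[1, 2], [3, 4]]
W = [[5, -1], [0, 2]]
b = 7
X1 = [[random.randint(-99, 99) for _ in row] for row in X]
X2 = [[x - s for x, s in zip(xr, sr)] for xr, sr in zip(X, X1)]
W1 = [[random.randint(-99, 99) for _ in row] for row in W]
W2 = [[w - s for w, s in zip(wr, sr)] for wr, sr in zip(W, W1)]
c1, c2 = sec_conv(X1, X2, W1, W2, 3, b - 3)
plain = sum(W[l][m] * X[l][m] for l in range(2) for m in range(2)) + b
assert c1 + c2 == plain
```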
Further, running the privacy-preserving training and generation process specifically comprises the following steps:
Step S1: the data provider DS generates the model parameters θ_d and θ_g of SD and SG, preprocesses and splits each into two sets of secret-shared data, and sends them to the first edge server S1 and the second edge server S2, respectively;
Step S2: train the discriminative model SD. The DS takes m random noise samples z and m real image samples x, preprocesses and splits each into two sets of secret-shared data, and sends them to S1 and S2. For the input noise samples, S1 and S2 execute the secure generative-model forward propagation protocol SG-FP to produce the generated images x̃ = G(z). For each input, S1 and S2 execute SD-FP and SG-FP and compute the loss function of SD, which in the standard GAN form used here is

$$L_D = -\frac{1}{m}\sum_{i=1}^{m}\Big[\log D\big(x^{(i)}\big) + \log\Big(1 - D\big(G(z^{(i)})\big)\Big)\Big]$$
S1 and S2 then execute the secure back propagation protocol SD-BP to update the parameters of SD;
Step S3: train the generative model SG. The DS takes m random noise samples z, preprocesses and splits them into two sets of secret-shared data, and sends them to S1 and S2. For the input noise samples, S1 and S2 execute the generative-model forward propagation protocol SG-FP to produce the generated images x̃ = G(z). For each generated image, S1 and S2 execute the secure forward propagation protocol SD-FP (a forward pass of the discriminator is needed here) to compute the loss function of SG, which in the standard form is

$$L_G = \frac{1}{m}\sum_{i=1}^{m}\log\Big(1 - D\big(G(z^{(i)})\big)\Big)$$
S1 and S2 then execute the secure back propagation protocol SG-BP to update the parameters of SG;
Step S4: the first edge server S1 and the second edge server S2 return their secret shares of the trained model parameters to the service provider SP.
Further, SD-FP is the forward propagation protocol of the discriminative model. Specifically: let the input of the discriminative model be the secret shares I′, I″ of an image, where the first edge server S1 inputs I′ and the second edge server S2 inputs I″. In layer i of the model, the result of the secure convolution protocol SecConv is c′(i), c″(i); the result of the secure batch normalization protocol SecBN is y′(i), y″(i); and the result of the secure LeakyReLU function is slr′(i), slr″(i). Finally, the protocol calls the secure fully connected protocol SecFC and outputs the discrimination results f′, f″ for the image.
Further, SG-FP is the forward propagation protocol of the generative model. Specifically: let the input of the generative model be the secret shares Z′, Z″ of a noise image, where the first edge server S1 inputs Z′ and the second edge server S2 inputs Z″. In layer i of the model, the result of the secure deconvolution protocol SecDeconv is c′(i), c″(i); the result of the secure batch normalization protocol SecBN is y′(i), y″(i); and the result of the secure ReLU protocol SR is sr′(i), sr″(i). The protocol iterates these computations and outputs the generated image f′, f″.
Further, SD-BP is the back propagation protocol of the discriminative model, and specifically comprises:
Step S11: secure back propagation of the loss function. With the parameters of the generative model SG fixed, the gradient of the loss function of the discriminative model SD is computed with the stochastic gradient descent algorithm SGD. m samples are taken from the real data and from the random noise data; their secret shares x′(i), z′(i) and x″(i), z″(i) (i = 1, ..., m) are sent to the first edge server S1 and the second edge server S2, respectively. The partial derivative of the loss function is denoted ∂L_D/∂D; because differentiating the logarithmic loss yields the reciprocal terms 1/D(x(i)) and 1/(1 − D(G(z(i)))), SD computes this partial derivative securely with the secure reciprocal protocol SecInv (the share-wise computation appears as an equation image in the original).
Step S12: secure back propagation of the activation layer. In the discriminative model SD, let δ_SLR denote the partial derivative of the secure activation function SLR; it is computed with the secure comparison protocol SecCmp, where α is the non-zero LeakyReLU parameter:

$$\delta_{SLR}(x_{ij}) = \begin{cases} 1, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ \alpha, & \text{otherwise} \end{cases}$$
Step S13: secure back propagation of the batch normalization layer. Let ∂L/∂γ, ∂L/∂β, and ∂L/∂x̂_i denote the partial derivatives of the loss with respect to γ, β, and the normalized value x̂_i, where x̂_i, μ_B, and σ²_B are intermediate results of the secure batch normalization protocol SecBN, and let δ_i denote the gradient arriving at the output y_i. The secure gradients of the parameters γ and β then take the standard batch-normalization form

$$\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{m} \delta_i\, \hat{x}_i, \qquad \frac{\partial L}{\partial \beta} = \sum_{i=1}^{m} \delta_i$$

computed share-wise with SecMul. Since γ and β are public parameters, after the gradients are computed S1 sends its gradient shares to S2 and, at the same time, S2 sends its gradient shares to S1, so that the public parameters can be restored. With learning rate η_B, γ and β are updated as

$$\gamma \leftarrow \gamma - \eta_B\,\frac{\partial L}{\partial \gamma}, \qquad \beta \leftarrow \beta - \eta_B\,\frac{\partial L}{\partial \beta}$$
The main body of the protocol then computes the partial derivative ∂L/∂x_i of the loss with respect to the normalized input x_i. With t_i = x_i − μ_B an intermediate result of the secure forward computation, the secure back propagation protocol uses intermediate variables I_i = I′_i + I″_i (i = 1, ..., m) to simplify the backward computation, which proceeds share-wise as

(b′_i, b″_i) ← SSq(φ′_i, φ″_i); (c′_i, c″_i) ← SecMul(b_i, δ_i)

where φ_i denotes an intermediate share from the forward pass (the surrounding steps appear as equation images in the original). The input gradient ∂L/∂x_i is then obtained with SecMul, SSq, and SISqrt from the standard batch-normalization backward formula

$$\frac{\partial L}{\partial x_i} = \frac{\gamma}{\sqrt{\sigma_B^2+\varepsilon}}\left(\delta_i - \frac{1}{m}\sum_{j=1}^{m}\delta_j - \frac{\hat{x}_i}{m}\sum_{j=1}^{m}\delta_j\,\hat{x}_j\right)$$
Step S14: secure back propagation of the convolutional layer. Let the learning rate of the convolutional layer be η_C, and let δ_ij denote the partial derivative of the loss at the (i, j)-th neuron of the network model. S1 and S2 cooperatively execute the following protocol to compute the gradients of the weights ω and bias b, in the standard form

$$\frac{\partial L}{\partial \omega_{lm}} = \sum_{i}\sum_{j} \delta_{ij}\, x_{i+l,j+m}, \qquad \frac{\partial L}{\partial b} = \sum_{i}\sum_{j} \delta_{ij}$$

(the products are computed share-wise with SecMul), and to update the original parameters:

$$\omega_{lm} \leftarrow \omega_{lm} - \eta_C \frac{\partial L}{\partial \omega_{lm}}, \qquad b \leftarrow b - \eta_C \frac{\partial L}{\partial b}$$
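Because the learning rates are public, every parameter update in these back propagation steps is a purely local operation on each server's share. A small sketch with illustrative values:

```python
def local_sgd_update(w_share: float, grad_share: float, eta: float) -> float:
    """Each server updates its own share; no interaction is needed
    because the learning rate eta is public."""
    return w_share - eta * grad_share

w1, w2 = 0.30, 0.12          # shares of w = 0.42
g1, g2 = 0.05, 0.15          # shares of grad = 0.20
eta = 0.1
new_w = local_sgd_update(w1, g1, eta) + local_sgd_update(w2, g2, eta)
assert abs(new_w - (0.42 - eta * 0.20)) < 1e-9
```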
Further, SG-BP is the back propagation protocol of the generative model. Since most of the protocol is essentially the same as SD-BP, only the two secure back propagation computations that differ in the generative model are given here, specifically:
Step S21: secure back propagation of the loss function. In SG-BP, differentiating the generator loss likewise reduces to reciprocal terms, and the back propagation of the secure loss function is computed with SecInv on the shares of D(G(z(i))) (the share-wise computation appears as an equation image in the original).
Step S22: secure back propagation of the activation layer. Let δ_SR denote the partial derivative of the secure ReLU protocol SR; it is computed with SecCmp as

$$\delta_{SR}(x_{ij}) = \begin{cases} 1, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ 0, & \text{otherwise} \end{cases}$$
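Both activation gradients reduce to the comparison bit that SecCmp supplies; in plaintext the rule looks like this (the α value is chosen for illustration):

```python
def d_sr(x: float) -> float:
    """ReLU derivative: the 0/1 bit SecCmp(x, 0) yields on shares."""
    return 1.0 if x > 0 else 0.0

def d_slr(x: float, alpha: float = 0.2) -> float:
    """LeakyReLU derivative: same bit, with 0 replaced by alpha."""
    return 1.0 if x > 0 else alpha

assert d_sr(3.0) == 1.0 and d_sr(-1.0) == 0.0
assert d_slr(-1.0) == 0.2
```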
compared with the prior art, the invention has the following beneficial effects:
1. The invention adopts edge computing and designs a series of protocols based on secret sharing to protect the privacy of the data and of the key model parameters during DCGAN model training and image generation.
2. Aiming at the low computational efficiency and high communication overhead of privacy-preserving machine learning, the invention provides basic privacy-preserving computation protocols with higher efficiency and lower communication overhead. At the same time, because the design is based on secret sharing, the error introduced by differential privacy schemes is avoided entirely; the system provides high-precision computation and alleviates the error-propagation problem in deep network models.
3. For the complex DCGAN structure, the invention provides a secure generative model (SG) and a secure discriminative model (SD), constructs secure forward propagation protocols for securely judging the authenticity of images and generating images, and designs secure back propagation protocols for securely training and updating the model parameters; the dual privacy of the user's data and of the model is guaranteed throughout.
Drawings
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention.
Fig. 2 shows a DCGAN structure according to an embodiment of the present invention.
FIG. 3 is a software framework diagram of an embodiment of the present invention.
Fig. 4 is a security discriminant model and a security generation model architecture according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of a security comparison protocol calculation logic according to an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in fig. 1 and fig. 3, the present embodiment provides a lightweight privacy-preserving generative adversarial network system. The entities comprise a data provider DS, a service provider SP, a first edge server S1, and a second edge server S2; the software comprises a DCGAN neural network, which comprises a generative model SG and a discriminative model SD;
the data provider DS owns the training image data (which may contain sensitive information) and provides the initial parameters for training the DCGAN neural network; to protect the privacy of the image data I and the model parameters P, the DS locally splits them at random into two sets of secret-shared data, I′, I″ and P′, P″, and sends them to the first edge server S1 and the second edge server S2, respectively; the DS is also responsible for generating the random values required by the interactive security protocols, which can be generated offline and then consumed in the online computing stage;
the first edge server S1 and the second edge server S2 are responsible for executing the interactive secure computing protocols; S1 and S2 obtain the secret shares I′, P′ and I″, P″ from the data provider as input and run the privacy-preserving training and generation process; S1 and S2 then return the trained privacy-preserving parameters P′, P″ and the generated images O′, O″, respectively, to the service provider SP;
the service provider SP receives the secret shares produced by S1 and S2 and recovers the plaintext DCGAN training parameters P and the plaintext generated image O.
The scheme provided by the invention withstands a malicious adversary A that can attack one server S1 (it cannot successfully attack both servers at the same time): A still cannot recover the original plaintext data, because the data held by S1 are meaningless random secret shares. After the training process is complete, the service provider (SP) obtains from S1 and S2 the secret shares of the trained model parameters and of the synthesized images, and restores the original data by addition. As shown in FIG. 3, the software system of the invention consists of two models: a secure discriminative model (SD) and a secure generative model (SG). Given a real image or a generated image, SD predicts whether the input data is real or generated; given random noise, SG generates an image. To generate near-real images, the LP-GAN constructs SD and SG using the structure of DCGAN, as shown in fig. 4. To train SD and SG, this embodiment proposes secure forward propagation protocols (SD-FP, protocol 8; SG-FP, protocol 9) and secure back propagation protocols (SD-BP, SG-BP) that protect data and model privacy in the forward and backward computations.
In the present embodiment, as shown in figs. 2 and 4, the generative model SG comprises deconvolution layers, batch normalization layers, ReLU layers, and a fully connected layer; the discriminative model SD comprises convolution layers, batch normalization layers, LeakyReLU layers, and a fully connected layer;
Convolution and deconvolution layers: in convolutional neural networks, convolution operations are typically used to extract features from the input data. Suppose the input is a matrix X = {x_ij} and the size of the convolution kernel is n × n. With shared weights ω_lm (l, m = 0, 1, ..., n−1) and a bias value b, the convolution output of the (i, j)-th neuron in the model is

$$c_{ij} = \sum_{l=0}^{n-1}\sum_{m=0}^{n-1} \omega_{lm}\, x_{i+l,j+m} + b$$

The computation of deconvolution is similar to convolution, except that the input matrix X is zero-padded so that its size satisfies the requirements of the output matrix.

Batch normalization layer: the batch normalization operation normalizes the input data by adjustment and scaling, stabilizing the distribution of the neuron activations toward a normal distribution. Assume the input data is {x_i}, i ∈ [1, m], with batch size m; batch normalization is computed as follows. First, compute the mean and variance of the inputs,

$$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_B)^2$$

Then normalize the input data as

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}$$

where ε is a small quantity that prevents the denominator from being 0. Finally, compute

$$y_i = \gamma \hat{x}_i + \beta$$

where γ and β are the scaling and shifting parameters.

Activation layer: the activation function introduces a non-linear element into the neural network. In the generative model G, neurons use the ReLU function f(x) = max(0, x) as the activation function; in the discriminative model D, LeakyReLU, f(x) = max(x, αx) with α ∈ (0, 1), is used.

Fully connected layer: the neurons in the fully connected layer connect to all neurons of the previous layer to merge the features extracted by the upper-layer neurons. Let x_k be the input from the k-th neuron of the upper layer, ω_ik the weight connecting the k-th neuron of the upper layer with the i-th neuron of the current layer, and b_i the bias of the i-th neuron of the current layer. The output of the i-th neuron of the fully connected layer is

$$f_i = \sum_{k} \omega_{ik}\, x_k + b_i$$
Preferably, this embodiment also provides the secret-sharing-based protocols and the secure basic computation protocols, as follows.

The secret-sharing-based protocols use a secret sharing scheme to perform the secure basic computations. The system model of this embodiment is based on two-party computation (2PC): each protocol is run jointly by the two servers S1 and S2, and several groups of random numbers must be generated in advance for the protocol computations. The symbols A′ and A″ denote the shares allocated to the two servers S1 and S2, where A = A′ + A″.
Secure addition protocol: given inputs u, v ∈ ℤ, the secure addition protocol SecAdd outputs f1 at S1 and f2 at S2, where f1 + f2 = u + v. In this protocol the two servers compute only locally, without interaction.
Secure multiplication protocol: the secure multiplication protocol SecMul is based on the Beaver triple scheme. Given inputs u, v ∈ ℤ, SecMul outputs f1 at S1 and f2 at S2 with f1 + f2 = u · v. In this process, three correlated random numbers (a, b, c) with c = a · b are generated to ensure that no information leaks during the computation.
Secure natural logarithm protocol: the secure natural logarithm protocol SecLog is based on the Maclaurin series and iteratively approximates the natural logarithm f(u) = ln(u). Given an input u ∈ ℤ, SecLog outputs f1 at S1 and f2 at S2; when the iteration reaches the predefined precision, the result satisfies f1 + f2 ≈ ln(u).
Secure reciprocal protocol: the secure reciprocal protocol SecInv is based on the Newton-Raphson iteration. Given an input u ∈ ℤ, SecInv outputs an approximation of f(u) = 1/u; when the predefined precision is reached, SecInv terminates the iteration and outputs f1 at S1 and f2 at S2, where f1 + f2 ≈ 1/u.
The secret-sharing-based protocols apply only to integers, but the real-number computations involved in machine learning cannot be processed directly with a secret sharing protocol. In the computations of the invention, real numbers are therefore first converted into integer form before the security protocols are executed. A real number x̃ ∈ ℝ can be written as a fixed-point number with an integer part and a fractional part; only the integer part x (after scaling by 2^ε) participates in the protocol computation, with ε fixed to a suitable value to meet the precision requirement. Higher precision can be achieved by choosing a larger ε.
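A sketch of this fixed-point conversion; the number of fractional bits E is an illustrative choice, and the right-shift after a multiplication is the usual rescaling step:

```python
E = 20                      # fractional bits (chosen for illustration)

def encode(x: float) -> int:
    """Scale a real number by 2^E and truncate to an integer."""
    return int(round(x * (1 << E)))

def decode(x: int) -> float:
    return x / (1 << E)

a = encode(3.141592)
assert abs(decode(a) - 3.141592) < 2**-E
# after an integer multiply, one extra rescale restores the scale 2^E
prod = (encode(1.5) * encode(2.25)) >> E
assert abs(decode(prod) - 3.375) < 2**-(E - 2)
```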
Secure XOR protocol (SecXor): computes the XOR of bit data. On input u, v with shares u = u1 + u2 and v = v1 + v2, where S1 inputs u1, v1 and S2 inputs u2, v2, executing the SecXor protocol makes S1 output f1 and S2 output f2 such that f1 + f2 ≡ u ⊕ v (mod 2). In the protocol, S_i (i ∈ {1, 2}) locally computes f_i = (u_i + v_i) mod 2; correctness follows from

$$f_1 + f_2 \equiv u_1 + v_1 + u_2 + v_2 \equiv u + v \equiv u \oplus v \pmod{2}$$
Secure OR protocol (SecOr): computes the OR of bit data. On input u, v (u = u1 + u2, v = v1 + v2), executing the SecOr protocol (protocol 1) makes S1 output f1 and S2 output f2 with f1 + f2 = u ∨ v, using the identity u ∨ v = u + v − u · v, where S1 and S2 jointly compute −u · v with the secure multiplication protocol SecMul. (Protocol 1 listing appears as an image in the original.)
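A toy sketch of both bit protocols over XOR-shared bits; over GF(2) the identity u ∨ v = u + v − u·v becomes u ⊕ v ⊕ (u·v), and sec_mul_bit stands in for the real SecMul:

```python
import random

def share_bit(b):
    r = random.randint(0, 1)
    return r, b ^ r

def sec_xor(u_sh, v_sh):
    # purely local at each server: no interaction required
    return u_sh[0] ^ v_sh[0], u_sh[1] ^ v_sh[1]

def sec_mul_bit(u_sh, v_sh):
    """Toy stand-in for SecMul on shared bits (simulated centrally)."""
    p = (u_sh[0] ^ u_sh[1]) & (v_sh[0] ^ v_sh[1])
    r = random.randint(0, 1)
    return r, p ^ r

def sec_or(u_sh, v_sh):
    s = sec_xor(u_sh, v_sh)            # shares of u xor v
    p = sec_mul_bit(u_sh, v_sh)        # shares of u and v
    return s[0] ^ p[0], s[1] ^ p[1]    # u|v = (u xor v) xor (u and v)

for u in (0, 1):
    for v in (0, 1):
        f1, f2 = sec_or(share_bit(u), share_bit(v))
        assert f1 ^ f2 == (u | v)
```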
Secure most significant bit protocol (SecMSB): a sub-protocol of the secure comparison protocol. On an input bit sequence [u_{l−1}, ..., u_0], the SecMSB protocol (protocol 2) outputs a sequence [f_{l−1}, ..., f_0] that locates the position of the highest '1' bit of the input: if u_j is the highest bit equal to 1, then f_j = 1 in the output and all remaining bits are 0. In addition, SecMSB outputs an extra bit ζ: if all bits of [u_{l−1}, ..., u_0] are 0 then ζ = 0, otherwise ζ = 1. (Protocol 2 listing appears as an image in the original.)
The algorithm flow of SecMSB is as follows. First, the data provider DS generates two random numbers t′_l and t″_l (satisfying t′_l + t″_l = 1) and sends them to the edge servers S1 and S2. Then S1 and S2 execute the secure multiplication protocol SecMul to compute the sequences [t′_l, ..., t′_0] and [t″_l, ..., t″_0] (t_i = t′_i + t″_i) via the recurrence t_i = t_{i+1}·(1 − u_i); this sequence locates the position j of the highest 1-bit of the input [u_{l−1}, ..., u_0], with [t_l, ..., t_{j+1}] = [1, ..., 1] and [t_j, ..., t_0] = [0, ..., 0]. Then S1 and S2 compute f′_i = t′_{i+1} − t′_i and f″_i = t″_{i+1} − t″_i respectively, so that in the sequence [f_{l−1}, ..., f_0] only f_j = 1 at the highest position j and all remaining bits are 0. To determine whether the input sequence is all zeros, S1 and S2 additionally compute the flag bit ζ = 1 − t_0 (share-wise, ζ′ = t′_l − t′_0 and ζ″ = t″_l − t″_0): when the inputs are all 0, ζ = 0; otherwise, ζ = 1.
For example, if the input is [u_{l−1}, ..., u_0] = [00011010], SecMSB computes j = 4 and [t_l, ..., t_0] = [11100000], and outputs [f_{l−1}, ..., f_0] = [00010000] and ζ = 1. If the input is [u_{l−1}, ..., u_0] = [00000000], SecMSB computes [t_l, ..., t_0] = [11111111] and outputs [f_{l−1}, ..., f_0] = [00000000] and ζ = 0.
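The plaintext logic that SecMSB evaluates on shares can be mocked up directly, reproducing the worked example above (t_i = t_{i+1}(1 − u_i) is the recurrence realized with SecMul):

```python
def sec_msb(bits):
    """Plaintext mock of SecMSB; bits = [u_{l-1}, ..., u_0]."""
    t_prev, f, zeta = 1, [], 0
    for u in bits:
        t = t_prev * (1 - u)           # t flips to 0 at the highest 1-bit
        f.append(t_prev - t)           # f_i = t_{i+1} - t_i marks that bit
        zeta |= u                      # zeta = 1 - t_0 in the protocol
        t_prev = t
    return f, zeta

f, zeta = sec_msb([0, 0, 0, 1, 1, 0, 1, 0])
assert f == [0, 0, 0, 1, 0, 0, 0, 0] and zeta == 1
f, zeta = sec_msb([0] * 8)
assert f == [0] * 8 and zeta == 0
```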
Secure comparison protocol (SecCmp): compares two inputs u, v ∈ ℤ. If u > v, the SecCmp protocol (protocol 3) outputs 1; if u ≤ v, it outputs 0. Since S1 and S2 input u1, v1 and u2, v2 respectively, for convenience the problem is converted into a comparison of a = u1 − v1 with b = v2 − u2: if a > b then u > v, otherwise u ≤ v.
Because SecCmp uses a bit-wise comparison method, the protocol first applies a bit decomposition method, DFC (data format conversion), which decomposes an integer u ∈ ℤ into a binary bit sequence [u_{l−1}, ..., u_0] of length l. The magnitude |u| ∈ [0, 2^{l−1} − 1] is decomposed into the bits [u_{l−2}, ..., u_0], and the highest bit u_{l−1} represents the sign: u_{l−1} = 1 if u < 0, otherwise u_{l−1} = 0. The unsigned integer value of the full bit pattern of u is denoted u*. (Protocol 3 listing appears as an image in the original.)
The SecCmp protocol flow is as follows:
(1) The protocol first computes the binary bit-sequence representations of a = u1 − v1 and b = v2 − u2; the protocol then compares the two unsigned integers a* and b*.
(2) To implement the comparison, S1 and S2 first call the SecXor protocol to compute the bits in which [a_{l−1}, ..., a_0] and [b_{l−1}, ..., b_0] differ, outputting c_i = a_i ⊕ b_i.
(3) To locate the highest 1-bit of [c_{l−1}, ..., c_0], S1 and S2 run the SecMSB protocol to jointly compute the sequences [d′_{l−1}, ..., d′_0] and [d″_{l−1}, ..., d″_0] (d_i = d′_i + d″_i). If a* ≠ b*, [d_{l−1}, ..., d_0] has exactly one bit d_j = 1, where j marks the first bit (from high to low) in which the two numbers differ; if a* = b*, all bits of [d_{l−1}, ..., d_0] are 0.
(4) S1 and S2 execute the secure multiplication protocol SecMul to compute e_i = a_i · d_i, and then compute ξ′ = Σ_i e′_i and ξ″ = Σ_i e″_i separately, which compares a* with b*: if a* > b* then ξ = 1, otherwise ξ = 0. The additional SecMSB flag ζ determines whether a* equals b*: if a* = b* then ζ = 0, otherwise ζ = 1.
(5) Then S1 and S2 execute the SecOr protocol to compute ι = a_{l−1} ∨ b_{l−1}, which indicates the signs of a and b: if a ≥ 0 and b ≥ 0 then ι = 0, otherwise ι = 1.
(6) When a ≠ b, S1 and S2 compute ν = ξ ⊕ ι, which gives the magnitude comparison result.
(7) When a = b < 0, ν is not as expected, and the edge servers further compute f = ν · ζ to obtain the correct comparison result.
The computational logic of the SecCmp protocol is shown in fig. 5. Assume the inputs of SecCmp are u1 = −1030, u2 = 583, v1 = −929, v2 = 551 (so u = u1 + u2 = −447 and v = v1 + v2 = −378). Then a = u1 − v1 = −101 and b = v2 − u2 = −32. According to the DFC protocol, the signed binary representations of a and b are [a_{l−1}, ..., a_0] = [11100101] and [b_{l−1}, ..., b_0] = [10100000]. Since a < 0, b < 0, and a* > b*, it follows that ι = 1, ξ = 1, and ν = ξ ⊕ ι = 0. The variable ζ distinguishes the case a ≠ b; here a ≠ b, so ζ = 1. The final result is f = ν · ζ = 0 · 1 = 0, which indicates a ≤ b, i.e., u ≤ v, the same as the plaintext comparison result.
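The whole SecCmp decision logic can be checked in plaintext; the sketch below mirrors steps (1)-(7) on the worked example (the real protocol runs the same logic share-wise, and l = 8 limits inputs to |x| < 128):

```python
def dfc(x, l=8):
    """Sign-magnitude bit decomposition: [sign, u_{l-2}, ..., u_0]."""
    bits = [(abs(x) >> i) & 1 for i in range(l - 2, -1, -1)]
    return [1 if x < 0 else 0] + bits

def msb_flags(bits):
    """f marks the highest 1-bit (plaintext mock of SecMSB)."""
    t_prev, f = 1, []
    for u in bits:
        t = t_prev * (1 - u)
        f.append(t_prev - t)
        t_prev = t
    return f

def sec_cmp(a, b, l=8):
    """Returns 1 if a > b, else 0, via the bit logic of SecCmp."""
    A, B = dfc(a, l), dfc(b, l)
    c = [x ^ y for x, y in zip(A, B)]       # SecXor: differing bits
    d = msb_flags(c)                        # highest differing position
    xi = sum(x * y for x, y in zip(A, d))   # 1 iff a* > b* at that bit
    zeta = 1 if any(c) else 0               # 0 iff a* == b*
    iota = A[0] | B[0]                      # SecOr of the sign bits
    nu = xi ^ iota                          # sign-corrected comparison
    return nu * zeta                        # fixes the a == b < 0 case

assert sec_cmp(-101, -32) == 0              # the worked example: u <= v
assert sec_cmp(5, 3) == 1
assert sec_cmp(-3, 2) == 0
assert sec_cmp(4, 4) == 0
```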
Secure square root protocol (SSqrt): computes f(u) = √u. The protocol is based on a fast square-root method, the Goldschmidt algorithm, which converges faster than the Newton-Raphson method and achieves higher accuracy in the same time. The flow of the Goldschmidt algorithm is as follows:
(1) Initialization stage: the algorithm initializes two iteration parameters g and h, where g is the running approximation of √u and h is the running approximation of 1/(2√u), with g_0 = u·y_0 and h_0 = y_0/2, y_0 being an initial approximation of 1/√u. y_0 can be computed with the linear formula y_0 = αu + β, where α = −0.8099868542 and β = 1.787727479 are constants.
(2) Iterative computation stage: the algorithm computes the square root according to the iteration

$$r_i = \tfrac{3}{2} - g_i h_i, \qquad g_{i+1} = g_i\, r_i, \qquad h_{i+1} = h_i\, r_i$$
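A plaintext sketch of this Goldschmidt iteration with the linear seed quoted above; it assumes the input has already been range-reduced to [1/2, 1) by the SRC step described next:

```python
ALPHA, BETA = -0.8099868542, 1.787727479   # seed constants from the text

def goldschmidt_sqrt(u, iters=5):
    """Returns (g, h) with g -> sqrt(u) and h -> 1/(2*sqrt(u))."""
    y0 = ALPHA * u + BETA                  # linear seed for 1/sqrt(u)
    g, h = u * y0, 0.5 * y0
    for _ in range(iters):
        r = 1.5 - g * h
        g, h = g * r, h * r                # quadratic convergence
    return g, h

g, h = goldschmidt_sqrt(0.7)
assert abs(g - 0.7 ** 0.5) < 1e-9
assert abs(2 * h - 1 / (0.7 ** 0.5)) < 1e-9
```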
the Secure Range Conversion protocol (SRC) is a sub-protocol of the Secure square root protocol for converting integers into the form of bases and exponents and bases into a specific Range. To compute the square root quickly, the input data needs to be converted into the interval [1/2, 1). Suppose 2p-1≤u<2pThe SRC protocol (protocol 4) converts u to the interval 1/2 ≦ u · 2-pLess than 1; input device
Figure BDA0002374925230000171
S1And S2The cooperative computing converts it into u1=m1·2p,u2=m2·2p. The SRC protocol flows are as follows:
(1)S1and S2Respectively locally calculating
Figure BDA0002374925230000172
And a is toiRandom division into two random secret shares ai' and aiAnd guarantee ai∈[1/4,1/2)(i∈{1,2})。
(2)S1And a ″)1And p1Is sent to S2,S2A'2And p2Is sent to S1
(3)S1And S2Respectively take p1,p2Medium and large values, and separately calculate
Figure BDA0002374925230000173
And
Figure BDA0002374925230000174
it can be deduced that u ═ m1·2p+m2·2p=m·2p
(Protocol 4 listing appears as an image in the original.)
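In plaintext, the range reduction amounts to extracting the binary exponent, as this sketch (positive inputs assumed) shows; in the protocol each server does this locally on its share, and the exponents are then reconciled:

```python
import math

def src(u: float):
    """Write u = m * 2^p with m in [1/2, 1); u must be positive."""
    p = math.floor(math.log2(u)) + 1   # smallest p with u < 2^p
    m = u / 2 ** p                     # now 1/2 <= m < 1
    return m, p

m, p = src(42.0)
assert 0.5 <= m < 1 and m * 2 ** p == 42.0
```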
Compared with existing schemes, the secure square root protocol (SSqrt) improves the convergence rate at the same level of security and obtains higher computational accuracy for the same time consumption. The protocol is divided into three stages: initialization, iteration, and result conversion. The flow of the SSqrt protocol (protocol 5) is as follows:
(1) Initialization stage: S1 and S2 interactively generate the initial parameters g_0 and h_0. First, S1 and S2 execute the SRC protocol to compute m_1, m_2, and p, guaranteeing u_1 = m_1 · 2^p and u_2 = m_2 · 2^p. Then S1 and S2 respectively compute y′_0 ← αm_1 + β, h′_0 ← (αm_1 + β)/2, y″_0 ← αm_2, and h″_0 ← αm_2/2, initializing the parameters y_0 and h_0; finally, S1 and S2 compute g_0 ← m·y_0 by calling the SecMul protocol.
(2) Iteration stage: given u = m · 2^p, in this stage S1 and S2 interactively iterate toward approximations of √m and 1/(2√m). While the iteration counter i < τ, S1 and S2 jointly compute 3/2 − g_i h_i; the edge servers then execute the SecMul protocol to compute the iteration results g_{i+1} = g_i(3/2 − g_i h_i) and h_{i+1} = h_i(3/2 − g_i h_i), where g_{i+1} and h_{i+1} can be computed in parallel. When the iteration finishes, the output g_i is an approximation of √m and h_i is an approximation of 1/(2√m).
(3) Result conversion stage: the goal of the protocol is to compute √u from √m, using the relation

$$\sqrt{u} = \sqrt{m \cdot 2^p} = \sqrt{m}\cdot 2^{p/2}$$

If p is even, set a = 1; if p is odd, set a = √2 and use the exponent ⌊p/2⌋, finally computing

$$\sqrt{u} = a \cdot \sqrt{m} \cdot 2^{\lfloor p/2 \rfloor}$$
(Protocol 5 listing appears as an image in the original.)
Secure reciprocal square root protocol: since the batch normalization operation in the GAN framework requires computing 1/√(σ²_B + ε), the invention designs a secure inverse square root protocol (SISqrt). In the SSqrt protocol (protocol 5), h_i iteratively approximates 1/(2√m); the SISqrt protocol is therefore built on the SSqrt construction, and only lines 14-15 of protocol 5 need to be replaced so that the output is derived from h, by the relation

$$\frac{1}{\sqrt{u}} = 2h \cdot a^{-1} \cdot 2^{-\lfloor p/2 \rfloor}$$

(the exact replacement formulas appear as equation images in the original).
secure Square Protocol (SSq): input device
Figure BDA0002374925230000192
SSq (protocol 6) calculates f (u) u2. SSq the calculation flow is as follows:
(1) input device
Figure BDA0002374925230000193
S1And S2Calling SecMul protocol to calculate 2u1v1
(2)SiComputing
Figure BDA0002374925230000194
Figure BDA0002374925230000195
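A sketch of the SSq share arithmetic: the squares are local, and only the cross term costs one SecMul (mocked here):

```python
import random

def sec_mul(u_sh, v_sh):
    """Toy stand-in for SecMul: fresh additive shares of u*v."""
    prod = sum(u_sh) * sum(v_sh)
    r = random.randint(-10**6, 10**6)
    return r, prod - r

def ssq(u1, u2):
    c1, c2 = sec_mul((u1, 0), (0, 2 * u2))   # shares of 2*u1*u2
    return u1 * u1 + c1, u2 * u2 + c2        # local squares + cross term

f1, f2 = ssq(29, 13)                         # shares of u = 42
assert f1 + f2 == 42 * 42
```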
Each layer has a secure computing protocol, as follows:
In the convolutional layer, let x_ij be the element in row i and column j of the input matrix X of the convolution computation, and let ω and b be the weights and bias of a convolution kernel of size n × n. To protect the privacy of the training data, X is first split at random into two secret shares X′ and X″ with X = X′ + X″, so that each x_ij is split into two random values x′_ij and x″_ij; the weights and bias are likewise split into two sets of random secret shares (ω′_lm, ω″_lm) and (b′, b″), where ω_lm = ω′_lm + ω″_lm and b = b′ + b″. The secure convolution protocol SecConv is computed as follows: S1 and S2 jointly compute (a′_lm, a″_lm) ← SecMul(ω_lm, x_{i+l,j+m}); then S1 computes c′_ij and S2 computes c″_ij:

$$c'_{ij} = \sum_{l=0}^{n-1}\sum_{m=0}^{n-1} a'_{lm} + b', \qquad c''_{ij} = \sum_{l=0}^{n-1}\sum_{m=0}^{n-1} a''_{lm} + b''$$
In the deconvolution layer, the secure deconvolution protocol SecDeconv is computed in the same way as SecConv; the difference is that the input of the deconvolution computation is padded with zeros so that the output matrix has the required size;
In the batch normalization layer, the secure batch normalization protocol SecBN is used. The input is x_i, with x_i = x′_i + x″_i and batch size m, and the protocol flow is as follows. S1 and S2 compute their shares of the batch mean locally:

$$\mu'_B = \frac{1}{m}\sum_{i=1}^{m} x'_i, \qquad \mu''_B = \frac{1}{m}\sum_{i=1}^{m} x''_i$$

S1 and S2 jointly compute the batch variance: (a′_i, a″_i) ← SSq(x_i − μ_B),

$$\sigma_B^{2\prime} = \frac{1}{m}\sum_{i=1}^{m} a'_i, \qquad \sigma_B^{2\prime\prime} = \frac{1}{m}\sum_{i=1}^{m} a''_i$$

S1 and S2 then jointly compute the normalized value

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}$$

First they call the SISqrt protocol to compute 1/√(σ²_B + ε), where ε is a constant that ensures numerical stability; then, writing t_i = x_i − μ_B for the deviation, they call the secure multiplication protocol SecMul to compute x̂_i = SecMul(t_i, 1/√(σ²_B + ε)). The scaling and shifting parameters γ and β are assumed to be publicly known global parameters; they are fixed during forward propagation and updated during backward propagation. S1 and S2 compute their shares of the normalized output

$$y_i = \gamma \hat{x}_i + \beta$$

locally.
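Putting the SecBN forward pass together, the following sketch mocks the cryptographic sub-protocols (SSq, SISqrt, SecMul) with plaintext arithmetic but keeps the share-wise data flow, including the local γ/β step with β added by one server only (an assumption; the patent does not spell out which server adds β):

```python
import random

def split(x):
    r = random.uniform(-1, 1)
    return r, x - r

def sec_bn(x1, x2, gamma=1.0, beta=0.0, eps=1e-5):
    m = len(x1)
    mu1 = sum(x1) / m                  # S1's share of the batch mean
    mu2 = sum(x2) / m                  # S2's share
    mu = mu1 + mu2                     # kept split in the real protocol
    var = sum((a + b - mu) ** 2 for a, b in zip(x1, x2)) / m  # via SSq
    inv_std = (var + eps) ** -0.5      # via SISqrt in the real protocol
    t1 = [(a - mu1) * inv_std for a in x1]   # SecMul on shares of t_i
    t2 = [(b - mu2) * inv_std for b in x2]
    y1 = [gamma * v + beta for v in t1]      # gamma, beta are public
    y2 = [gamma * v for v in t2]             # beta added once overall
    return y1, y2

xs = [0.5, -1.0, 2.0, 0.25]
sh = [split(v) for v in xs]
y1, y2 = sec_bn([a for a, _ in sh], [b for _, b in sh])
mu = sum(xs) / 4
var = sum((v - mu) ** 2 for v in xs) / 4
for (a, b), v in zip(zip(y1, y2), xs):
    assert abs((a + b) - (v - mu) / (var + 1e-5) ** 0.5) < 1e-9
```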
In the ReLU layer, the SR protocol is used; the activation layer provides the non-linear computation of the neural network. Let the input be x. S1 and S2 jointly execute the secure comparison protocol SecCmp to determine whether x is greater than 0: if SecCmp(x, 0) = 1, i.e., x > 0, then SR(x) = x; otherwise SR(x) = 0. For the (i, j)-th neuron, S1 and S2 compute SR(x′_ij) and SR(x″_ij) respectively:

$$SR(x_{ij}) = \begin{cases} x_{ij}, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ 0, & \text{otherwise} \end{cases}$$
A drawback of the ReLU function is that it only propagates a gradient when x is greater than 0. To address this, LeakyReLU introduces a non-zero constant α ∈ (0, 1) for the case x ≤ 0. For the (i, j)-th neuron, the SLR protocol is used: S1 and S2 compute SLR(x′_ij) and SLR(x″_ij) respectively, where

$$SLR(x_{ij}) = \begin{cases} x_{ij}, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ \alpha x_{ij}, & \text{otherwise} \end{cases}$$

and α ∈ (0, 1) is a non-zero constant.
In the fully connected layer, the neurons connect to all neurons of the previous layer, and the secure fully connected protocol SecFC is used. Let the input of the fully connected layer be the output x_k of the k-th neuron of the previous layer, ω_ik the connection weight between the i-th neuron of the current layer and the k-th neuron of the previous layer, and b_i the bias of the i-th neuron of the current layer. The SecFC protocol flow is: S1 and S2 jointly compute (a′_ik, a″_ik) ← SecMul(ω_ik, x_k), then f′_i = Σ_k a′_ik + b′_i and f″_i = Σ_k a″_ik + b″_i.
Throughout, the superscripts ′ and ″ denote the two shares obtained by splitting the corresponding unprimed value.
In this embodiment, running the privacy-preserving training and generation process specifically comprises the following steps:
Step S1: the data provider DS generates the model parameters θ_d and θ_g of SD and SG, preprocesses and splits each into two sets of secret-shared data, and sends them to the first edge server S1 and the second edge server S2, respectively;
Step S2: train the discriminative model SD. The DS takes m random noise samples z and m real image samples x, preprocesses and splits each into two sets of secret-shared data, and sends them to S1 and S2. For the input noise samples, S1 and S2 execute the secure generative-model forward propagation protocol SG-FP to produce the generated images x̃ = G(z). For each input, S1 and S2 execute SD-FP and SG-FP and compute the loss function of SD, which in the standard GAN form used here is

$$L_D = -\frac{1}{m}\sum_{i=1}^{m}\Big[\log D\big(x^{(i)}\big) + \log\Big(1 - D\big(G(z^{(i)})\big)\Big)\Big]$$
S1 and S2 then execute the secure back propagation protocol SD-BP to update the parameters of SD;
Step S3: train the generative model SG. The DS takes m random noise samples z, preprocesses and splits them into two sets of secret-shared data, and sends them to S1 and S2. For the input noise samples, S1 and S2 execute the generative-model forward propagation protocol SG-FP to produce the generated images x̃ = G(z). For each generated image, S1 and S2 execute the secure forward propagation protocol SD-FP (a forward pass of the discriminator is needed here) to compute the loss function of SG, which in the standard form is

$$L_G = \frac{1}{m}\sum_{i=1}^{m}\log\Big(1 - D\big(G(z^{(i)})\big)\Big)$$
S1 and S2 then execute the secure back propagation protocol SG-BP to update the parameters of SG;
Step S4: the first edge server S1 and the second edge server S2 return their secret shares of the trained model parameters to the service provider SP.
The specific algorithm of the training step is protocol 7. Let the parameters of the discriminative model be θ_d and the parameters of the generative model θ_g; the random noise sample is z, the real image sample is x, and the generated image is x̃ = G(z). The learning rate of the secure discriminative model SD is η_d and that of the secure generative model SG is η_g; the batch size is m, the total number of training samples is M, and the numbers of model training rounds are N_d and N_g. The forward and back propagation protocols of the secure discriminative model SD are SD-FP and SD-BP respectively, and those of the secure generative model SG are SG-FP and SG-BP respectively.
(Protocol 7 listing appears as an image in the original.)
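The alternating schedule of protocol 7 can be summarized as the following skeleton; the stub names, argument shapes, and loss handling are illustrative assumptions, with every secure sub-protocol abstracted behind a callback:

```python
def train_lp_gan(sample_real_shares, sample_noise_shares, m,
                 N_d, N_g, epochs, sd_fp, sd_bp, sg_fp, sg_bp):
    """Alternating secure training: N_d discriminator rounds, then
    N_g generator rounds, per epoch; every callback works on shares."""
    for _ in range(epochs):
        for _ in range(N_d):                        # step S2: train SD
            x = sample_real_shares(m)               # shares of a real batch
            z = sample_noise_shares(m)              # shares of noise
            fake = sg_fp(z)                         # SG-FP: secure G forward
            d_real, d_fake = sd_fp(x), sd_fp(fake)  # SD-FP: secure D forward
            sd_bp(d_real, d_fake)                   # SD-BP: secure D update
        for _ in range(N_g):                        # step S3: train SG
            z = sample_noise_shares(m)
            d_fake = sd_fp(sg_fp(z))                # forward through G then D
            sg_bp(d_fake)                           # SG-BP: secure G update
```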
This embodiment proposes two secure forward propagation protocols, the secure discriminative model forward propagation protocol (SD-FP) and the secure generative model forward propagation protocol (SG-FP), which carry out the secure computation of the models' forward passes. The invention also designs a secure computation protocol for each kind of layer in the models, including the convolution, activation, batch normalization, and fully connected layers (as described above). In the protocols, the hyper-parameters and constants are in plaintext form, while all other data is randomly secret-shared to protect the privacy of the training images and the model.
In this embodiment, SD-FP is the forward propagation protocol of the discriminative model. Specifically: let the input of the discriminative model be the secret shares I′, I″ of an image, where the first edge server S1 inputs I′ and the second edge server S2 inputs I″. In layer i of the model, the result of the secure convolution protocol SecConv is c′(i), c″(i); the result of the secure batch normalization protocol SecBN is y′(i), y″(i); and the result of the secure LeakyReLU function is slr′(i), slr″(i). Finally, the protocol calls the secure fully connected protocol SecFC and outputs the discrimination results f′, f″ for the image. As shown in protocol 8 (the listing appears as an image in the original).
In this embodiment, SG-FP is the forward propagation protocol of the generative model. Specifically: let the input of the generative model be the secret shares Z′, Z″ of a noise image, where the first edge server S1 inputs Z′ and the second edge server S2 inputs Z″. In layer i of the model, the result of the secure deconvolution protocol SecDeconv is c′(i), c″(i); the result of the secure batch normalization protocol SecBN is y′(i), y″(i); and the result of the secure ReLU protocol SR is sr′(i), sr″(i). The protocol iterates these computations and outputs the generated image f′, f″. As shown in protocol 9 (the listing appears as an image in the original).
In this embodiment, SD-BP is the back propagation protocol of the discriminative model, and specifically comprises:
Step S11: secure back propagation of the loss function. With the parameters of the generative model SG fixed, the gradient of the loss function of the discriminative model SD is computed with the stochastic gradient descent algorithm SGD. m samples are taken from the real data and from the random noise data; their secret shares x′(i), z′(i) and x″(i), z″(i) (i = 1, ..., m) are sent to the first edge server S1 and the second edge server S2, respectively. The partial derivative of the loss function is denoted ∂L_D/∂D; because differentiating the logarithmic loss yields the reciprocal terms 1/D(x(i)) and 1/(1 − D(G(z(i)))), SD computes this partial derivative securely with the secure reciprocal protocol SecInv (the share-wise computation appears as an equation image in the original).
Step S12: secure back propagation of the activation layer. In the discriminative model SD, let δ_SLR denote the partial derivative of the secure activation function SLR; it is computed with the secure comparison protocol SecCmp, where α is the non-zero LeakyReLU parameter:

$$\delta_{SLR}(x_{ij}) = \begin{cases} 1, & \mathrm{SecCmp}(x_{ij}, 0) = 1 \\ \alpha, & \text{otherwise} \end{cases}$$
Step S13: secure back propagation of the batch normalization layer. Let ∂L/∂γ, ∂L/∂β, and ∂L/∂x̂_i denote the partial derivatives of the loss with respect to γ, β, and the normalized value x̂_i, where x̂_i, μ_B, and σ²_B are intermediate results of the secure batch normalization protocol SecBN, and let δ_i denote the gradient arriving at the output y_i. The secure gradients of the parameters γ and β then take the standard batch-normalization form

$$\frac{\partial L}{\partial \gamma} = \sum_{i=1}^{m} \delta_i\, \hat{x}_i, \qquad \frac{\partial L}{\partial \beta} = \sum_{i=1}^{m} \delta_i$$

computed share-wise with SecMul. Since γ and β are public parameters, after the gradients are computed S1 sends its gradient shares to S2 and, at the same time, S2 sends its gradient shares to S1, so that the public parameters can be restored. With learning rate η_B, γ and β are updated as

$$\gamma \leftarrow \gamma - \eta_B\,\frac{\partial L}{\partial \gamma}, \qquad \beta \leftarrow \beta - \eta_B\,\frac{\partial L}{\partial \beta}$$
The main body of the protocol then computes the partial derivative ∂L/∂x_i of the loss with respect to the normalized input x_i. With t_i = x_i − μ_B an intermediate result of the secure forward computation, the secure back propagation protocol uses intermediate variables I_i = I′_i + I″_i (i = 1, ..., m) to simplify the backward computation, which proceeds share-wise as

(b′_i, b″_i) ← SSq(φ′_i, φ″_i); (c′_i, c″_i) ← SecMul(b_i, δ_i)

where φ_i denotes an intermediate share from the forward pass (the surrounding steps appear as equation images in the original). The input gradient ∂L/∂x_i is then obtained with SecMul, SSq, and SISqrt from the standard batch-normalization backward formula

$$\frac{\partial L}{\partial x_i} = \frac{\gamma}{\sqrt{\sigma_B^2+\varepsilon}}\left(\delta_i - \frac{1}{m}\sum_{j=1}^{m}\delta_j - \frac{\hat{x}_i}{m}\sum_{j=1}^{m}\delta_j\,\hat{x}_j\right)$$
Step S14: secure back propagation of the convolutional layer. Let the learning rate of the convolutional layer be η_C, and let δ_ij denote the partial derivative of the loss at the (i, j)-th neuron of the network model. S1 and S2 cooperatively execute the following protocol to compute the gradients of the weights ω and bias b, in the standard form

$$\frac{\partial L}{\partial \omega_{lm}} = \sum_{i}\sum_{j} \delta_{ij}\, x_{i+l,j+m}, \qquad \frac{\partial L}{\partial b} = \sum_{i}\sum_{j} \delta_{ij}$$

(the products are computed share-wise with SecMul), and to update the original parameters:

$$\omega_{lm} \leftarrow \omega_{lm} - \eta_C \frac{\partial L}{\partial \omega_{lm}}, \qquad b \leftarrow b - \eta_C \frac{\partial L}{\partial b}$$
In this embodiment, SG-BP is the back propagation protocol of the generative model. Since most of the protocol is essentially the same as SD-BP, only the two secure back propagation computations that differ in the generative model are given here, specifically:
Step S21: secure back propagation of the loss function. In SG-BP, differentiating the generator loss likewise reduces to reciprocal terms, and the back propagation of the secure loss function is computed with SecInv on the shares of D(G(z(i))) (the share-wise computation appears as an equation image in the original).
step S22: secure active layer reversalPropagation: let deltaSRThe partial derivative of the safety ReLU algorithm SR is expressed, and the calculation algorithm of the safety partial derivative of SR is as follows:
Figure BDA0002374925230000262
In summary, this embodiment supports the interactive execution of the secure computing protocols by the edge servers without a fully trusted third-party entity; the system is guaranteed to complete the machine learning computation securely without revealing the training data or the model parameters, and the user need not interact with the servers while the protocols execute. Second, the framework provides basic computation protocols with high efficiency, low communication consumption, and high precision, which reduce the execution time and data transmission of the complex GAN model's training and prediction computations, alleviate the error-propagation problem, and improve the practicality of the system while protecting data privacy. Third, the framework supports the GAN's secure image generation, secure prediction of image authenticity, and related functions. Fourth, the framework provides a secure training scheme for the GAN model, with which the edge servers can update the model parameters under privacy protection. Finally, the training data set and the model parameters of the machine learning task are kept secret from the edge servers, achieving stronger privacy protection.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification, equivalent change or refinement of the above embodiments made according to the technical essence of the present invention remains within the protection scope of the technical solution of the present invention.

Claims (7)

1. A lightweight privacy-preserving generative adversarial network system, characterized in that its entities comprise a data provider DS, a service provider SP, a first edge server S1 and a second edge server S2, and its software comprises an LP-GAN secure computing framework; the LP-GAN secure computing framework comprises a secure generation model SG and a secure discrimination model SD;
the data provider DS owns the training image data and provides the initial parameters for training the DCGAN neural network; in order to protect the privacy of the image data I and the model parameters P, the DS splits them locally at random into two sets of secret-shared data (I′, I″) and (P′, P″), which the DS then sends to the first edge server S1 and the second edge server S2, respectively; meanwhile, the DS is responsible for generating the random values required by the interactive security protocols;
the first edge server S1 and the second edge server S2 are responsible for executing the interactive secure computing protocols; S1 and S2 respectively obtain the secret-shared data (I′, P′) and (I″, P″) as input from the data provider and run the privacy-preserving training and generation protocols; afterwards, S1 and S2 respectively return the trained privacy-protected parameters P′, P″ and the generated images O′, O″ and send them to the service provider SP;
the service provider SP receives the secret shares generated by S1 and S2 and recovers the plaintext training parameters P of the DCGAN and the plaintext generated image O.
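By way of illustration of the splitting and recovery in claim 1, a minimal sketch using additive secret sharing over real values follows; the patent may instead share fixed-point encodings, and all function names here are illustrative:

import numpy as np

def share(data, rng):
    # Split data into two additive secret shares: data = share1 + share2.
    r = rng.normal(size=data.shape)
    return data - r, r

def recover(s1, s2):
    # Recombine the shares held by S1 and S2 (done by the SP).
    return s1 + s2

rng = np.random.default_rng(42)
image = rng.random((28, 28))          # stand-in for a training image I
i1, i2 = share(image, rng)            # I' goes to S1, I'' goes to S2
assert np.allclose(recover(i1, i2), image)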
2. The lightweight privacy-preserving generative adversarial network system of claim 1, wherein the secure generation model SG comprises a deconvolution layer, a batch normalization layer, a ReLU layer and a fully connected layer; the secure discrimination model SD comprises a convolution layer, a batch normalization layer, a LeakyReLU layer and a fully connected layer; each layer includes a secure computing protocol, specifically as follows:
wherein, in the convolution layer, let xij be the element in the i-th row and j-th column of the input matrix X of the convolution computation, and let ω and b be the weight and bias, respectively, of the convolution kernel of size n×n; to protect the privacy of the training data, X is first randomly split into two random secret shares X′ and X″ with X = X′ + X″, whereby each xij is split into two random values x′ij and x″ij, and the weights and bias are likewise split into two sets of random secret shares (ω′lm, ω″lm) and (b′, b″), wherein ωlm = ω′lm + ω″lm and b = b′ + b″; the secure convolution protocol SecConv is calculated as follows: S1 and S2 jointly calculate (a′lm, a″lm) ← SecMul(ωlm, x_{i+l,j+m}); then S1 calculates c′ij and S2 calculates c″ij:
c′ij = Σl Σm a′lm + b′,  c″ij = Σl Σm a″lm + b″;
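The claims use SecMul as a primitive without defining it; one standard instantiation is Beaver-triple multiplication, with the DS playing the dealer that generates the required random values. That instantiation is an assumption; a single-process sketch of SecConv under it (all names illustrative):

import numpy as np

rng = np.random.default_rng(7)

def share(v):
    # Additive secret sharing: v = v1 + v2.
    r = rng.normal(size=np.shape(v))
    return v - r, r

def beaver_triple(shape):
    # Dealer samples a, b and distributes shares of a, b and c = a*b.
    a, b = rng.normal(size=shape), rng.normal(size=shape)
    return share(a), share(b), share(a * b)

def sec_mul(x_sh, y_sh, triple):
    # Beaver multiplication: the servers open e = x - a and f = y - b,
    # then combine locally so that z1 + z2 = x * y (elementwise).
    (x1, x2), (y1, y2) = x_sh, y_sh
    (a1, a2), (b1, b2), (c1, c2) = triple
    e = (x1 - a1) + (x2 - a2)
    f = (y1 - b1) + (y2 - b2)
    z1 = c1 + e * b1 + f * a1 + e * f
    z2 = c2 + e * b2 + f * a2
    return z1, z2

def sec_conv(X_sh, w_sh, b_sh, n):
    # SecConv: one SecMul per kernel window, then local sums plus bias shares,
    # so that C1[i,j] + C2[i,j] equals the plaintext convolution output c_ij.
    (X1, X2), (b1, b2) = X_sh, b_sh
    H = X1.shape[0] - n + 1
    C1, C2 = np.zeros((H, H)), np.zeros((H, H))
    for i in range(H):
        for j in range(H):
            patch = (X1[i:i+n, j:j+n], X2[i:i+n, j:j+n])
            a1, a2 = sec_mul(w_sh, patch, beaver_triple((n, n)))
            C1[i, j], C2[i, j] = a1.sum() + b1, a2.sum() + b2
    return C1, C2

X, w, b = rng.normal(size=(5, 5)), rng.normal(size=(3, 3)), 0.5
C1, C2 = sec_conv(share(X), share(w), share(b), 3)
ref = np.array([[(X[i:i+3, j:j+3] * w).sum() + b for j in range(3)]
                for i in range(3)])
assert np.allclose(C1 + C2, ref)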
in the deconvolution layer, the secure deconvolution protocol SecDeconv is calculated in the same way as SecConv, the difference being that the input image of the deconvolution computation must be padded with zeros to match the size of the output matrix;
wherein, in the batch normalization layer, a secure batch normalization protocol SecBN is adopted; the input is xi with xi = x′i + x″i, and the batch size is m; the protocol flow is as follows: S1 and S2 respectively calculate the shares of the batch mean:
μ′B = (1/m)·Σi=1..m x′i,  μ″B = (1/m)·Σi=1..m x″i;
S1 and S2 jointly calculate the batch variance: (a′i, a″i) ← SSq(xi − μB),
σ′B² = (1/m)·Σi=1..m a′i,  σ″B² = (1/m)·Σi=1..m a″i;
S1 and S2 jointly compute the normalized value x̂i = (xi − μB)/√(σB² + ε), where ε is a constant used to ensure numerical stability: first, the SISqrt protocol is called to compute secret shares of 1/√(σB² + ε); then, with ti = xi − μB denoting the deviation of xi from the batch mean, the secure multiplication protocol SecMul is called to compute (x̂′i, x̂″i) ← SecMul(ti, 1/√(σB² + ε));
assuming the scale and shift parameters γ and β are publicly known global parameters (they are fixed during forward propagation and updated during back propagation), S1 and S2 separately compute the normalized outputs y′i and y″i such that y′i + y″i = γ·x̂i + β;
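A single-process sketch of the SecBN forward flow follows; the SSq, SISqrt and SecMul steps are simulated by oracles that reconstruct their inputs (which a real protocol would avoid), and the convention of adding β on S1's share only is an assumption:

import numpy as np

rng = np.random.default_rng(3)

def share(v):
    r = rng.normal(size=np.shape(v))
    return v - r, r

def ssq_oracle(t1, t2):
    # Stand-in for SSq: shares of t^2 (elementwise).
    return share((t1 + t2) ** 2)

def sisqrt_oracle(s1, s2, eps):
    # Stand-in for SISqrt: shares of 1/sqrt(s + eps).
    return share(1.0 / np.sqrt(s1 + s2 + eps))

def sec_bn_forward(x1, x2, gamma, beta, eps=1e-5):
    mu1, mu2 = x1.mean(0), x2.mean(0)        # local mean shares (linear step)
    t1, t2 = x1 - mu1, x2 - mu2              # shares of t_i = x_i - mu_B
    a1, a2 = ssq_oracle(t1, t2)              # shares of t_i^2
    var1, var2 = a1.mean(0), a2.mean(0)      # shares of sigma_B^2
    p1, p2 = sisqrt_oracle(var1, var2, eps)  # shares of 1/sqrt(var + eps)
    # x_hat = t * inv_std is one SecMul call; reconstructed via oracle here
    xh1, xh2 = share((t1 + t2) * (p1 + p2))
    # public gamma/beta applied locally; beta added by S1 only (assumption)
    return gamma * xh1 + beta, gamma * xh2

x = rng.normal(size=(8, 4))
x1 = rng.normal(size=x.shape); x2 = x - x1
y1, y2 = sec_bn_forward(x1, x2, gamma=1.0, beta=0.0)
ref = (x - x.mean(0)) / np.sqrt(x.var(0) + 1e-5)
assert np.allclose(y1 + y2, ref)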
wherein, in the ReLU layer, the SR protocol is adopted, the flow being as follows: S1 and S2 respectively compute the shares SR′(xij) and SR″(xij) such that SR′(xij) + SR″(xij) = SR(xij) = max(xij, 0);
Wherein, in the LeakyReLU layer, an SLR protocol is adopted, and the flow is as follows: s1And S2Separately calculating SLR (x'ij) And SLR (x ″)ij):
Figure FDA00023749252200000213
Wherein α ∈ (0,1) is a non-zero constant;
wherein, in the fully connected layer, a secure fully connected protocol SecFC is adopted; let the input of the fully connected layer be the output xk of the k-th layer, let ωik represent the connection weight between the i-th neuron of the current layer and the k-th neuron of the previous layer, and let bi represent the bias of the i-th neuron of the current layer; the SecFC protocol flow is as follows: S1 and S2 jointly calculate (a′ik, a″ik) ← SecMul(ωik, xk); f′i = Σk a′ik + b′i, f″i = Σk a″ik + b″i.
The superscripts ′ and ″ above respectively denote the two shares resulting from splitting the corresponding unprimed data.
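A sketch of SecFC with SecMul simulated by an oracle (illustrative names), checking that the output shares recombine to the plaintext fully connected output Wx + b:

import numpy as np

rng = np.random.default_rng(9)

def share(v):
    r = rng.normal(size=np.shape(v))
    return v - r, r

def sec_mul_oracle(u_sh, v_sh):
    # Stand-in for SecMul (e.g., Beaver-triple based): shares of u * v.
    return share((u_sh[0] + u_sh[1]) * (v_sh[0] + v_sh[1]))

def sec_fc(W_sh, x_sh, b_sh):
    # f'_i = sum_k a'_ik + b'_i and f''_i = sum_k a''_ik + b''_i,
    # with (a'_ik, a''_ik) <- SecMul(w_ik, x_k) done entrywise.
    (W1, W2), (x1, x2), (b1, b2) = W_sh, x_sh, b_sh
    A1, A2 = sec_mul_oracle((W1, W2), (x1[None, :], x2[None, :]))
    return A1.sum(axis=1) + b1, A2.sum(axis=1) + b2

W, x, b = rng.normal(size=(4, 3)), rng.normal(size=3), rng.normal(size=4)
f1, f2 = sec_fc(share(W), share(x), share(b))
assert np.allclose(f1 + f2, W @ x + b)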
3. The system of claim 2, wherein running the privacy-preserving training and generation process specifically comprises the following steps:
step S1: the data provider DS generates the model parameters θd and θg of SD and SG, preprocesses and splits each of them into two sets of secret-shared data, and sends them respectively to the first edge server S1 and the second edge server S2;
step S2: training the discrimination model SD: the DS takes m random noise samples z and m real image samples x, preprocesses and splits each of them into two sets of secret-shared data, and sends them to S1 and S2; for the input noise samples, S1 and S2 execute the secure generative-model forward propagation protocol SG-FP to generate the images G(z); for each input, S1 and S2 execute SD-FP and SG-FP and calculate the loss function of SD; S1 and S2 then execute the secure back-propagation protocol SD-BP to update the parameters of SD;
step S3: training the generation model SG: the DS takes m random noise samples z, preprocesses and splits them into two sets of secret-shared data, and sends them to S1 and S2; for the input noise samples, S1 and S2 execute the generative-model forward propagation protocol SG-FP to generate images; for each generated image, S1 and S2 execute the secure forward propagation protocol SD-FP to calculate the loss function of SG; S1 and S2 then execute the secure back-propagation protocol SG-BP to update the parameters of SG;
step S4: the first edge server S1 and the second edge server S2 respectively return the secret shares of the trained model parameters to the service provider SP.
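A structural sketch of steps S1 to S4 as a single-process loop follows; the SG-FP, SD-FP, SD-BP and SG-BP calls are stubs standing in for the interactive protocols of claims 4 to 7, and every name and shape is illustrative:

import numpy as np

rng = np.random.default_rng(11)

def share(v):
    r = rng.normal(size=np.shape(v))
    return v - r, r

def sg_fp(theta_g_sh, z_sh):    # stub: secure generator forward pass
    return share(rng.normal(size=(28, 28)))

def sd_fp(theta_d_sh, img_sh):  # stub: secure discriminator forward pass
    return share(rng.random())

def sd_bp(theta_d_sh, loss_sh): # stub: secure discriminator backward pass
    return theta_d_sh

def sg_bp(theta_g_sh, loss_sh): # stub: secure generator backward pass
    return theta_g_sh

m = 4
theta_d = share(rng.normal(size=10))   # step S1: DS shares theta_d, theta_g
theta_g = share(rng.normal(size=10))
# step S2: train SD on m noise samples z and m real samples x
for _ in range(m):
    z, x = share(rng.normal(size=100)), share(rng.random((28, 28)))
    fake = sg_fp(theta_g, z)                        # SG-FP
    d_loss = (sd_fp(theta_d, x), sd_fp(theta_d, fake))
    theta_d = sd_bp(theta_d, d_loss)                # SD-BP
# step S3: train SG on m fresh noise samples
for _ in range(m):
    z = share(rng.normal(size=100))
    fake = sg_fp(theta_g, z)
    g_loss = sd_fp(theta_d, fake)
    theta_g = sg_bp(theta_g, g_loss)                # SG-BP
# step S4: each server returns its parameter shares to the SP
recovered_theta_g = theta_g[0] + theta_g[1]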
4. The system of claim 3, wherein SD-FP is the forward propagation protocol of the discrimination model, specifically: let the input of the discrimination model be the secret shares I′, I″ of the image, where the first edge server S1 takes I′ as input and the second edge server S2 takes I″; in the i-th layer of the model, the result of the secure convolution protocol SecConv is c′(i), c″(i), the result of SecBN is y′(i), y″(i), and the result of the secure LeakyReLU function is slr′(i), slr″(i); finally, the protocol calls the secure fully connected protocol SecFC to output the discrimination results f′, f″ of the image.
5. The lightweight privacy-preserving generative adversarial network system of claim 3, wherein SG-FP is the forward propagation protocol of the generation model, specifically: let the input of the generation model be the secret shares Z′, Z″ of the noise image, where the first edge server S1 takes Z′ as input and the second edge server S2 takes Z″; in the i-th layer of the model, the result of the secure deconvolution protocol SecDeconv is c′(i), c″(i), the result of the secure batch normalization protocol SecBN is y′(i), y″(i), and the result of the secure ReLU protocol SR is sr′(i), sr″(i); the protocol iterates these computations and outputs the generated images f′, f″.
6. The lightweight privacy-preserving generative adversarial network system of claim 3, wherein SD-BP is the back-propagation protocol of the discrimination model, specifically:
step S11: secure loss-function back propagation: assuming the parameters of the generation model SG are fixed, the gradient of the loss function of the discrimination model SD is calculated with the stochastic gradient descent algorithm SGD; m samples are taken from the real data and from the random noise data respectively, and the secret shares x′(i), z′(i) and x″(i), z″(i) of the samples are sent to the first edge server S1 and the second edge server S2 respectively, i = 1, …, m; writing f(x^(i)) and f(G(z^(i))) for the discrimination results on the real and generated samples, SD securely calculates the partial derivatives of the loss function using the secure reciprocal protocol SecInv, as follows:
(u′i, u″i) ← SecInv(f(x^(i))); (v′i, v″i) ← SecInv(1 − f(G(z^(i))));
δ′f(x^(i)) = −u′i/m, δ″f(x^(i)) = −u″i/m; δ′f(G(z^(i))) = v′i/m, δ″f(G(z^(i))) = v″i/m;
step S12: secure activation-layer back propagation: in the discrimination model SD, let the partial derivative of the secure activation function SLR be denoted by δSLR; the partial derivative of SLR is calculated with the secure comparison protocol SecCmp, wherein α is the non-zero parameter of SLR, yielding secret shares δ′SLR, δ″SLR of the value δSLR = 1 when xij > 0 and δSLR = α otherwise;
step S13: secure batch-normalization-layer back propagation: let ∂ℓ/∂xi, ∂ℓ/∂yi and ∂ℓ/∂x̂i respectively denote the partial derivatives of the loss with respect to xi, yi and x̂i, where xi, yi and x̂i are intermediate results of the secure batch normalization protocol SecBN; denoting the gradient by ▽, the secure gradients of the parameters γ and β are calculated as follows:
(a′i, a″i) ← SecMul(δyi, x̂i); ▽γ′ = Σi=1..m a′i, ▽γ″ = Σi=1..m a″i; ▽β′ = Σi=1..m δ′yi, ▽β″ = Σi=1..m δ″yi,
wherein δyi = ∂ℓ/∂yi; since the parameters γ and β are public parameters, after the gradients are calculated S1 sends (▽γ′, ▽β′) to S2 while, at the same time, S2 sends (▽γ″, ▽β″) to S1 for recovering the public gradients; assuming a learning rate of ηB, γ and β are updated using the following algorithm: γnew = γ − ηB·▽γ, βnew = β − ηB·▽β; the protocol mainly calculates the partial derivative ∂ℓ/∂xi of the normalized input xi: let φi (with shares φ′i, φ″i) and ti = xi − μB be intermediate results of the secure forward computation; the secure back-propagation protocol utilizes intermediate variables Ii = I′i + I″i, i = 1, …, m, to simplify the backward computation as follows:
(b′i, b″i) ← SSq(φ′i, φ″i); (c′i, c″i) ← SecMul(bi, ti);
then the partial derivative ∂ℓ/∂xi of the input xi is calculated, in secret-shared form, by the standard batch-normalization backward formula
∂ℓ/∂xi = (1/√(σB² + ε)) · ( δx̂i − (1/m)·Σj δx̂j − (x̂i/m)·Σj δx̂j·x̂j ),
wherein δx̂i = ∂ℓ/∂x̂i;
step S14: secure convolutional-layer back propagation: assuming the learning rate of the convolutional layer is ηC and letting δij denote the partial derivative of the loss with respect to the (i, j)-th neuron in the network model, S1 and S2 cooperatively execute the following protocol to calculate the gradients ▽ω and ▽b of the weights ω and bias value b and to update the original parameters:
(a′ij, a″ij) ← SecMul(δij, x_{i+l,j+m}); ▽ω′lm = Σi,j a′ij, ▽ω″lm = Σi,j a″ij; ▽b′ = Σi,j δ′ij, ▽b″ = Σi,j δ″ij;
ω′lm(new) = ω′lm − ηC·▽ω′lm; ω″lm(new) = ω″lm − ηC·▽ω″lm; b′(new) = b′ − ηC·▽b′; b″(new) = b″ − ηC·▽b″.
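Step S11's derivative computation, sketched under the assumption that the discriminator loss is the standard GAN loss ℓD = −(1/m)·Σi [log f(x^(i)) + log(1 − f(G(z^(i))))] (the claim names only SecInv, which is oracle-simulated here; all names illustrative):

import numpy as np

rng = np.random.default_rng(13)

def share(v):
    r = rng.normal(size=np.shape(v))
    return v - r, r

def sec_inv_oracle(s1, s2):
    # Stand-in for SecInv: additive shares of 1/s.
    return share(1.0 / (s1 + s2))

def sd_loss_grads(fx_sh, fgz_sh, m):
    # Under the assumed loss, dL/df(x_i) = -1/(m*f(x_i)) and
    # dL/df(G(z_i)) = 1/(m*(1 - f(G(z_i)))).
    u1, u2 = sec_inv_oracle(*fx_sh)
    v1, v2 = sec_inv_oracle(1.0 - fgz_sh[0], -fgz_sh[1])  # shares of 1 - f(G(z))
    return (-u1 / m, -u2 / m), (v1 / m, v2 / m)

fx, fgz, m = 0.8, 0.3, 4
(dx1, dx2), (dz1, dz2) = sd_loss_grads(share(fx), share(fgz), m)
assert np.isclose(dx1 + dx2, -1 / (m * fx))
assert np.isclose(dz1 + dz2, 1 / (m * (1 - fgz)))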
7. The system according to claim 3, wherein SG-BP is the back-propagation protocol of the generation model; since part of the protocol is substantially the same as SD-BP, only the two secure back-propagation computations that differ in the generation model are given here, specifically:
step S21: secure loss-function back propagation: in SG-BP, writing f^(i) for the discrimination result of the i-th generated image and taking the generator loss ℓG = (1/m)·Σi=1..m log(1 − f^(i)), the partial derivative of the secure loss function is calculated with the secure reciprocal protocol SecInv as follows:
(u′i, u″i) ← SecInv(1 − f^(i)); δ′f(i) = −u′i/m, δ″f(i) = −u″i/m;
step S22: secure activation-layer back propagation: let δSR denote the partial derivative of the secure ReLU algorithm SR; the secure partial derivative of SR is calculated with the secure comparison protocol SecCmp, yielding secret shares δ′SR, δ″SR of the value δSR = 1 when xij > 0 and δSR = 0 otherwise.