CN116541878A - Privacy protection method based on safe two-party calculation S-shaped function - Google Patents

Privacy protection method based on safe two-party calculation S-shaped function

Info

Publication number
CN116541878A
Authority
CN
China
Prior art keywords
client
server
layer
function
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310470235.6A
Other languages
Chinese (zh)
Inventor
李洪伟
胡佳
冯宇扬
陈涵霄
郝猛
张希琳
张源
刘鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202310470235.6A priority Critical patent/CN116541878A/en
Publication of CN116541878A publication Critical patent/CN116541878A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a privacy protection method based on secure two-party computation of sigmoid (S-shaped) functions, belonging to the technical field of privacy protection in federated learning. In a federated learning system comprising a server and a plurality of clients, the server responds to a prediction task request initiated by a client and transmits the neural network model matching the request to that client. Based on the client's data to be predicted, the client completes the forward inference of the neural network model layer by layer through data interaction with the server, under the privacy-preserving data sharing rules set by the invention, obtaining a prediction result for the data. Throughout the forward inference of federated learning, the parameters involved are held by the server and the client in secret-shared form, and the computed results likewise remain secret-shared, so privacy is guaranteed. Computation and communication costs are reduced while maintaining computational accuracy.

Description

Privacy protection method based on secure two-party computation of sigmoid functions
Technical Field
The invention belongs to the technical field of privacy protection in federated learning, and particularly relates to a privacy protection method based on secure two-party computation of sigmoid (S-shaped) functions.
Background
With the development of deep learning, neural network prediction is applied in more and more fields. However, in federated learning systems, existing deep-learning-based prediction systems face serious privacy concerns. If a user sends data containing private information directly to the service provider, privacy may be disclosed; conversely, if the service provider sends its neural network model to the user, the provider's intellectual property is violated. To solve this privacy problem, a scheme for securely computing neural network models is needed.
A sigmoid (S-shaped) function is a bounded, differentiable real function whose shaped curve has at least two focal points, and is also called a "bifocal curve function". A sigmoid function takes values over the real numbers, its derivative is everywhere non-negative, and it has exactly one inflection point. Common sigmoid functions are Logistic and Tanh, which are widely used in deep learning, for example in recurrent neural network prediction. Although some methods provide secure implementations of these functions, they suffer serious performance bottlenecks and cannot be applied in practical scenarios.
Some privacy-preserving methods for two-party secure computation of mathematical functions have been proposed. One class of methods approximates these functions with higher-order polynomials; however, they provide low accuracy unless the polynomial degree is sufficiently high (typically degree 7 or more is required), which means this approach incurs substantial computation and communication overhead. Another class implements high-precision mathematical functions based on secure inference; however, to obtain ideal performance, this approach must re-balance accuracy and cost for each dataset and model, which is clearly impractical. Furthermore, a class of methods proposes approximate-polynomial-based protocols for floating-point mathematical functions, but these protocols have high overhead. The most advanced current protocols use lookup tables and OT extension techniques to provide higher accuracy, but still lack computational and communication efficiency.
Disclosure of Invention
The invention provides a privacy protection method based on secure two-party computation of sigmoid functions, which reduces resource overhead and guarantees computational accuracy on the premise of preserving the privacy of federated learning.
The invention adopts the technical scheme that:
A privacy protection method based on secure two-party computation of sigmoid functions, applied in a federated learning system comprising a server P_1 and a plurality of clients P_0, performs the following steps:
The server P_1, in response to a prediction task request initiated by a client P_0, issues the neural network model matching the request to the client P_0; the network layers of the neural network model are of two types: linear layers and nonlinear layers.
Based on the data to be predicted at its local end, the client P_0 completes the forward inference of the neural network model layer by layer through data interaction with the server P_1, obtaining the prediction result for the data; this specifically comprises the following steps:
Step 1, the client P_0 preprocesses the data to be predicted to match the input of the neural network model;
Step 2, for the first layer of the neural network model, the client P_0 additively secret-shares the input data of the neural network model and sends one share to the server P_1; l denotes the input data bit length of the current layer;
Step 3, forward inference is executed based on the shares of the current layer;
I) If the current layer is a linear layer, the forward inference comprises:
The client P_0 sends <x>_0 - r to P_1, so that the server P_1 can extract the masked input data x - r; here <x>_0 (an element of the ring Z_N) denotes the client P_0's share of this layer's input data, N = 2^l denotes the size of the ring Z_N, and r is a random number selected from the ring Z_N and held jointly by the server P_1 and the client P_0 in secret-shared form;
The client P_0, based on its local intermediate model parameter W' = W - b, sets its output of the current layer to y_0 = W'·r + a_0, and takes y_0 as the client P_0's input data for the next layer;
The server P_1 reconstructs the masked data x - r of the current layer and, based on its locally held model parameters, computes its output of the current layer y_1 = W·(x - r) + a_1, taking y_1 as the server P_1's input data for the next layer;
where W denotes the neural network model parameters held by the server P_1, and y_0 and y_1 are computed as follows:
The server P_1 sends W - b to the client P_0, and the client P_0 locally computes the parameter W' = W - b;
The server P_1 locally computes y_1 = W·(x - r) + a_1;
a_1 denotes the product-sharing parameter of the current linear layer, with a_1 = ab - a_0, where a denotes the random number selected by the client P_0 (the mask r above) and b denotes the random number selected by the server P_1; in this way y_0 + y_1 = W·x, so the layer output remains secret-shared.
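The masked linear-layer evaluation can be sketched in plaintext as follows. This is a minimal illustration under one consistent reading of the protocol (scalar weight, client mask a equal to r, preprocessed shares a_0 + a_1 = r·b), not a network implementation:

```python
import secrets

MOD = 1 << 64  # ring Z_N with N = 2^l, l = 64


def linear_layer_shares(W, x0, x1):
    """Produce additive shares y0, y1 of W*x from shares x0, x1 of x."""
    # Offline phase: client picks mask r (the random number a), server picks
    # mask b; both obtain additive shares a0, a1 of r*b with a1 = r*b - a0.
    r = secrets.randbelow(MOD)
    b = secrets.randbelow(MOD)
    a0 = secrets.randbelow(MOD)
    a1 = (r * b - a0) % MOD
    # Online phase: client sends x0 - r; the server adds its own share x1
    # and thereby learns only the masked input x - r.
    x_minus_r = ((x0 - r) + x1) % MOD
    # Server sends W - b; the client only ever sees the masked weight.
    W_masked = (W - b) % MOD
    y0 = (W_masked * r + a0) % MOD   # client share: (W-b)*r + a0
    y1 = (W * x_minus_r + a1) % MOD  # server share: W*(x-r) + a1
    return y0, y1


# Sanity check: the two shares reconstruct W*x.
x = 123
x0 = secrets.randbelow(MOD)
x1 = (x - x0) % MOD
y0, y1 = linear_layer_shares(7, x0, x1)
assert (y0 + y1) % MOD == (7 * x) % MOD
```

The correctness follows from (W-b)·r + a_0 + W·(x-r) + a_1 = W·x - b·r + r·b = W·x over the ring, so neither party ever handles the unmasked input or weight.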
II) If the current layer is a nonlinear layer, the forward inference comprises:
detecting the activation function of the current layer; if it is the Logistic function, step (3a) is executed, and if it is the Tanh function, step (3b) is executed;
(3a) The current layer performs the computation of the Logistic function, comprising the steps of:
(3a-1) The client P_0 and the server P_1 each call the most-significant-bit function MSB with input <x>^l to obtain its sign bit <msb>^B, where <x>^l denotes a shared value of l bits;
(3a-2) The client P_0 and the server P_1 each call the multiplexing function MUX to compute <nx>^l = -<x>^l + <msb>^B · 2<x>^l (i.e., nx = -|x|);
(3a-3) The client P_0 and the server P_1 each call the negative exponential function nExp with input <nx>^l to obtain <nxe>^{s+2}, where s denotes the number of fraction bits set in nExp:
(3a-3-1) The client P_0 and the server P_1 each locally compute the shared value <px>^l = -<nx>^l;
(3a-3-2) The client P_0 and the server P_1 call the numerical-splitting function DigDec with <px>^l as input, obtaining c split values of length d, {<a_j>^d}_{j in [c]}, where c = l/d and <px>^l = <a_{c-1}>^d || ... || <a_0>^d;
(3a-3-3) For each j in [c], the client P_0 and the server P_1 each call the lookup-table function LUT with input <a_j>^d, obtaining <T_j[a_j]>^{s+2}, where T_j[a_j] denotes the a_j-th entry of the lookup table T_j, and T_j[a_j] = Exp(-2^{dj-s}·a_j);
(3a-3-4) The client P_0 and the server P_1 each call the multiplication function Mult and the truncation function TR to compute the product of all the above entries, obtaining the result <nxe>^{s+2};
(3a-4) The client P_0 and the server P_1 each call the division function Div to compute <nxs>^{s+2} = 1/(1 + <nxe>^{s+2});
(3a-5) The client P_0 and the server P_1 each call the multiplexing function MUX to compute <ch>^{s+2} = 1 + <msb>^B·(<nxe>^{s+2} - 1);
(3a-6) The client P_0 and the server P_1 each call the multiplication function Mult and the truncation function TR to compute <xs>^{s+2} = <ch>^{s+2}·<nxs>^{s+2};
(3a-7) The client P_0 and the server P_1 each call the bit-extension function ZExt with input <xs>^{s+2} to obtain the shared value <y>^l.
(3b) The current layer performs the computation of the Tanh function, comprising the steps of:
(3b-1) The client P_0 and the server P_1 each call the most-significant-bit function MSB with input <x>^l to obtain its sign bit <msb>^B;
(3b-2) The client P_0 and the server P_1 each call the multiplexing function MUX to compute <nx>^l = -<x>^l + <msb>^B · 2<x>^l;
(3b-3) The client P_0 and the server P_1 each locally compute <n2x>^l = 2×<nx>^l;
(3b-4) The client P_0 and the server P_1 each call the negative exponential function nExp with input <n2x>^l to obtain <n2xe>^{s+2};
(3b-5) The client P_0 and the server P_1 each locally compute <r_1>^{s+2} = 1 - <n2xe>^{s+2} and <r_2>^{s+2} = 1 + <n2xe>^{s+2};
(3b-6) The client P_0 and the server P_1 each call the division function Div to compute <nxt>^{s+2} = <r_1>^{s+2} / <r_2>^{s+2};
(3b-7) The client P_0 and the server P_1 each locally compute <t_1>^{s+2} = -2×<nxt>^{s+2};
(3b-8) The client P_0 and the server P_1 each call the multiplexing function MUX to compute <t_2>^{s+2} = <msb>^B·<t_1>^{s+2};
(3b-9) The client P_0 and the server P_1 each locally compute <xt>^{s+2} = <nxt>^{s+2} + <t_2>^{s+2};
(3b-10) The client P_0 and the server P_1 each call the bit-extension function ZExt with input <xt>^{s+2} to obtain the shared value <y>^l.
Step 4, detecting whether the current layer is the last layer of the neural network model; if so, the server P_1 sends its held share of <y>^l to the client P_0;
the client P_0 reconstructs y from the shares held by the two parties, obtaining the forward inference result of the neural network model, i.e., the prediction result for the currently input data to be predicted;
if the current layer is not the last layer of the neural network model, the client P_0 and the server P_1 take the current result <y>^l as the input of the next layer and return to step 3 to continue the forward inference of the next layer.
Further, in order to reduce the bit width in the LUT protocol (lookup-table protocol) used by nExp: when the LUT protocol is executed, the sender deletes the m least significant bits of each entry in the lookup table, shortening each entry from l bits to l-m bits; after the LUT protocol is executed, the receiver pads each entry back to l bits with random bits, where m is a preset value.
Further, when calling the truncation function TR, the shift operation is performed only locally at the calling party's own end.
The technical scheme provided by the invention has at least the following beneficial effects:
(1) Throughout the forward inference of the federated learning model, the parameters involved are held by the server and the client in secret-shared form, and the computed results likewise remain secret-shared, so privacy is guaranteed.
(2) Resource overhead is reduced; compared with existing methods, the invention incurs less computation and communication overhead.
(3) Compared with computation without privacy protection, the computational accuracy of the invention is equivalent.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below.
The key objective of the privacy protection method based on secure two-party computation of sigmoid functions provided by the embodiment of the invention is to compute three mathematical functions (the exponential function, Logistic, and Tanh) under two-party secure computation, where the exponential function is the basis of the other two. On the premise of guaranteeing privacy, both accuracy and efficiency of computation are taken into account; specifically, the objectives of the embodiment of the invention are:
1) Privacy protection. Parameters are stored by both parties in secret-shared form, and computed results remain in secret-shared form, ensuring privacy.
2) Efficient evaluation. The method requires less computation and communication overhead than existing methods, which is particularly important in real-time or resource-limited scenarios.
3) Computational accuracy. Compared with computation without privacy protection, the method does not sacrifice computational accuracy.
For ease of understanding, the basic algorithms involved in embodiments of the invention are briefly described as follows:
1) Secret sharing:
the invention adopts two kinds of additive secret sharing, namely arithmetic secret sharing and Boolean secret sharing, on different rings.
In arithmetic secret sharing, a one-bit secret x is split into two random values:and->And the two random values are respectively P 0 And P 1 Two sides (e.g. P 0 Representing user side, P in federal learning 1 Representing a server in federal learning) and in the ring +.>(from [0,2 ] l -1]An integer ring of inner integers) is satisfied +.>
In Boolean secret sharing, secret Boolean values(i.e., [0,1 ]]) Split into two random boolean values +.>And-> And->Quilt P 0 And P 1 Holds in both directions and satisfies->B represents one bit.
Secret sharingIs only given by<x> 0 And<x> 1 in one of these cases, the secret x is completely hidden.
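The two sharing schemes can be illustrated with a minimal plaintext sketch (illustration only; a real deployment derives shares inside a 2PC protocol rather than generating both at one party):

```python
import secrets

L = 64          # ring bit width l
MOD = 1 << L    # ring Z_{2^l}


def share_arith(x):
    """Split an l-bit secret x into two additive shares over Z_{2^l}."""
    x0 = secrets.randbelow(MOD)
    x1 = (x - x0) % MOD
    return x0, x1


def open_arith(x0, x1):
    """Reconstruct the secret: x = x0 + x1 mod 2^l."""
    return (x0 + x1) % MOD


def share_bool(b):
    """Split a one-bit secret into two XOR (Boolean) shares."""
    b0 = secrets.randbelow(2)
    return b0, b ^ b0


x0, x1 = share_arith(42)
assert open_arith(x0, x1) == 42   # shares reconstruct the secret
b0, b1 = share_bool(1)
assert b0 ^ b1 == 1               # Boolean shares XOR to the secret
```

Each share on its own is a uniformly random ring element, so a single share reveals nothing about x.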
2) 2PC functions: based on the secret sharing scheme above, the following 2PC functions are used in the embodiments of the invention:
Zero extension (ZExt): the zero-extension function extends an m-bit number x in Z_{2^m} to an l-bit number y in Z_{2^l} with y = x in value, where Z_{2^m} denotes the integer ring formed by the integers in [0, 2^m - 1].
Multiplexing (MUX): the multiplexing function takes <x>^B and <y>^l as input and outputs <z>^l, where z = y when x = 1 and z = 0 when x = 0; <z>^l denotes a shared value of l bits.
Lookup table (LUT): given a table T with 2^n entries (the superscript n denotes a power of two) of l bits each, the lookup-table function takes <x>^n as input (i.e., data of length n bits) and outputs <z>^l, where z = T[x].
Multiplication (Mult): the multiplication function takes fixed-point values <x>^l and <y>^l with s fraction bits as input and outputs the fixed-point value <z>^{l+s} with 2s fraction bits, z = x·y.
Truncation (TR): the truncation function takes <x>^l as input and outputs <y>^{l-b}, y = x >> b, where b denotes the number of bits shifted to the right.
Numerical splitting (DigDec): the numerical-splitting function takes <x>^l as input and outputs a set of values {<z_i>^d}_{i in [c]} that form a partition of the input x, i.e., x = z_{c-1} || ... || z_0, where c = l/d.
Division (Div): the division function takes an <x>^l with s_x fraction bits and a <y>^l with s_y fraction bits as input and outputs <z>^l with s_z fraction bits, z = x/y.
Most significant bit (MSB): the most-significant-bit function takes <x>^l as input and outputs one bit <y>^B, where y is the most significant bit of x; <y>^B is a value of length B bits, with B = 1.
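The interplay of Mult and TR (every fixed-point multiply doubles the number of fraction bits, and truncation restores them) can be checked with a small plaintext sketch; this is fixed-point emulation only, not a 2PC protocol, and s = 12 is an illustrative choice:

```python
S = 12  # number of fraction bits s


def to_fix(v, s=S):
    """Encode a real value as a fixed-point integer with s fraction bits."""
    return round(v * (1 << s))


def mult(x, y):
    """Plaintext analogue of Mult: product has 2s fraction bits."""
    return x * y


def trunc(z, b=S):
    """Plaintext analogue of TR: shift right by b bits."""
    return z >> b


a, b = to_fix(1.5), to_fix(0.25)
prod = trunc(mult(a, b))          # back to s fraction bits
assert prod == to_fix(0.375)      # 1.5 * 0.25 = 0.375, exactly representable
```

This is why steps such as (3a-3-4) and (3a-6) always pair Mult with TR: without the truncation, the fraction-bit width would grow with every multiplication.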
In the embodiment of the invention, the secure computation of the exponential function is implemented first, as it is the basis for computing Logistic and Tanh. The exponential function can be expressed as follows:
Exp(x) = nExp(x) when x < 0, and Exp(x) = 1/nExp(-x) when x >= 0,
where nExp(x) denotes e^x restricted to non-positive arguments x. In this embodiment, the algorithmic process implementing the nExp function is shown in Table 1; next, the algorithm for the Exp function is implemented by means of the nExp function, and its process is shown in Table 2.
TABLE 1
TABLE 2
That is, in step 1 of the nExp function, both parties locally multiply x by -1 to compute the positive value px = -x. Considering that the communication bottleneck of the LUT protocol grows linearly with the size of the lookup table, px is split in step 2, using the numerical-splitting protocol, into c substrings {a_j}_{j in [c]} of length d. Next, in step 3, the LUT protocol is used for each j in [c] to compute Exp(-a_j·2^{dj-s}). In steps 4 and 5, the product of {Exp(-a_j·2^{dj-s})}_{j in [c]} is computed and the bit width is extended from s+2 to l bits.
The exponential function shown in Table 2 uses Algorithm 1 as a building block to obtain a general exponential algorithm. In step 1, the sign bit msb of x is first computed. In step 2, nx = -|x| is computed using msb. In steps 3 and 4, the negative and positive exponentials of x are computed by calling the nExp function and the division function. After the bit extension of step 5, msb is used again in step 6 to select the correctly signed result. The role of calling division in this algorithm is to reduce the number of nExp calls from 2 to 1.
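The digit-decomposition idea behind nExp (Table 1) can be emulated in plaintext as follows. The parameters l = 16, d = 4, s = 8 are illustrative choices, not values fixed by the invention:

```python
import math

L_BITS, D, S = 16, 4, 8   # input bits l, digit width d, fraction bits s
C = L_BITS // D           # number of digits c = l/d

# Per-digit tables: T_j[a] = e^(-a * 2^(d*j - s)), with S fraction bits.
TABLES = [[round(math.exp(-a * 2.0 ** (D * j - S)) * (1 << S))
           for a in range(1 << D)] for j in range(C)]


def n_exp(px):
    """Approximate e^(-px / 2^S) for nonnegative fixed-point px
    as a product of c small table lookups."""
    digits = [(px >> (D * j)) & ((1 << D) - 1) for j in range(C)]
    acc = 1 << S                          # fixed-point 1.0
    for j, a in enumerate(digits):
        acc = (acc * TABLES[j][a]) >> S   # multiply, then truncate
    return acc


# e^-1 is about 0.3679; the fixed-point result should be close.
approx = n_exp(1 << S) / (1 << S)
assert abs(approx - math.exp(-1)) < 0.01
```

Splitting the l-bit input into c digits of d bits keeps every table at 2^d entries, instead of one table of 2^l entries, which is exactly the communication saving the text describes.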
Next, the embodiment of the invention implements Logistic and Tanh based on the exponential function:
1) Logistic function: the Logistic function plays an important role as an activation function of neural networks, Logistic(x) = 1/(1 + e^{-x}). Based on the idea of the negative exponential algorithm, it can be rewritten as:
Logistic(x) = 1/(1 + nExp(-|x|)) when x >= 0, and Logistic(x) = nExp(-|x|)/(1 + nExp(-|x|)) when x < 0.
in the embodiment of the invention, the specific algorithm process of the Logistic algorithm is shown in table 3:
TABLE 3
In steps 1 and 2 of Algorithm 3 shown in Table 3, the sign bit msb and the negative value nx = -|x| of x are first computed. In step 3, the negative exponential nxe = nExp(nx) of x is computed. In steps 4-6, nxs = 1/(1 + nxe) and the selection value ch are computed and multiplied together; when x >= 0, ch = 1, and when x < 0, ch = nExp(-|x|). In step 7, the result is bit-extended to l bits. When x < 0, the reason for computing Logistic(x) as nExp(-|x|)/(1 + nExp(-|x|)) rather than directly as 1/(1 + Exp(-x)) is that the former has higher accuracy.
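In plaintext, the Logistic rewrite of Algorithm 3 corresponds to the following sketch (floating point stands in for the fixed-point shares; msb, nxe, nxs, and ch mirror the protocol variables):

```python
import math


def logistic_stable(x):
    """Logistic via the negative-exponential rewrite:
    compute only on -|x|, then correct the sign with msb."""
    msb = 1 if x < 0 else 0
    nxe = math.exp(-abs(x))       # nExp: argument is always non-positive
    nxs = 1.0 / (1.0 + nxe)       # Logistic(|x|); no large exponential
    ch = nxe if msb else 1.0      # selection value ch
    return ch * nxs               # Logistic(x)


# Agrees with the direct definition on both signs.
for v in (-5.0, -0.5, 0.0, 0.5, 5.0):
    assert abs(logistic_stable(v) - 1.0 / (1.0 + math.exp(-v))) < 1e-12
```

Because the exponential is only ever evaluated at non-positive arguments, all intermediate values stay in [0, 1], which is what makes the small fixed-point width s+2 workable.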
2) Tanh function: the Tanh function is also a basic mathematical function, Tanh(x) = (1 - e^{-2x})/(1 + e^{-2x}). Based on the negative exponential algorithm, it is rewritten as:
Tanh(x) = (1 - nExp(-2|x|))/(1 + nExp(-2|x|)) when x >= 0, and the negation of this value when x < 0.
in the embodiment of the invention, the specific algorithm process of the Tanh algorithm is shown in table 4:
TABLE 4
That is, in steps 1 and 2 of Algorithm 4 shown in Table 4, the sign bit msb and the negative value nx = -|x| of x are first computed. In step 3, n2x = 2×nx is computed. In steps 4-6, nxt = (1 - n2xe)/(1 + n2xe) is computed by division, where n2xe = nExp(n2x). In steps 7-9, msb is used to select the correctly signed result. In step 10, the result is bit-extended to l bits.
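The Tanh rewrite of Algorithm 4 admits the same kind of plaintext sketch (floating point stands in for the fixed-point shares):

```python
import math


def tanh_stable(x):
    """Tanh via the negative-exponential rewrite:
    compute Tanh(|x|), then flip the sign with msb."""
    msb = 1 if x < 0 else 0
    n2xe = math.exp(-2.0 * abs(x))        # nExp of -2|x|
    nxt = (1.0 - n2xe) / (1.0 + n2xe)     # Tanh(|x|)
    t2 = -2.0 * nxt if msb else 0.0       # MUX on the sign bit
    return nxt + t2                       # Tanh(x)


# Agrees with the reference implementation on both signs.
for v in (-3.0, -0.1, 0.0, 0.1, 3.0):
    assert abs(tanh_stable(v) - math.tanh(v)) < 1e-12
```

As with Logistic, the exponential is only evaluated at non-positive arguments, and the sign correction nxt - 2·msb·nxt = ±Tanh(|x|) costs one multiplexer rather than a second exponential.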
In addition, in order to further improve performance, the embodiment of the invention optimizes the following two aspects:
(1) Reducing the bit width in the LUT protocol: in the LUT used by nExp, discarding the last few bits of each table entry significantly reduces the communication cost at the price of a small ULP error. Specifically, when executing the LUT protocol, the sender deletes the m least significant bits of each entry in the lookup table, shortening each entry from l bits to l-m bits; after the LUT protocol has been executed, the receiver pads these entries back to l bits with random bits, where m is a preset value set according to the actual application environment.
(2) Approximate truncation: the wrap (carry) term of the truncation is ignored. In other words, when calling TR, the algorithm performs only the local shift operation; this introduces a small accuracy error with probability 1/2, but saves substantial communication cost.
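The LUT bit-width optimization can be illustrated in plaintext; l = 16 and m = 4 are illustrative values, not ones fixed by the invention:

```python
import secrets

L_BITS, M = 16, 4   # entry width l, dropped low bits m


def sender_shrink(entry):
    """Sender side: drop the m least significant bits of a table entry,
    so only l-m bits go over the wire."""
    return entry >> M


def receiver_refill(short_entry):
    """Receiver side: pad the entry back to l bits with random low bits."""
    return (short_entry << M) | secrets.randbelow(1 << M)


entry = 0b1010110011110101
restored = receiver_refill(sender_shrink(entry))
assert restored >> M == entry >> M          # the high l-m bits survive intact
assert abs(restored - entry) < (1 << M)     # error is bounded by 2^m ULPs
```

The communication per entry drops from l to l-m bits, while the reconstruction error is confined to the m random low bits, matching the small-ULP-error claim above.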
The invention relates to a privacy protection method for efficient two-party secure computation of mathematical functions, including the exponential, Logistic, and Tanh functions. The designed algorithms are based on silent OT extension and are optimized in a customized manner. Experiments show that, compared with the prior art, SecMath improves communication efficiency by up to 7.6 times and computation efficiency by up to 2.4 times. It is worth emphasizing that the protocols in SecMath can be used directly in machine learning for privacy protection, with considerable advantages over previous work.
As a possible implementation, the privacy protection method based on secure two-party computation of sigmoid functions provided by the embodiment of the invention, in a federated learning system comprising a server P_1 and a plurality of clients P_0, performs the following steps:
The server P_1, in response to a prediction task request initiated by a client P_0, issues the neural network model matching the request to the client P_0; the network layers of the neural network model are of two types: linear layers and nonlinear layers.
Based on the data to be predicted at its local end, the client P_0 completes the forward inference of the neural network model layer by layer through data interaction with the server P_1, comprising the following steps:
Step one, for the first layer of the neural network model, the client P_0 additively secret-shares the input value x of length l bits (corresponding to the input data of the neural network model) and sends one share to the server P_1.
If the current layer is not the first layer, P_0 and P_1 receive the shared value <x>^l output by the previous layer. At this point, P_0 and P_1 jointly hold <x>^l; all variables in the subsequent steps are likewise held jointly by P_0 and P_1.
It should be noted that, in the implementation, all variables in angle brackets are stored in secret-shared form, and the superscript of a variable indicates its bit width. For example, <px>^l means that the variable px is stored in secret-shared form and is l bits long; the part held by P_0 is denoted <px>_0, and the part held by P_1 is denoted <px>_1.
Step two, forward inference is executed based on the shares of the current layer.
I) If the current layer is a linear layer, the forward inference comprises:
P_0 sends <x>_0 - r to P_1, so that P_1 can extract the masked input data x - r, where <x>_0 denotes the client's share of this layer's input data, N = 2^l denotes the size of the ring Z_N, and r is a random number selected from the ring Z_N, held jointly by the two parties (server P_1 and client P_0) in secret-shared form;
P_0, based on its local intermediate model parameter W' = W - b, sets its output of the current layer to y_0 = W'·r + a_0 and takes y_0 as P_0's input data for the next layer;
P_1 reconstructs the masked data x - r of the current layer and, based on its local model parameters, computes its output of the current layer y_1 = W·(x - r) + a_1, taking y_1 as the server's input data for the next layer;
where W denotes the model parameters of the neural network model, and y_0 and y_1 are computed as follows:
P_1 sends W - b to P_0, and the client locally computes the parameter W' = W - b;
the server locally computes y_1 = W·(x - r) + a_1;
a_1 denotes the product-sharing parameter of the current linear layer, with a_1 = ab - a_0, where a denotes the random number selected by P_0 (the mask r above) and b denotes the random number selected by P_1.
II) If the current layer is a nonlinear layer, the forward inference comprises:
detecting the activation function of the current layer; if it is the Logistic function, step (3a) is executed, and if it is the Tanh function, step (3b) is executed;
(3a) The current layer performs the computation of the Logistic function, comprising the steps of:
(3a-1) P_0 and P_1 each call the most-significant-bit function MSB with input <x>^l to obtain its sign bit <msb>^B;
(3a-2) P_0 and P_1 each call the multiplexing function MUX to compute <nx>^l = -<x>^l + <msb>^B · 2<x>^l;
(3a-3) P_0 and P_1 each call the negative exponential function nExp with input <nx>^l to obtain <nxe>^{s+2}:
(3a-3-1) P_0 and P_1 locally compute <px>^l = -<nx>^l;
(3a-3-2) P_0 and P_1 call the numerical-splitting function DigDec with <px>^l as input, obtaining c split values of length d, {<a_j>^d}_{j in [c]}, where c = l/d and <px>^l = <a_{c-1}>^d || ... || <a_0>^d;
(3a-3-3) For each j in [c], P_0 and P_1 call the lookup-table function LUT with input <a_j>^d, obtaining <T_j[a_j]>^{s+2}, where T_j[a_j] denotes the a_j-th entry of the lookup table T_j, and T_j[a_j] = Exp(-2^{dj-s}·a_j);
(3a-3-4) P_0 and P_1 call the multiplication function Mult and the truncation function TR to compute the product of all the above entries, obtaining the result <nxe>^{s+2};
(3a-4) P_0 and P_1 call the division function Div to compute <nxs>^{s+2} = 1/(1 + <nxe>^{s+2});
(3a-5) P_0 and P_1 call the multiplexing function MUX to compute <ch>^{s+2} = 1 + <msb>^B·(<nxe>^{s+2} - 1);
(3a-6) P_0 and P_1 call the multiplication function Mult and the truncation function TR to compute <xs>^{s+2} = <ch>^{s+2}·<nxs>^{s+2};
(3a-7) P_0 and P_1 call the bit-extension function ZExt with input <xs>^{s+2} to obtain <y>^l.
(3b) The current layer performs the computation of the Tanh function, comprising the steps of:
(3b-1) P_0 and P_1 each call the most-significant-bit function MSB with input <x>^l to obtain its sign bit <msb>^B;
(3b-2) P_0 and P_1 each call the multiplexing function MUX to compute <nx>^l = -<x>^l + <msb>^B · 2<x>^l;
(3b-3) P_0 and P_1 locally compute <n2x>^l = 2×<nx>^l;
(3b-4) P_0 and P_1 each call the negative exponential function nExp with input <n2x>^l to obtain <n2xe>^{s+2}; the specific sub-steps are the same as in the computation of the Logistic function;
(3b-5) P_0 and P_1 locally compute <r_1>^{s+2} = 1 - <n2xe>^{s+2} and <r_2>^{s+2} = 1 + <n2xe>^{s+2};
(3b-6) P_0 and P_1 call the division function Div to compute <nxt>^{s+2} = <r_1>^{s+2} / <r_2>^{s+2};
(3b-7) P_0 and P_1 locally compute <t_1>^{s+2} = -2×<nxt>^{s+2};
(3b-8) P_0 and P_1 call the multiplexing function MUX to compute <t_2>^{s+2} = <msb>^B·<t_1>^{s+2};
(3b-9) P_0 and P_1 locally compute <xt>^{s+2} = <nxt>^{s+2} + <t_2>^{s+2};
(3b-10) P_0 and P_1 call the bit-extension function ZExt with input <xt>^{s+2} to obtain <y>^l.
Step three, detecting whether the current layer is the last layer of the neural network model; if so, P_1 sends its held share of <y>^l to P_0, and P_0 reconstructs y (the output of the last layer) from the shares held by the two parties, obtaining the forward inference result of the neural network model.
If the current layer is not the last layer of the neural network model, P_0 and P_1 take the current result <y>^l as input and continue the computation of the next layer, i.e., return to step two.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
What has been described above is merely some embodiments of the present invention. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit of the invention.

Claims (3)

1. A privacy protection method based on a secure two-party computed S-shaped function, characterized in that a federated learning system comprising a server P1 and a plurality of clients P0 performs the following steps:
the server P1, in response to a prediction task request initiated by a client P0, issues the neural network model matching the prediction task request to the client P0; the network layers of the neural network model are of two types: linear layers and nonlinear layers;
the client P0, based on the data to be predicted held locally, completes the layer-by-layer forward inference of the neural network model through data interaction with the server P1 to obtain the prediction result of the data to be predicted, specifically comprising the following steps:
step 1, client P 0 Data preprocessing is carried out on the data to be predicted so as to match with the input of the neural network model;
step 2, for the first layer of the neural network model, client P 0 Adding and sharing the input data of the neural network model, and sending the sharing value to the server P 1 Defining l to represent the input data length of the current layer;
step 3, forward reasoning operation is executed based on the sharing value of the current layer;
i) If the current layer is a linear layer, the forward reasoning operation includes:
client P0 sendsTo P 1 So that the server P 1 Extracting input data->Wherein (1)>Representing client P 0 N represents the input data of the layer of the ring->R is in the ring +.>A random number selected from the group and is taken as a server P in a secret sharing mode 1 And client P 0 Holding together;
the client P0, based on the model intermediate parameter held locally, computes its share y0 of the output of the current layer and takes y0 as the client P0's input data for the next layer;
the server P1 reconstructs the masked data of the current layer and, based on the model intermediate parameter held locally, computes its share y1 of the output of the current layer, taking y1 as the server P1's input data for the next layer;
where W denotes the neural network model parameters held by the server P1; the model intermediate parameters of the two parties are computed as follows:
the server P1 transmits W - b to the client P0, and the client P0 locally computes its model intermediate parameter from W - b;
the server P1 locally computes its own model intermediate parameter;
a1 denotes the product-sharing parameter of the current linear layer, with a1 = ab - a0, where a is the random number selected by the client P0 and b is the random number selected by the server P1;
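The exchange of W - b together with the product-sharing parameter a1 = ab - a0 follows the familiar pattern of Beaver-style masked multiplication. As intuition only, not the patent's exact linear-layer protocol, here is a scalar Beaver-triple multiplication over Z_{2^l}; the relation x·y = e·f + e·b + f·a + a·b with public e = x - a, f = y - b is what makes the shares recombine correctly:

```python
import secrets

L = 32
MOD = 1 << L

def rand():
    return secrets.randbelow(MOD)

# Offline phase: a dealer deals additive shares of a triple (a, b, c), c = a*b.
a, b = rand(), rand()
c = (a * b) % MOD
a0, b0, c0 = rand(), rand(), rand()
a1, b1, c1 = (a - a0) % MOD, (b - b0) % MOD, (c - c0) % MOD

def beaver_mul(x0, x1, y0, y1):
    """One secure multiplication: inputs are additive shares of x and y."""
    # Each party publishes x_i - a_i and y_i - b_i; the masked values
    # e = x - a and f = y - b become public but leak nothing about x or y.
    e = (x0 - a0 + x1 - a1) % MOD
    f = (y0 - b0 + y1 - b1) % MOD
    # Shares of x*y = e*f + e*b + f*a + c (only one party adds the public e*f).
    z0 = (e * b0 + f * a0 + c0) % MOD
    z1 = (e * f + e * b1 + f * a1 + c1) % MOD
    return z0, z1

x, y = 1234, 5678
x0 = rand(); x1 = (x - x0) % MOD
y0 = rand(); y1 = (y - y0) % MOD
z0, z1 = beaver_mul(x0, x1, y0, y1)
assert (z0 + z1) % MOD == (x * y) % MOD
```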
II) if the current layer is a nonlinear layer, the forward inference comprises:
detecting the execution function of the current layer: if it is a Logistic function, execute step (3a); if it is a Tanh function, execute step (3b);
(3a) the current layer performs the computation of the Logistic function, comprising the following steps:
(3a-1) the client P0 and the server P1 each call the most-significant-bit function with input <x>_l to obtain its sign bit <msb>_B, where <x>_l denotes an l-bit share value;
(3a-2) the client P0 and the server P1 each call the multiplexing function to compute <nx>_l = -<x>_l + <msb>_B · 2<x>_l;
(3a-3) the client P0 and the server P1 each call the negative exponential function with input <nx>_l to obtain <nxe>_{s+2}, where s denotes the number of bits set in the negative exponential function:
(3a-3-1) the client P0 and the server P1 each locally compute the share value <px>_l = -<nx>_l;
(3a-3-2) the server P1 calls the numerical partitioning function with <px>_l as input to obtain c segment values of length d, {<a_j>_d}_{j∈[c]};
(3a-3-3) for each j ∈ [c], the client P0 and the server P1 each call the lookup-table function with input <a_j>_d to obtain <T_j[a_j]>_{s+2}, where T_j[a_j] denotes the a_j-th entry of lookup table T_j, and T_j[a_j] = Exp(-2^{dj-s} · a_j);
(3a-3-4) the client P0 and the server P1 each call the multiplication function and the truncation function to compute the cumulative product of all entries, obtaining the result <nxe>_{s+2};
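Steps (3a-3-2) through (3a-3-4) work because the exponential factors over the d-bit chunks of the fixed-point input: if px = Σ_j 2^{dj}·a_j encodes x with s fractional bits, then Π_j exp(-2^{dj-s}·a_j) = exp(-px/2^s) = e^{-x}. A plaintext sketch (the parameter values s, d, c here are illustrative, not the patent's):

```python
import math

s = 12   # fixed-point fractional bits (illustrative)
d = 4    # chunk bit width (illustrative)
c = 4    # number of chunks, covering c*d = 16 input bits

def exp_neg_via_tables(px):
    """Plaintext analogue of (3a-3-2)..(3a-3-4): e^{-px/2^s} as a product
    of per-chunk table lookups T_j[a_j] = exp(-2^{d*j-s} * a_j)."""
    total = 1.0
    for j in range(c):
        a_j = (px >> (d * j)) & ((1 << d) - 1)          # j-th d-bit chunk of px
        total *= math.exp(-(2.0 ** (d * j - s)) * a_j)  # table entry T_j[a_j]
    return total

x = 1.5
px = round(x * 2 ** s)        # fixed-point encoding of x
assert abs(exp_neg_via_tables(px) - math.exp(-x)) < 1e-9
```

In the protocol each lookup and each pairwise product is itself a secure sub-protocol; the sketch only shows why the decomposition is exact.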
(3a-4) the client P0 and the server P1 each call the division function to compute <nxs>_{s+2} = 1 / (1 + <nxe>_{s+2});
(3a-5) the client P0 and the server P1 each call the multiplexing function to compute <ch>_{s+2} = 1 + <msb>_B · (<nxe>_{s+2} - 1);
(3a-6) the client P0 and the server P1 each call the multiplication function and the truncation function to compute <xs>_{s+2} = <ch>_{s+2} · <nxs>_{s+2};
(3a-7) the client P0 and the server P1 each call the bit-extension function with input <xs>_{s+2} to obtain the share value <y>_l;
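In plaintext, steps (3a-1)..(3a-7) compose to exactly the Logistic (sigmoid) function: the sign trick evaluates the exponential only on -|x|, which keeps the lookup-table input bounded, and the multiplexed factor ch restores the result for negative x via sigmoid(x) = e^{-|x|}/(1 + e^{-|x|}). A sketch of the dataflow (every value here would be secret-shared in the real protocol):

```python
import math

def logistic_pipeline(x):
    """Plaintext trace of steps (3a-1)..(3a-7); in the protocol each value
    below is secret-shared and each line maps to a two-party sub-protocol."""
    msb = 1 if x < 0 else 0      # (3a-1) sign bit of x
    nx = -x + msb * 2 * x        # (3a-2) nx = -|x|
    nxe = math.exp(nx)           # (3a-3) e^{-|x|}, done via table lookups
    nxs = 1 / (1 + nxe)          # (3a-4) division
    ch = 1 + msb * (nxe - 1)     # (3a-5) selects 1 (x >= 0) or e^{-|x|} (x < 0)
    xs = ch * nxs                # (3a-6) product
    return xs                    # (3a-7) extended back to l bits

for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert abs(logistic_pipeline(x) - 1 / (1 + math.exp(-x))) < 1e-12
```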
(3b) the current layer performs the computation of the Tanh function, comprising the following steps:
(3b-1) the client P0 and the server P1 each call the most-significant-bit function with input <x>_l to obtain its sign bit <msb>_B;
(3b-2) the client P0 and the server P1 each call the multiplexing function to compute <nx>_l = -<x>_l + <msb>_B · 2<x>_l;
(3b-3) the client P0 and the server P1 each locally compute <n2x>_l = 2 × <nx>_l;
(3b-4) the client P0 and the server P1 each call the negative exponential function with input <n2x>_l to obtain <n2xe>_{s+2};
(3b-5) the client P0 and the server P1 each locally compute <r1>_{s+2} = 1 - <n2xe>_{s+2} and <r2>_{s+2} = 1 + <n2xe>_{s+2};
(3b-6) the client P0 and the server P1 each call the division function to compute <nxt>_{s+2} = <r1>_{s+2} / <r2>_{s+2};
(3b-7) the client P0 and the server P1 each locally compute <t1>_{s+2} = -2 × <nxt>_{s+2};
(3b-8) the client P0 and the server P1 each call the multiplexing function to compute <t2>_{s+2} = <msb>_B · <t1>_{s+2};
(3b-9) the client P0 and the server P1 each locally compute <xt>_{s+2} = <nxt>_{s+2} + <t2>_{s+2};
(3b-10) the client P0 and the server P1 each call the bit-extension function with input <xt>_{s+2} to obtain the share value <y>_l;
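Analogously, steps (3b-1)..(3b-10) compose in plaintext to exactly Tanh: the division yields tanh(|x|) = (1 - e^{-2|x|})/(1 + e^{-2|x|}), and the multiplexed correction xt = nxt + msb·(-2·nxt) flips the sign when x < 0, since tanh is odd. A sketch of the dataflow (all values would be secret-shared in the protocol):

```python
import math

def tanh_pipeline(x):
    """Plaintext trace of steps (3b-1)..(3b-10); in the protocol each value
    is secret-shared and each line maps to a two-party sub-protocol."""
    msb = 1 if x < 0 else 0      # (3b-1) sign bit of x
    nx = -x + msb * 2 * x        # (3b-2) nx = -|x|
    n2x = 2 * nx                 # (3b-3)
    n2xe = math.exp(n2x)         # (3b-4) e^{-2|x|}
    r1 = 1 - n2xe                # (3b-5)
    r2 = 1 + n2xe                # (3b-5)
    nxt = r1 / r2                # (3b-6) tanh(|x|)
    t1 = -2 * nxt                # (3b-7)
    t2 = msb * t1                # (3b-8)
    xt = nxt + t2                # (3b-9) restores the sign: tanh(x)
    return xt                    # (3b-10) extended back to l bits

for x in (-2.0, -0.1, 0.0, 0.1, 2.0):
    assert abs(tanh_pipeline(x) - math.tanh(x)) < 1e-12
```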
Step 3, detect whether the current layer is the last layer of the neural network model; if so, the server P1 sends its locally held share of <y>_l to the client P0;
the client P0 reconstructs y from the shares held by the two parties, obtains the forward inference result of the neural network model, and thereby obtains the prediction result of the currently input data to be predicted;
if the current layer is not the last layer of the neural network model, the client P0 and the server P1 take the current result <y>_l as the input of the next layer and execute step 2, continuing the forward inference of the next layer.
2. The method of claim 1, wherein, when the lookup-table protocol is executed, the sender deletes the m least significant bits of each lookup-table entry, shortening each entry from l bits to l - m bits; after the lookup-table protocol is executed, the receiver pads each entry back to l bits with random bits, where m is a preset value.
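The effect of claim 2's entry truncation can be sketched in plaintext (the bit widths l and m here are illustrative): dropping the m least significant bits and refilling them with random bits perturbs each entry by less than 2^m, while shrinking what the lookup-table protocol must transfer.

```python
import secrets

l = 32   # entry width in bits (illustrative)
m = 8    # dropped least-significant bits (the preset value)

def truncate_entry(entry):
    """Sender side: keep only the top l-m bits of an l-bit table entry."""
    return entry >> m

def refill_entry(truncated):
    """Receiver side: pad back to l bits with random low-order bits."""
    return (truncated << m) | secrets.randbelow(1 << m)

entry = 0xDEADBEEF
restored = refill_entry(truncate_entry(entry))
# The restored entry differs from the original only in its m low bits,
# so the absolute error is below 2^m.
assert abs(restored - entry) < (1 << m)
```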
3. The method of claim 1, wherein, when the function involving a shift operation is invoked, the shift operation is performed only locally at the calling party.
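Claim 3's local shift is consistent with the well-known local-truncation technique for additive shares (as popularized by SecureML); whether the patent uses exactly this variant is an assumption. Each party shifts only its own share, with no interaction, and reconstruction recovers x >> S up to a one-unit error:

```python
import secrets

L = 32            # ring bit width (illustrative)
S = 12            # number of fixed-point bits to shift away (illustrative)
MOD = 1 << L

def local_trunc_shares(x):
    """SecureML-style local truncation sketch: both parties right-shift
    locally. The reconstructed result is (x >> S) or (x >> S) + 1,
    provided the shares do not wrap around the ring."""
    # For a deterministic demo we sample P0's share above x, which avoids the
    # rare wrap-around case (probability about x / 2^L for random shares).
    x0 = x + 1 + secrets.randbelow(MOD - x - 1)
    x1 = (x - x0) % MOD
    t0 = x0 >> S                              # P0 shifts locally
    t1 = (MOD - ((MOD - x1) >> S)) % MOD      # P1 shifts locally (negated form)
    return (t0 + t1) % MOD

x = 5_000_000
t = local_trunc_shares(x)
assert t - (x >> S) in (0, 1)
```

Avoiding interaction here is the point of the claim: a shift done inside the protocol would otherwise cost a full truncation sub-protocol per fixed-point multiplication.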
CN202310470235.6A 2023-04-27 2023-04-27 Privacy protection method based on safe two-party calculation S-shaped function Pending CN116541878A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310470235.6A CN116541878A (en) 2023-04-27 2023-04-27 Privacy protection method based on safe two-party calculation S-shaped function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310470235.6A CN116541878A (en) 2023-04-27 2023-04-27 Privacy protection method based on safe two-party calculation S-shaped function

Publications (1)

Publication Number Publication Date
CN116541878A true CN116541878A (en) 2023-08-04

Family

ID=87446335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310470235.6A Pending CN116541878A (en) 2023-04-27 2023-04-27 Privacy protection method based on safe two-party calculation S-shaped function

Country Status (1)

Country Link
CN (1) CN116541878A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117520970A (en) * 2024-01-05 2024-02-06 同盾科技有限公司 Symbol position determining method, device and system based on multiparty security calculation
CN117520970B (en) * 2024-01-05 2024-03-29 同盾科技有限公司 Symbol position determining method, device and system based on multiparty security calculation
CN117648999A (en) * 2024-01-30 2024-03-05 上海零数众合信息科技有限公司 Federal learning regression model loss function evaluation method and device and electronic equipment
CN117648999B (en) * 2024-01-30 2024-04-23 上海零数众合信息科技有限公司 Federal learning regression model loss function evaluation method and device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination