CN111475854A - Collaborative computing method and system for protecting data privacy of two parties - Google Patents


Info

Publication number
CN111475854A
CN111475854A (publication) · CN202010587170.XA (application) · CN111475854B (granted publication)
Authority
CN
China
Prior art keywords
matrix
elements
party
norm
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010587170.XA
Other languages
Chinese (zh)
Other versions
CN111475854B (en)
Inventor
张祺智
李漓春
殷山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010587170.XA priority Critical patent/CN111475854B/en
Publication of CN111475854A publication Critical patent/CN111475854A/en
Application granted granted Critical
Publication of CN111475854B publication Critical patent/CN111475854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Storage Device Security (AREA)

Abstract

The embodiments of the specification disclose a collaborative computing method and system for protecting the data privacy of two parties, applicable to multi-party model training. The first party holds a first private matrix and a private key; the second party holds a second private matrix. The two parties amplify the first private matrix and the second private matrix, respectively, to obtain a first input mapping matrix and a second input mapping matrix. The first party processes the first input mapping matrix to obtain a first ciphertext matrix and sends it to the second party. The second party processes the first ciphertext matrix to obtain a second ciphertext matrix and sends it to the first party. The first party computes a matrix to be approximated based on the second ciphertext matrix and the private key, and approximates that matrix to obtain a first output mapping matrix. The first party then performs reduction processing on the first output mapping matrix to obtain a first output matrix, which serves as a first fragment of the product of the first private matrix and the second private matrix. The second party obtains a second output matrix as a second fragment of that product.

Description

Collaborative computing method and system for protecting data privacy of two parties
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to a collaborative computing method and system for protecting privacy of data of two parties.
Background
In some scenarios, the private data of multiple partners must be combined to complete a computing task, for example, joining the sample data of multiple data providers for distributed model training. To protect each party's data privacy, the private data of any partner can be split into multiple fragments, with each partner holding one fragment. During multi-party joint computation, intermediate results based on the parties' private data (some of which would risk privacy leakage if disclosed, and can therefore themselves be regarded as private data) can likewise be stored across the parties in fragment form. This computing paradigm for protecting data privacy is called secret sharing; its core idea is to distribute the secret inputs/outputs among multiple partners in the form of shares (fragments).
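As a minimal illustration of this idea (a sketch, not the patent's protocol), additive secret sharing over Z/2^N Z can be written in a few lines; the names `share`/`reconstruct` and the choice N = 64 are illustrative assumptions:

```python
import secrets

MOD = 2 ** 64   # the group Z/2^N Z with N = 64 (one machine word per element)

def share(secret: int, n: int) -> list:
    """Split a secret into n additive fragments whose group sum is the secret."""
    frags = [secrets.randbelow(MOD) for _ in range(n - 1)]
    # the last fragment forces the group sum of all fragments to equal the secret
    frags.append((secret - sum(frags)) % MOD)
    return frags

def reconstruct(frags) -> int:
    """Group addition of all fragments recovers the secret."""
    return sum(frags) % MOD

parts = share(123456789, 3)
assert reconstruct(parts) == 123456789   # all fragments together recover the secret
```

Any n - 1 of the fragments are uniformly random and reveal nothing about the secret; only the group sum of all n fragments does.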
It is currently desirable to provide a scheme for computing the product of two-party privacy matrices in a secret sharing manner.
Disclosure of Invention
One embodiment of the present specification provides a collaborative computing method for protecting the data privacy of two parties, where a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, the elements of the first private matrix and of the second private matrix belong to an initial quotient group, the elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group. The method is performed by the computing device of the first party and comprises: promoting the elements of the first private matrix to the target quotient group and amplifying their values by a factor of p to obtain a first input mapping matrix; obtaining a first small-norm matrix, whose norm is smaller than a first threshold and whose elements belong to the target quotient group; performing a first operation on the first input mapping matrix, the first small-norm matrix, and a public key matched with the private key to obtain a first ciphertext matrix; sending the first ciphertext matrix to the computing device of the second party so that the computing device of the second party performs a second operation on the first ciphertext matrix, a second input mapping matrix, a second small-norm matrix, a second output mapping matrix, and the public key to obtain a second ciphertext matrix, where the elements of the second input mapping matrix belong to the target quotient group and their values are amplified by a factor of p relative to the values of the elements of the second private matrix, the norm of the second small-norm matrix is smaller than a second threshold and its elements belong to the target quotient group, and the second output mapping matrix is stored in the computing device of the second party, its elements belong to the target quotient group, and they still belong to the target quotient group after their values are reduced by a factor of p^2; receiving the second ciphertext matrix from the computing device of the second party; performing a third operation on the second ciphertext matrix and the private key to obtain a matrix to be approximated; determining the matrix in a target matrix space that is closest to the matrix to be approximated as a first output mapping matrix, where the elements of any matrix in the target matrix space belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p^2; and reducing the element values of the first output mapping matrix by a factor of p^2 to obtain a first output matrix, which serves as a first fragment of a target product, the target product being the product of the first private matrix and the second private matrix.
One embodiment of the present specification provides a collaborative computing system for protecting the data privacy of two parties, where a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, the elements of the first private matrix and of the second private matrix belong to an initial quotient group, the elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group. The system runs on the computing device of the first party and comprises: a first input mapping matrix obtaining module, configured to promote the elements of the first private matrix to the target quotient group and amplify their values by a factor of p to obtain a first input mapping matrix; a first small-norm matrix obtaining module, configured to obtain a first small-norm matrix whose norm is smaller than a first threshold and whose elements belong to the target quotient group; a first ciphertext matrix calculation module, configured to perform a first operation on the first input mapping matrix, the first small-norm matrix, and a public key matched with the private key to obtain a first ciphertext matrix; a first sending module, configured to send the first ciphertext matrix to the computing device of the second party so that the computing device of the second party performs a second operation on the first ciphertext matrix, a second input mapping matrix, a second small-norm matrix, a second output mapping matrix, and the public key to obtain a second ciphertext matrix, where the elements of the second input mapping matrix belong to the target quotient group and their values are amplified by a factor of p relative to the values of the elements of the second private matrix, the norm of the second small-norm matrix is smaller than a second threshold and its elements belong to the target quotient group, and the second output mapping matrix is stored in the computing device of the second party, its elements belong to the target quotient group, and they still belong to the target quotient group after their values are reduced by a factor of p^2; a first receiving module, configured to receive the second ciphertext matrix from the computing device of the second party; a to-be-approximated matrix calculation module, configured to perform a third operation on the second ciphertext matrix and the private key to obtain a matrix to be approximated; an approximation module, configured to determine the matrix in a target matrix space that is closest to the matrix to be approximated as a first output mapping matrix, where the elements of any matrix in the target matrix space belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p^2; and a first output matrix obtaining module, configured to reduce the element values of the first output mapping matrix by a factor of p^2 to obtain a first output matrix, which serves as a first fragment of a target product, the target product being the product of the first private matrix and the second private matrix.
One embodiment of the present specification provides a collaborative computing apparatus for protecting the data privacy of two parties, comprising a processor and a storage device, the storage device being configured to store instructions which, when executed by the processor, implement the collaborative computing method performed by the computing device of the first party according to any embodiment of the present specification.
One embodiment of the present specification provides a collaborative computing method for protecting the data privacy of two parties, where a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, the elements of the first private matrix and of the second private matrix belong to an initial quotient group, the elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group. The method is performed by the computing device of the second party and comprises: promoting the elements of the second private matrix to the target quotient group and amplifying their values by a factor of p to obtain a second input mapping matrix; obtaining a second small-norm matrix, whose norm is smaller than a second threshold and whose elements belong to the target quotient group; receiving a first ciphertext matrix from the computing device of the first party; obtaining a second output mapping matrix and a second output matrix serving as a second fragment of a target product, where the element values of the second output mapping matrix are amplified by a factor of p^2 relative to the element values of the second output matrix, and the elements of the second output mapping matrix and of the second output matrix belong to the target quotient group; performing a second operation on the first ciphertext matrix, the second input mapping matrix, the second small-norm matrix, the second output mapping matrix, and a public key matched with the private key to obtain a second ciphertext matrix; and sending the second ciphertext matrix to the computing device of the first party so that the computing device of the first party can obtain a first output matrix serving as a first fragment of the target product.
One embodiment of the present specification provides a collaborative computing system for protecting the data privacy of two parties, where a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, the elements of the first private matrix and of the second private matrix belong to an initial quotient group, the elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group. The system runs on the computing device of the second party and comprises: a second input mapping matrix obtaining module, configured to promote the elements of the second private matrix to the target quotient group and amplify their values by a factor of p to obtain a second input mapping matrix; a second small-norm matrix obtaining module, configured to obtain a second small-norm matrix whose norm is smaller than a second threshold and whose elements belong to the target quotient group; a second receiving module, configured to receive a first ciphertext matrix from the computing device of the first party; a second obtaining module, configured to obtain a second output mapping matrix and a second output matrix serving as a second fragment of a target product, where the element values of the second output mapping matrix are amplified by a factor of p^2 relative to the element values of the second output matrix, and the elements of the second output mapping matrix and of the second output matrix belong to the target quotient group; a second ciphertext matrix calculation module, configured to perform a second operation on the first ciphertext matrix, the second input mapping matrix, the second small-norm matrix, the second output mapping matrix, and a public key matched with the private key to obtain a second ciphertext matrix; and a second sending module, configured to send the second ciphertext matrix to the computing device of the first party so that the computing device of the first party obtains a first output matrix serving as a first fragment of the target product.
One embodiment of the present specification provides a collaborative computing apparatus for protecting the data privacy of two parties, comprising a processor and a storage device, the storage device being configured to store instructions which, when executed by the processor, implement the collaborative computing method performed by the computing device of the second party according to any embodiment of the present specification.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a computing system in accordance with some embodiments of the present description;
FIG. 2 is an exemplary flow diagram of a collaborative computing method for protecting the data privacy of two parties, according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a collaborative computing method for protecting the data privacy of two parties, according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram of two-party collaborative computation of M_1*M_2, according to some embodiments of the present description;
FIG. 5 is a schematic diagram of a matrix multiplication protocol involving a third-party server, according to some embodiments of the present description;
FIG. 6 is a block diagram of a collaborative computing system for protecting the data privacy of two parties, according to some embodiments of the present description;
FIG. 7 is a block diagram of a collaborative computing system for protecting the data privacy of two parties, according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the operations are not necessarily performed exactly in the order shown. Rather, individual steps may be processed in reverse order or concurrently. Meanwhile, other operations may be added to these processes, or one or more steps may be removed from them.
For the purpose of illustrating embodiments of the present specification, reference will first be made to the mathematical knowledge involved therein.
In mathematics, a group (hereinafter denoted G) is a set equipped with a binary operation, which is usually written with a multiplication symbol "*" (omitted when unambiguous) or an addition symbol "+". Note, however, that the group operation is not necessarily the ordinary multiplication or addition of the four arithmetic operations. The result of combining several elements through one or more applications of the binary operation may be referred to as a sum, and each such element may be referred to as a fragment of that sum.
The binary operation of a group satisfies: 1. closure: for any elements a, b in G, a*b is still in G; 2. associativity: for any elements a, b, c in G, (a*b)*c = a*(b*c); 3. identity: there exists an element e in G such that e*a = a*e = a for every a in G; 4. inverses: for any element a in G, there exists b in G such that a*b = b*a = e; a and b are then inverses of each other. When the operation is written "+", e may be called zero and the inverse may be called the negative, and for any elements a, b in G, a + (the inverse of b) may be written a - b. An abelian group has, in addition to the above four properties, commutativity: a + b = b + a for any elements a, b in the abelian group.
Further, the present specification involves a quotient group based on the abelian group of integers. Taking binary as an example, the quotient group may be written G := 2^(-k)Z / 2^(N-k)Z, where Z is the set of integers, k is a non-negative integer, N is a positive integer, and N - k > 0. When k = 0, the quotient group G reduces to Z / 2^N Z. The elements of G are non-negative binary fixed-point numbers with k fractional bits (no fractional part when k = 0) and N - k integer bits, so any fixed-point number in G can be stored in one N-bit storage unit of a binary computing device. The binary operations on G are group addition and group multiplication, which differ from the four arithmetic operations by a modular reduction. Group addition is mathematically (a + b) mod 2^(N-k), abbreviated a + b when unambiguous, where mod denotes reduction of the left value modulo the right value and the "+" inside (a + b) is ordinary addition. Group multiplication is mathematically (a * b) mod 2^(N-k), abbreviated a*b or ab when unambiguous, where the "*" inside (a * b) is ordinary multiplication. In addition, a - b can be viewed as an abbreviated notation for group subtraction, equivalent to (a - b) mod 2^(N-k), where the "-" inside (a - b) is ordinary subtraction.
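A small sketch may help make this fixed-point arithmetic concrete; the parameter choices N = 16, k = 4 are illustrative only:

```python
N, k = 16, 4                # one N-bit storage unit; k fractional bits
MODULUS = 2 ** (N - k)      # group arithmetic is taken modulo 2^(N-k)

def g_add(a: float, b: float) -> float:
    """Group addition: (a + b) mod 2^(N-k)."""
    return (a + b) % MODULUS

def g_sub(a: float, b: float) -> float:
    """Group subtraction: (a - b) mod 2^(N-k)."""
    return (a - b) % MODULUS

def g_mul(a: float, b: float) -> float:
    """Group multiplication: (a * b) mod 2^(N-k).
    Note: the ordinary product of two values with k fractional bits carries
    2k fractional bits, which is why values amplified by p are later reduced
    by p^2 after a product."""
    return (a * b) % MODULUS

# wrap-around example: the sum leaves the range [0, 2^(N-k)) and is reduced
assert g_add(4000.5, 100.25) == 4.75
```

Dyadic values like 4000.5 are represented exactly by Python floats at this precision, so the modular results above are exact.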
It should be noted that, because a computing device usually uses a fixed number of bits to store the values generated during computation, multi-party collaborative computation frequently uses the modular group addition, group multiplication, and group subtraction above. In this specification, unless otherwise specified, expressions involving these symbols should be read by default as group addition, group multiplication, and group subtraction rather than as the four arithmetic operations.
FIG. 1 is a schematic diagram of an application scenario of a computing system in accordance with some embodiments of the present description. As shown in FIG. 1, computing system 100 may include at least two (i.e., n is not less than 2) participating computing devices 110-1, 110-2, …, 110- (n-1), 110-n and a network 130.
The computing devices of any two parties in computing system 100 may cooperate to compute the product of the two party privacy matrices. For convenience of description, in the present specification, one party performing calculation by using its own private key in two-party cooperative calculation is referred to as a first party, and the other party is referred to as a second party. The first party may disclose the public key of the first party to the second party so that the second party may process the received data using the public key of the first party. Of course, the first party's computing device needs to generate a pair of public and private keys in advance.
The first party and the second party may obtain a fragment of the product of the two party privacy matrices, respectively. It should be understood that if either party knows the product of the private matrices of the two parties, the private matrix of the other party can be obtained through the inverse operation of the matrix multiplication, thereby revealing the data privacy. Therefore, any party participating in the cooperative computing only obtains one fragment of the product of the private matrixes of the two parties, and the data privacy of the two parties can be effectively protected.
For example only, in a distributed neural-network training scenario, N (N ≥ 2) data providers each hold feature data (represented as a matrix, hereinafter also called a feature value matrix) for the same sample ID. Denote the feature value matrices of the same sample ID held by the N data providers as X'_1, X'_2, ..., X'_N; the subscripts 1 to N serve as the numbers of the N data providers, and data associated with a given data provider appearing elsewhere in this specification (e.g., X, W) is labeled the same way. The N data providers want to combine all the feature data for model training to improve the model's effectiveness, but the feature data X'_i (i ≤ N) held by any data provider is private and must not be leaked.
To protect data privacy, for any sample ID, any data provider i can split its feature value matrix X'_i for that sample ID into N fragments, keep one fragment locally (denote it X'_ii), and distribute the remaining N - 1 fragments to the other N - 1 data providers; that is, the feature value matrix held by any data provider is secret-shared. Denoting the N fragments X'_i1, X'_i2, ..., X'_iN, they satisfy

X'_i1 + X'_i2 + ... + X'_iN = X'_i.

The first subscript (e.g., i) indicates which data provider the fragment was split from, and the second subscript (e.g., j) indicates which data provider the fragment was assigned to; other doubly subscripted fragments in this specification follow the same notation. Furthermore, any data provider i can splice and store its local fragment X'_ii together with the fragments X'_ji (j ≤ N, j ≠ i) for the same sample ID received from the other N - 1 data providers, obtaining the feature value fragment X_i corresponding to that sample ID, which satisfies

X_i = (X'_1i, X'_2i, ..., X'_Ni) (spliced).

It should be understood that, for any sample ID, if privacy protection were not a concern, X could be obtained by splicing X'_1, X'_2, ..., X'_N and then split into N fragments X_1, X_2, ..., X_N, each feature value fragment X_i (i ≤ N) being assigned to one data provider, where

X_1 + X_2 + ... + X_N = X.
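Under the reading above (additive splitting followed by column-wise splicing of fragments), the bookkeeping can be sketched as follows; the two-provider setup, the 2x2 matrices, and the modulus are illustrative assumptions:

```python
import secrets

MOD = 2 ** 32   # fragments live in Z/2^N Z with N = 32 (illustrative)

def share_matrix(X, n):
    """Additively split matrix X (list of rows) into n fragment matrices."""
    rows, cols = len(X), len(X[0])
    frags = [[[secrets.randbelow(MOD) for _ in range(cols)] for _ in range(rows)]
             for _ in range(n - 1)]
    last = [[(X[r][c] - sum(f[r][c] for f in frags)) % MOD for c in range(cols)]
            for r in range(rows)]
    return frags + [last]

def splice(mats):
    """Column-wise concatenation: join several matrices row by row."""
    return [sum((m[r] for m in mats), []) for r in range(len(mats[0]))]

# two providers, each holding a 2x2 feature value matrix for the same samples
Xp1 = [[1, 2], [3, 4]]
Xp2 = [[5, 6], [7, 8]]
frags1 = share_matrix(Xp1, 2)   # X'_11, X'_12
frags2 = share_matrix(Xp2, 2)   # X'_21, X'_22

# provider j splices the fragments it ends up holding: X_j = (X'_1j, X'_2j)
X1 = splice([frags1[0], frags2[0]])
X2 = splice([frags1[1], frags2[1]])

# the group sum of X_1 and X_2 recovers the splice of X'_1 and X'_2
X = splice([Xp1, Xp2])
recovered = [[(a + b) % MOD for a, b in zip(r1, r2)] for r1, r2 in zip(X1, X2)]
assert recovered == X
```

Because splitting is additive and splicing is column-wise, summing the providers' spliced fragments column block by column block reproduces the spliced joint matrix X.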
For supervised training, one of the N data providers (hereinafter the labeler) holds the label data. The label data is likewise split into N fragments: the labeler keeps one fragment locally, and each of the other N - 1 data providers holds one of the remaining fragments.
Training a neural network includes a forward-propagation process and a back-propagation process. The forward-propagation process computes, layer by layer, the product (denoted X*W) of an input matrix X and a parameter matrix W, where the input matrix of the input layer (i.e., the first layer of the network) is the feature value matrix. Each row of the input matrix X corresponds to the input of one sample at the corresponding layer of the network (denote it the i-th layer), and the number of columns of X is the number of nodes of the i-th layer. When samples are trained in batches, the number of rows of X may be the batch size (i.e., the number of samples in a batch). The number of rows of the parameter matrix is the number of nodes of the i-th layer, and the number of columns of the parameter matrix is the number of nodes of the (i+1)-th layer.
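A toy shape check may make the layer-wise dimensions concrete (all dimension values are illustrative):

```python
batch, n_i, n_i1 = 4, 3, 2   # batch size, width of layer i, width of layer i+1

def matmul(X, W):
    """Plain matrix product of X (batch x n_i) and W (n_i x n_i1)."""
    return [[sum(X[r][t] * W[t][c] for t in range(len(W)))
             for c in range(len(W[0]))] for r in range(len(X))]

X = [[1.0] * n_i for _ in range(batch)]    # one row per sample in the batch
W = [[0.5] * n_i1 for _ in range(n_i)]     # parameter matrix of layer i
Y = matmul(X, W)                           # becomes the input of layer i+1
assert (len(Y), len(Y[0])) == (batch, n_i1)
```

The output Y has one row per sample and one column per node of layer i+1, matching the dimension rules stated above.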
Since centralized training may lead to privacy leakage, each data provider in distributed training calculates only a portion of X × W (i.e., one slice) layer by layer. Each data provider trains a local model, the local models of the N data providers correspond to a joint equivalent model, the structures (such as the number of layers, the number of nodes of corresponding layers, the connection relation between nodes and the like) of the local models and the joint equivalent model are the same, and the input/output/parameters of any layer of the local model of any data provider are equivalent to the fragments of the input/output/parameters of the layer corresponding to the joint equivalent model.
For any layer, the N data providers can apply the collaborative computing method provided by the embodiments of this specification to forward propagation in distributed neural-network training. Specifically, for a given layer (hidden layer or output layer) of the neural network, expand the relation

(X_1 + X_2 + ... + X_N) * (W_1 + W_2 + ... + W_N) = sum over i of X_i*W_i + sum over i ≠ j of X_i*W_j.

The product terms obtained by the expansion fall into two types: X_i*W_i (N terms in total) and X_i*W_j (i ≠ j). Here X_i*W_i can be computed locally by data provider i and may be called a local product term. In a cross product term X_i*W_j, the two factors X_i and W_j are private matrices (secrets) of different data providers, so data providers i and j must compute it by secret sharing: without revealing either party's privacy, data provider i obtains a fragment <X_i*W_j>_i of X_i*W_j, and data provider j obtains a fragment <X_i*W_j>_j. For a given layer (hidden layer or output layer), any data provider i thus obtains an output fragment

Y_i = X_i*W_i + sum over j ≠ i of (<X_i*W_j>_i + <X_j*W_i>_i).
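The expansion into local and cross product terms can be checked numerically with a two-provider sketch; the toy 1x2 and 2x1 matrices and the helpers `matmul_mod`/`add_mod` are illustrative, not part of the patent's protocol:

```python
MOD = 2 ** 32   # fragments live in Z/2^N Z with N = 32 (illustrative)

def matmul_mod(A, B):
    """Matrix product with entries reduced by the group modulus."""
    return [[sum(A[r][t] * B[t][c] for t in range(len(B))) % MOD
             for c in range(len(B[0]))] for r in range(len(A))]

def add_mod(A, B):
    """Element-wise group addition of two matrices."""
    return [[(a + b) % MOD for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# two providers hold additive fragments X1, X2 of X and W1, W2 of W
X1, X2 = [[1, 2]], [[3, 4]]
W1, W2 = [[5], [6]], [[7], [8]]
X, W = add_mod(X1, X2), add_mod(W1, W2)

# (X1 + X2)(W1 + W2) = local terms X1*W1 + X2*W2 plus cross terms X1*W2 + X2*W1
local = add_mod(matmul_mod(X1, W1), matmul_mod(X2, W2))
cross = add_mod(matmul_mod(X1, W2), matmul_mod(X2, W1))
assert add_mod(local, cross) == matmul_mod(X, W)   # == [[132]]
```

The local terms need no interaction; only the two cross terms require the two-party protocol, which returns them to the providers in fragment form.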
It should be noted that any participant may act as a first party, performing collaborative computation with a second party to obtain a fragment of the product of the two parties' private matrices, and may also act as a second party, performing collaborative computation with a first party to obtain such a fragment. For example, in the distributed neural-network training scenario above, any data provider i can act as the first party and collaborate with a second party (e.g., data provider j) to obtain a fragment of X_i*W_j, and can also act as the second party and collaborate with a first party (e.g., data provider j) to obtain a fragment of X_j*W_i. Accordingly, the collaborative computing method performed by the first party and the collaborative computing method performed by the second party provided by the embodiments of this specification can be implemented by running the same protocol (i.e., the protocol includes a first part corresponding to the first party and a second part corresponding to the second party): the computing devices 110 of the respective parties may install the same protocol, and in each run the protocol executes under a particular identity (either first party or second party).
For example, network 130 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near field communication (NFC), an intra-device bus, an intra-device line, a cable connection, and the like, or any combination thereof.
FIG. 2 is an exemplary flow diagram of a collaborative computing method for protecting the data privacy of two parties, according to some embodiments of the present description. The computing device of the first party stores a first privacy matrix P1 and a private key S (in matrix form); a public key matched with the private key S may be made available to the second party; the computing device of the second party stores a second privacy matrix P2. The process 200 may be performed by the computing device of the first party to obtain the first share of the target product P1P2. In some embodiments, flow 200 may be implemented by system 600 shown in FIG. 6. As shown in FIG. 2, the process 200 may include:
Step 210: lift the elements of the first privacy matrix P1 into a target quotient group and amplify their values by a factor of p to obtain a first input mapping matrix M1. In some embodiments, step 210 may be implemented by the first input mapping matrix obtaining module 610.
Similarly, the second party's computing device may lift the elements of the second privacy matrix P2 into the target quotient group and amplify their values by a factor of p to obtain a second input mapping matrix M2.
The elements of P1 and P2 all belong to the same quotient group, denoted the initial quotient group. The target quotient group is larger (in range) than the initial quotient group; that is, "lifting" means using more memory cells to store the same value. It should be understood that the specific range of the target quotient group can be determined according to the amplification factor p: since the values of M1 and M2 are amplified by a factor of p relative to P1 and P2 respectively, and M1*M2 is computed over the target quotient group, the maximum element of the target quotient group should be enlarged by at least a factor of p² relative to the maximum element of the initial quotient group; that is, the target quotient group can be regarded as enlarged (in range) by at least a factor of p² relative to the initial quotient group. For example, before lifting, the elements of P1 and P2 belong to an initial quotient group Z/qZ, and after lifting, M1 and M2 belong to a target quotient group Z/dZ, where d = p²q may be satisfied.
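A minimal sketch of the lifting and amplification of step 210 (the moduli q, p and the sample values are hypothetical, not taken from this specification):

```python
import numpy as np

q, p = 97, 1000                  # hypothetical initial modulus q and amplification factor p
d = p * p * q                    # target quotient group Z/dZ, p^2 times larger in range

P1 = np.array([[3, 95], [40, 7]], dtype=np.int64)   # elements of the initial group Z/qZ

M1 = (P1 * p) % d                # lift into Z/dZ and amplify the values by p

assert np.all(M1 < d)            # lifted elements fit in the target quotient group
assert np.all(M1 // p == P1)     # the original values are recoverable by dividing by p
```

The extra p² headroom in d is what later absorbs the product M1*M2 without wrap-around.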
In step 220, a first small norm matrix R is obtained. In some embodiments, step 220 may be implemented by the first small-norm matrix obtaining module 620.
The norm of the first small norm matrix R is less than a first threshold. In some embodiments, the first threshold may be p, or much less than p, such as 0.1p, 0.01p, or less. In some embodiments, R may be randomly generated.
Step 230, map the first input to matrix M1And performing first operation on the first small norm matrix R and the public key matched with the private key S to obtain a first ciphertext matrix. In some embodiments, step 230 may be implemented by the first ciphertext matrix computation module 630.
The public key can be obtained based on the private key S and an error term (denoted e); like the first small norm matrix R and the second small norm matrix H0 described later, the error term is also a small norm matrix. In some embodiments, the error term e may be randomly generated. In some embodiments, similar to the first small norm matrix R, the norm of e may be less than p. For the specific manner of constructing the public-private key pair, reference may be made to FIG. 4 and its associated description.
The effect of the first operation includes encrypting the first input mapping matrix M1. The matrix R used to encrypt M1, being private data of the first party, is not disclosed to the second party, so the second party cannot learn M1 after obtaining the first ciphertext matrix (and therefore cannot learn the first privacy matrix P1 either). It should be appreciated that since the amplification factor p of the privacy matrix is a public parameter, disclosure of the first input mapping matrix M1 could further disclose the first privacy matrix P1; hence the first input mapping matrix M1 needs to be encrypted.
By the closure of the group operation, the elements of R and of the public key, which jointly participate in the operation with M1, all belong to the target quotient group. Similarly, with reference to the subsequent steps (e.g., steps 240-260), M2, H0, and H1 also belong to the target quotient group.
Step 240, sending the first ciphertext matrix to the second party's computing device, so that the second party's computing device obtains a second ciphertext matrix U. In some embodiments, step 240 may be implemented by first transmitting module 640.
At step 250, a second ciphertext matrix U may be received from a computing device of a second party. In some embodiments, step 250 may be implemented by the first receiving module 650.
The second ciphertext matrix U is the result of a second operation on the first ciphertext matrix, the second input mapping matrix M2, a second small norm matrix H0, a second output mapping matrix H1, and the public key. Similar to the first operation, the effect of the second operation includes encrypting the second input mapping matrix M2. The matrices H0 and H1 used to encrypt M2, being private data of the second party, are not disclosed to the first party, so the first party cannot learn M2 after obtaining the second ciphertext matrix (and therefore cannot learn the second privacy matrix P2 either). It will be appreciated that since the amplification factor p of the privacy matrix is a public parameter, disclosure of the second input mapping matrix M2 could further disclose the second privacy matrix P2; hence the second input mapping matrix M2 needs to be encrypted.
The elements of the second input mapping matrix M2 belong to the target quotient group, and their values are amplified by a factor of p relative to the second privacy matrix P2; the norm of the second small norm matrix H0 is less than a second threshold, and the elements of H0 belong to the target quotient group; the second output mapping matrix H1, stored in the computing device of the second party, belongs to the target quotient group, and its elements still belong to the target quotient group after their values are reduced by a factor of p². The second threshold may be the same as or different from the first threshold.
Step 260: perform a third operation on the second ciphertext matrix U and the private key S to obtain the matrix to be approximated, V. In some embodiments, step 260 may be implemented by the matrix-to-be-approximated calculation module 660.
As described above, the inputs of the first operation include M1, R, and the public key, and the inputs of the second operation include the output of the first operation (i.e., the first ciphertext matrix), M2, H0, H1, and the public key. By designing the first, second, and third operations in a specific combination, it can be ensured that the output V of the third operation is equivalent to the sum of M1M2−H1 and a third small norm matrix. The third small norm matrix can be obtained based on the first small norm matrix R, the second small norm matrix H0, and the error term e; since R, H0, and e are all small norm matrices, the result of operating on R, H0, and e can still be a small norm matrix (i.e., the third small norm matrix).
Step 270, determining a matrix closest to the matrix V to be approximated in the target matrix space as a first output mapping matrix. In some embodiments, step 270 may be implemented by approximation module 670.
The elements of a matrix in the target matrix space belong to the target quotient group, and still belong to the target quotient group after their values are reduced by a factor of p². In other words, relative to the numerical scale of the elements of the target quotient group (the distance between adjacent elements, denoted l0), the numerical scale of the elements of a matrix in the target matrix space (denoted l1) is amplified by a factor of p². Taking the target quotient group to be the integer quotient group Z/dZ (i.e., l0 = 1) as an example, the numerical scale of the elements of a matrix in the target matrix space is p². By constraining the norms of the first small norm matrix R, the second small norm matrix H0, the error term e, and the like, the element values of the third small norm matrix can be made far less than l1 (e.g., less than 0.5, 0.01, or fewer times l1); that is, numerically, the third small norm matrix (such as e^T(R^T·M2+H0) referred to hereinafter) is a negligible perturbation relative to M1M2−H1, so through the approximation of step 270 the first party can obtain M1M2−H1 as the first share of M1M2. Since the first party's local first share of M1M2 and the second party's local second share H1 of M1M2 are both kept private, the data privacy of both parties can be protected.
It should be appreciated that in some embodiments, within the allowable error range, a small number (relative to the matrix size) of elements of the third small norm matrix may be allowed to fall short of the numerical requirement; that is, the approximation result is allowed to deviate from M1M2−H1 on a small number of matrix elements.
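A minimal numerical sketch of the approximation of step 270 under these norm constraints (all values hypothetical): when every element of the perturbation is far less than p², snapping each element of V to the nearest multiple of p² recovers the underlying matrix in the target matrix space exactly.

```python
import numpy as np

p = 1000
l1 = p * p                                    # numerical scale of the target matrix space
M = np.array([[3, 7], [0, 5]]) * l1           # a matrix in the target matrix space
noise = np.array([[40, -13], [2, -9]])        # a "third small norm matrix", |noise| << l1

V = M + noise                                 # the matrix to be approximated

approx = np.round(V / l1).astype(np.int64) * l1   # closest matrix in the target space
assert np.array_equal(approx, M)                  # the perturbation is removed exactly
```

If an element of the perturbation exceeded 0.5·l1, that element alone would round to the wrong multiple, which is exactly the small deviation tolerated above.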
Step 280: reduce the element values of the first output mapping matrix by a factor of p² to obtain a first output matrix as the first share of the target product P1P2. In some embodiments, step 280 may be implemented by the first output matrix obtaining module 680.
Similarly, the second party's computing device may locally reduce the element values of the second output mapping matrix H1 by a factor of p² to obtain a second output matrix as the second share of the target product P1P2.
It should be understood that since the element values of M1M2 are amplified by a factor of p² relative to those of P1P2, the shares of P1P2 can be obtained by reducing the element values of the first and second output mapping matrices by a factor of p². Since the values of the first and second output matrices are reduced by a factor of p², fewer memory cells may be used for storage; thus, in some embodiments, the first and second output matrices may be dropped back into the initial quotient group.
It should be noted that, in this specification, a multiple relationship between the element values of one matrix and those of another specifically means that, for two matrices of the same size, the elements at the same row and column positions satisfy that multiple relationship.
FIG. 3 is an exemplary flow diagram of a collaborative computing method for protecting the data privacy of two parties, according to some embodiments of the present description. The process 300 may be performed by the computing device of the second party to obtain the second share of the target product P1P2. In some embodiments, flow 300 may be implemented by system 700 shown in FIG. 7. As shown in FIG. 3, the process 300 may include:
Step 310: lift the elements of the second privacy matrix P2 into the target quotient group and amplify their values by a factor of p to obtain a second input mapping matrix M2. In some embodiments, step 310 may be implemented by the second input mapping matrix obtaining module 710.
Step 320, obtain a second small norm matrix H0. In some embodiments, step 320 may be implemented by the second small-norm matrix obtaining module 720.
In some embodiments, H0 may be randomly generated.
At step 330, a first ciphertext matrix is received from a computing device of a first party. In some embodiments, step 330 may be implemented by the second receiving module 730.
Step 340: obtain a second output mapping matrix H1 and a second output matrix serving as the second share of the target product P1P2. In some embodiments, step 340 may be implemented by the second obtaining module 740.
The elements of the second output mapping matrix are numerically amplified by a factor of p² relative to the elements of the second output matrix. In some embodiments, since the second output mapping matrix H1 and the second output matrix obtained by reducing its element values by a factor of p² both belong to the target quotient group — i.e., the second output mapping matrix H1 belongs to the larger-scale target matrix space — the second party's computing device may first obtain any matrix in the target matrix space as the second output mapping matrix H1, and then reduce the element values of H1 by a factor of p² to obtain the second output matrix. The second output mapping matrix H1 is stored at the second party's computing device as the second share of M1M2, and the second output matrix is stored at the second party's computing device as the second share of the target product P1P2. In some embodiments, H1 may be randomly generated in the target matrix space.
In some embodiments, the second party's computing device may instead first obtain a second output matrix whose element values belong to the target quotient group, and then amplify its element values by a factor of p² to obtain the second output mapping matrix H1. Of course, provided the element values still belong to the target quotient group after amplification, the resulting second output mapping matrix H1 belongs to the target matrix space. In some embodiments, the second output matrix may be randomly generated.
Step 350: perform a second operation on the first ciphertext matrix, the second input mapping matrix M2, the second small norm matrix H0, the second output mapping matrix H1, and the public key matched with the private key S to obtain a second ciphertext matrix U. In some embodiments, step 350 may be implemented by the second ciphertext matrix calculation module 750.
Step 360: send the second ciphertext matrix U to the first party's computing device, so that the first party's computing device obtains a first output matrix serving as the first share of the target product P1P2. In some embodiments, step 360 may be implemented by the second sending module 760.
For more details of flow 300, reference may be made to flow 200, FIG. 4, and related description thereof.
FIG. 4 illustrates an exemplary flow of the two parties collaboratively computing shares of M1M2 according to some embodiments of the present description. The first input mapping matrix M1 is an n1×m-dimensional matrix, and the second input mapping matrix M2 is an m×n2-dimensional matrix. In FIG. 4, a double-subscripted M denotes a matrix, the two subscripts indicating the matrix scale (i.e., the numbers of rows and columns); the quotient group in parentheses is the target quotient group in which the matrix elements are located (FIG. 4 takes the integer quotient group Z/dZ as an example); ⊕ denotes left-right concatenation of matrices; "small" marks a small norm matrix; and p²M indicates that the matrix is located in the target matrix space. As shown in FIG. 4, the process 400 may include:
at step 410, a public key is generated.
The first party's computing device may generate a private key S, a matrix A, and an error term e, where A may serve as the first part of the public key matched with the private key S, and the error term e is a small norm matrix. The first party's computing device may then compute the second part b of the public key as b = AS + e; the public key (A, b) may be made available to the second party.
In step 420, a first small norm matrix R is obtained.
Step 430, a first ciphertext matrix is computed.
The first party's computing device may calculate RA to obtain X, and calculate Rb + M1 to obtain Y. In some embodiments, as shown in FIG. 4, the first party's computing device may send (X, Y), resulting from concatenating X and Y, to the second party's computing device as the first ciphertext matrix. In some embodiments, the first party's computing device may also send X and Y directly to the second party's computing device as two first ciphertext matrices.
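A sketch of steps 410-430 under assumed small dimensions (the sizes, moduli, and norm bounds are hypothetical; M1 enters Y transposed so that the dimensions agree under the sizes chosen here):

```python
import numpy as np

rng = np.random.default_rng(0)
q, p = 11, 10_000
d = p * p * q                          # target quotient group Z/dZ
n1, m, k = 3, 4, 5                     # hypothetical matrix sizes

M1 = rng.integers(0, q, (n1, m)) * p   # first input mapping matrix (amplified by p)

# step 410: key generation, b = A S + e with a small-norm error term e
S = rng.integers(0, d, (n1, n1))       # private key
A = rng.integers(0, d, (k, n1))        # public key, first part
e = rng.integers(-2, 3, (k, n1))       # small-norm error term
b = (A @ S + e) % d                    # public key, second part

# steps 420-430: small-norm R, then X = R A and Y = R b + M1^T
R = rng.integers(-2, 3, (m, k))
X = (R @ A) % d
Y = (R @ b + M1.T) % d                 # (X, Y) is the first ciphertext

# sanity check: Y - X S recovers M1^T masked only by the small noise R e
assert np.array_equal((Y - X @ S) % d, (R @ e + M1.T) % d)
```

Without S, the pair (X, Y) looks like an LWE-style sample, which is what hides M1 from the second party.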
Step 440: obtain a second small norm matrix H0 and a second output mapping matrix H1.
Step 450, a second ciphertext matrix is computed.
As shown in FIG. 4, upon receiving the first ciphertext matrix, the second party's computing device may compute the second ciphertext matrix U. One construction consistent with the identity verified in step 470 stacks an upper block X^T·M2 + A^T·H0 on a lower block Y^T·M2 + b^T·H0 − H1, i.e., U = (X ⊕ Y)^T·M2 + (A ⊕ b)^T·H0 + [O; −H1], where U denotes the second ciphertext matrix, O denotes a zero matrix, and [O; −H1] stacks O above −H1. It is understood that various equivalent transformations may be made to the specific calculation of U; FIG. 4 gives examples of such equivalent substitutions.
Step 460, calculate the matrix V to be approximated.
Step 470, a first output mapping matrix is obtained by approximation.
When U = (X ⊕ Y)^T·M2 + (A ⊕ b)^T·H0 + [O; −H1] (where O denotes a zero matrix and [O; −H1] stacks O above −H1) and b = AS + e are satisfied, it can be shown that (−S^T, I)U = M1M2 + e^T(R^T·M2 + H0) − H1 holds (I denotes the identity matrix). Here e, R, and H0 are all small norm matrices, and the result of operating on them can still be a small norm matrix. When each element value of e^T(R^T·M2 + H0) is smaller than the numerical scale l1 of the elements of matrices in the target matrix space (e.g., smaller than 0.5, 0.01, or fewer times l1, where l1 = p²·l0), e^T(R^T·M2 + H0) is negligible in value relative to M1M2 − H1; the first party's computing device can then remove e^T(R^T·M2 + H0) by approximating V onto the closest matrix in the target matrix space, and determine the matrix in the target matrix space closest to V (i.e., to the result of the (−S^T, I)·U calculation) as the first output mapping matrix (equal to M1M2 − H1).
In some embodiments, even if it cannot be guaranteed that each element of e^T(R^T·M2 + H0) is smaller than the corresponding numerical scale l1 of the target matrix space, V can still be approximated onto the closest matrix in the target matrix space, as long as the difference between the approximated matrix and M1M2 − H1 is within the range allowed in engineering. In line with most engineering applications, assume the target quotient group is the integer quotient group Z/dZ; under the ideal approximation condition, if each element of e^T(R^T·M2 + H0) is smaller than 0.5p², rounding (i.e., truncating any element portion smaller than 0.5p² and raising any portion larger than 0.5p² to p²) recovers M1M2 − H1. Then, combining the per-element bound of 0.5p² with the matrix scale, the norm requirement of e^T(R^T·M2 + H0) (i.e., being less than a certain threshold) can be derived. In addition, the norm range of M2 can be calculated from the maximum element of the target quotient group and the matrix scale. Further, from the norm requirement of e^T(R^T·M2 + H0) and the norm range of M2, the norm requirements (i.e., being less than certain thresholds) of the small norm matrices e, R, and H0 can be determined. Experiments show that when e, R, and H0 are randomly generated and constrained such that their norms are all less than p, and p is much greater than q (e.g., p ≥ 10q), the probability that an element of e^T(R^T·M2 + H0) exceeds 0.5p² is very low; that is, e^T(R^T·M2 + H0) will contain only a small number (relative to the matrix size) of elements exceeding 0.5p². Therefore, determining the matrix in the target matrix space closest to V as the first output mapping matrix (equal to M1M2 − H1) is feasible.
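The whole of flow 400 can be sketched end to end (all sizes and parameter values are hypothetical; M1 enters Y transposed, and the block construction of U below is one arrangement consistent with the identity (−S^T, I)U = M1M2 + e^T(R^T·M2 + H0) − H1):

```python
import numpy as np

rng = np.random.default_rng(0)
q, p = 11, 10_000                      # hypothetical parameters with p >> q
d = p * p * q                          # target quotient group Z/dZ
n1, m, n2, k = 3, 4, 2, 5              # hypothetical matrix sizes

P1 = rng.integers(0, q, (n1, m))       # first party's privacy matrix
P2 = rng.integers(0, q, (m, n2))       # second party's privacy matrix
M1, M2 = P1 * p, P2 * p                # steps 210 / 310: lift and amplify by p

# step 410: key generation, b = A S + e
S = rng.integers(0, d, (n1, n1))
A = rng.integers(0, d, (k, n1))
e = rng.integers(-2, 3, (k, n1))       # small-norm error term
b = (A @ S + e) % d

# steps 420-430: first ciphertext (X, Y)
R = rng.integers(-2, 3, (m, k))        # small-norm R
X, Y = (R @ A) % d, (R @ b + M1.T) % d

# steps 440-450: second ciphertext U (upper and lower blocks)
H0 = rng.integers(-2, 3, (k, n2))              # small-norm mask
H1 = rng.integers(0, q, (n1, n2)) * p * p      # random matrix in the target matrix space
U_top = (X.T @ M2 + A.T @ H0) % d
U_bot = (Y.T @ M2 + b.T @ H0 - H1) % d

# step 460: V = (-S^T, I) U = M1 M2 + e^T (R^T M2 + H0) - H1  (mod d)
V = (U_bot - S.T @ U_top) % d
assert np.array_equal(V, (M1 @ M2 + e.T @ (R.T @ M2 + H0) - H1) % d)

# steps 470 / 280: approximate onto the target matrix space, then descale by p^2
V_round = (np.round(V / (p * p)).astype(np.int64) * (p * p)) % d
share1 = V_round // (p * p)            # first party's share of P1 P2
share2 = H1 // (p * p)                 # second party's share of P1 P2
assert np.array_equal((share1 + share2) % q, (P1 @ P2) % q)
```

The chosen norm bounds keep every element of e^T(R^T·M2 + H0) well below 0.5p², so the rounding step always lands on the correct multiple of p².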
It can be understood that the threshold corresponding to each small norm matrix participating in the operation may be determined based on p: the smaller the threshold, the higher the accuracy of the calculation result, but the worse the data security may be; conversely, the larger the threshold, the lower the accuracy of the calculation result, but the better the data security. In practice, the threshold corresponding to each small norm matrix can be set as required to meet practical needs.
As shown in FIG. 4, the matrices exchanged between the computing devices of the first and second parties include the first ciphertext matrix and the second ciphertext matrix; the first ciphertext matrix (the concatenation of X and Y) contains 2mn1 matrix elements, and the second ciphertext matrix is a 2n1×n2-dimensional matrix, so transmitting both requires 2mn1 + 2n1n2 matrix elements in total. Taking the target quotient group to be Z/dZ and binary numerical storage as an example, the two parties need a total of (2mn1 + 2n1n2)·log2(d) bits to transmit the first and second ciphertext matrices.
Fig. 5 is a schematic diagram of a matrix multiplication protocol involving a third party server according to some embodiments of the present description.
As shown in FIG. 5, the first party's computing device stores a first privacy matrix a, and the second party's computing device stores a second privacy matrix b. By running the matrix multiplication protocol, the first party's computing device obtains a first share c0 of ab, and the second party's computing device obtains a second share c1 of ab. The detailed interaction process is described below:
The third-party server randomly generates a first random matrix u to be sent to the first party's computing device and a second random matrix v to be sent to the second party's computing device. The third-party server calculates uv and splits uv (specifically, splitting each matrix element via group addition) into a first share z0 to be sent to the first party's computing device and a second share z1 to be sent to the second party's computing device; that is, u, v, z0, z1 satisfy uv = z0 + z1.
The third-party server sends the first random matrix u and the first share z0 to the first party's computing device, and sends the second random matrix v and the second share z1 to the second party's computing device.
The first party's computing device calculates a-u (denoted as e) and sends e to the second party's computing device. The second party's computing device calculates b-v (denoted as f) and sends f to the first party's computing device.
The first party's computing device calculates uf + z0 as its first share c0 of ab. The second party's computing device calculates eb + z1 as its second share c1 of ab. It can be verified that c0 + c1 = uf + eb + z0 + z1 = uf + eb + uv = u(b − v) + (a − u)b + uv = ub − uv + ab − ub + uv = ab, i.e., c0 + c1 = ab.
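The interaction above can be checked with a small sketch (the modulus and matrix sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 97                                     # hypothetical modulus for Z/qZ
n1, m, n2 = 2, 3, 2

a = rng.integers(0, q, (n1, m))            # first party's private matrix
b = rng.integers(0, q, (m, n2))            # second party's private matrix

# third-party server: random u, v and an additive split of uv
u = rng.integers(0, q, (n1, m))
v = rng.integers(0, q, (m, n2))
z0 = rng.integers(0, q, (n1, n2))
z1 = (u @ v - z0) % q                      # z0 + z1 = uv (mod q)

e = (a - u) % q                            # first party reveals a - u
f = (b - v) % q                            # second party reveals b - v

c0 = (u @ f + z0) % q                      # first party's share of ab
c1 = (e @ b + z1) % q                      # second party's share of ab
assert np.array_equal((c0 + c1) % q, (a @ b) % q)
```

Since u and v are uniformly random, the revealed e and f leak nothing about a and b on their own.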
As shown in FIG. 5, assume a is an n1×m-dimensional matrix and b is an m×n2-dimensional matrix; then u and e are n1×m-dimensional matrices, v and f are m×n2-dimensional matrices, and z0 and z1 are n1×n2-dimensional matrices. The matrices exchanged between the third-party server and the computing devices of the first/second parties, and between the computing devices of the first and second parties, include u, v, z0, z1, e, and f; the parties need to transmit 2mn1 + 2mn2 + 2n1n2 matrix elements in total. Taking the quotient group in which the matrix elements are located to be Z/qZ (i.e., the initial quotient group) and binary numerical storage as an example, the parties need a total of (2mn1 + 2mn2 + 2n1n2)·log2(q) bits to transmit these matrices.
It can be shown that when (2mn1 + 2n1n2)·log2(d) < (2mn1 + 2mn2 + 2n1n2)·log2(q) — for example, when m is much larger than n1 and n2 — the participation of the third-party server significantly increases the amount of data transmitted (reflected in the 2mn2 term; in general, d amplifies q by a factor of p², so log2(q) and log2(d) remain of the same order). For example, in a distributed neural network training scenario, the feature dimension of the neural network's input layer (the number of input-layer nodes, i.e., m) can reach tens of millions, while the number of nodes of the first hidden layer (i.e., n2) can reach thousands, and the number of samples in a single batch (i.e., n1) is typically tens (when samples are not trained in batches, n1 = 1).
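A quick back-of-the-envelope comparison of the two transmission costs under scenario sizes like those just mentioned (the moduli are hypothetical and chosen as powers of two for convenience):

```python
import math

# hypothetical scenario sizes: m input features, n2 hidden nodes, n1 samples per batch
m, n2, n1 = 10_000_000, 1000, 10
q, p = 2**16, 2**24
d = p * p * q                        # log2(d) = log2(q) + 2*log2(p) = 64

bits_this = (2*m*n1 + 2*n1*n2) * math.log2(d)           # ciphertext-based protocol
bits_3rd = (2*m*n1 + 2*m*n2 + 2*n1*n2) * math.log2(q)   # third-party-assisted protocol

assert bits_3rd > bits_this          # the 2*m*n2 term dominates when m >> n1, n2
```

With these numbers the third-party protocol moves roughly 25 times more bits, driven almost entirely by the 2·m·n2 term.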
It should be noted that the above description of the flow is for illustration and description only and does not limit the scope of the application of the present specification. Various modifications and alterations to the flow may occur to those skilled in the art, given the benefit of this description. However, such modifications and variations are intended to be within the scope of the present description. For example, the first party's computing device may or may not stitch X and Y.
FIG. 6 is a block diagram of a collaborative computing system that protects the data privacy of two parties, according to some embodiments of the present description. The system 600 may be implemented on a computing device of the first party. As shown in FIG. 6, the system 600 may include a first input mapping matrix obtaining module 610, a first small norm matrix obtaining module 620, a first ciphertext matrix calculation module 630, a first sending module 640, a first receiving module 650, a matrix-to-be-approximated calculation module 660, an approximation module 670, and a first output matrix obtaining module 680.
The first input mapping matrix obtaining module 610 may be configured to lift the elements of the first privacy matrix P1 into a target quotient group and amplify their values by a factor of p to obtain a first input mapping matrix M1.
The first small-norm matrix obtaining module 620 may be configured to obtain a first small-norm matrix R.
The first ciphertext matrix calculation module 630 may be configured to perform a first operation on the first input mapping matrix M1, the first small norm matrix R, and a public key matched with the private key S to obtain a first ciphertext matrix.
The first sending module 640 may be configured to send the first ciphertext matrix to the computing device of the second party, so that the computing device of the second party obtains the second ciphertext matrix U.
The first receiving module 650 may be used to receive a second ciphertext matrix U from a computing device of a second party.
The matrix to be approximated calculation module 660 may be configured to perform a third operation on the second ciphertext matrix U and the private key S to obtain a matrix to be approximated V.
The approximation module 670 may be configured to determine a matrix in the target matrix space that is closest to the matrix to be approximated V as the first output mapping matrix.
The first output matrix obtaining module 680 may be configured to reduce the element values of the first output mapping matrix by a factor of p² to obtain a first output matrix as the first share of the target product P1P2.
For more details of the system 600 and its modules, reference may be made to fig. 2 and its related description, which are not repeated here.
FIG. 7 is a block diagram of a collaborative computing system that protects the data privacy of two parties, according to some embodiments of the present description. The system 700 may be implemented on a computing device of the second party. As shown in FIG. 7, the system 700 may include a second input mapping matrix obtaining module 710, a second small norm matrix obtaining module 720, a second receiving module 730, a second obtaining module 740, a second ciphertext matrix calculation module 750, and a second sending module 760.
The second input mapping matrix obtaining module 710 may be configured to lift the elements of the second privacy matrix P2 into the target quotient group and amplify their values by a factor of p to obtain a second input mapping matrix M2.
The second small-norm matrix obtaining module 720 may be configured to obtain a second small-norm matrix H0
The second receiving module 730 may be used to receive the first ciphertext matrix from the computing device of the first party.
The second obtaining module 740 may be configured to obtain a second output mapping matrix H1 and a second output matrix serving as the second share of the target product P1P2.
The second ciphertext matrix calculation module 750 may be configured to perform a second operation on the first ciphertext matrix, the second input mapping matrix M2, the second small norm matrix H0, the second output mapping matrix H1, and the public key matched with the private key S to obtain a second ciphertext matrix U.
The second sending module 760 may be configured to send the second ciphertext matrix U to the first party's computing device, so that the first party's computing device can obtain a first output matrix serving as the first share of the target product P1P2.
For more details of the system 700 and its modules, reference may be made to fig. 3 and its related description, which are not repeated here.
It should be understood that the systems shown in fig. 6, 7 and their modules may be implemented in various ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system and its modules in this specification may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system and its modules is for convenience only and should not limit the present disclosure to the illustrated embodiments. It will be appreciated by those skilled in the art that, given the teachings of the system, any combination of modules or sub-system configurations may be used to connect to other modules without departing from such teachings. For example, in some embodiments, the matrix to be approximated calculation module 660 and the approximation module 670 shown in fig. 6 may be different modules in one system, or may be one module to implement the functions of the two modules. For another example, in some embodiments, the second ciphertext matrix calculation module 750 and the second sending module 760 shown in fig. 7 may be two modules, or may be combined into one module. Such variations are within the scope of the present disclosure.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) a method is provided for computing the product of two parties' privacy matrices in a secret-sharing manner, effectively protecting both parties' data privacy; (2) the data transmission amount can be effectively reduced without the assistance of a third party. It is to be noted that different embodiments may produce different advantages; in different embodiments, any one or a combination of the above advantages, or any other advantage, may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various modifications, improvements, and adaptations of the embodiments described herein may occur to those skilled in the art, although not expressly stated here. Such modifications, improvements, and adaptations are suggested by this specification and thus fall within the spirit and scope of its exemplary embodiments.
Also, this specification uses specific words to describe its embodiments. Reference to "one embodiment", "an embodiment", and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included in it. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment", "one embodiment", or "an alternative embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the embodiments of the present description may be illustrated and described in terms of several patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the embodiments may be carried out entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software, any of which may be referred to generally as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the embodiments of the present specification may take the form of a computer program product embodied in one or more computer-readable media and containing computer-readable program code.
A computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, in baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic or optical forms, or any suitable combination thereof. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code on a computer storage medium may be propagated over any suitable medium, including radio, electrical cable, fiber-optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of portions of the embodiments of the present description may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or Python; a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; a dynamic programming language such as Python, Ruby, or Groovy; or other programming languages.
In addition, unless explicitly stated in the claims, the order of processing elements and sequences, the use of numbers and letters, and the use of other names in the embodiments of the present specification are not intended to limit the order of the processes and methods described herein. Although the foregoing disclosure discusses, by way of example, some embodiments currently considered useful, it is to be understood that such detail is for illustration only and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more embodiments of the invention. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited in this specification is hereby incorporated by reference in its entirety, except for any application history document that is inconsistent with or conflicts with the contents of this specification, and any document that would limit the broadest scope of the claims now or later associated with this specification. It should be noted that, if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the contents of this specification, the descriptions, definitions, and/or use of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are possible within the scope of the embodiments of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (14)

1. A collaborative computing method for protecting data privacy of two parties is disclosed, wherein a first private matrix and a private key are stored in computing equipment of a first party, a second private matrix is stored in computing equipment of a second party, elements in the first private matrix and elements in the second private matrix belong to an initial quotient group, elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group; the method is performed by a computing device of the first party, comprising:
lifting the elements of the first private matrix to the target quotient group and amplifying their values by a factor of p to obtain a first input mapping matrix;
obtaining a first small norm matrix, wherein the norm of the first small norm matrix is smaller than a first threshold value, and elements of the first small norm matrix belong to a target quotient group;
performing a first operation on the first input mapping matrix, the first small norm matrix and a public key matched with the private key to obtain a first ciphertext matrix;
sending the first ciphertext matrix to the computing device of the second party, so that the computing device of the second party performs a second operation on the first ciphertext matrix, a second input mapping matrix, a second small-norm matrix, a second output mapping matrix, and the public key to obtain a second ciphertext matrix; wherein the elements of the second input mapping matrix belong to the target quotient group and their values are amplified by a factor of p relative to the values of the elements of the second private matrix, the norm of the second small-norm matrix is smaller than a second threshold value and its elements belong to the target quotient group, the second output mapping matrix is stored in the computing device of the second party, and the elements of the second output mapping matrix belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p²;
receiving the second ciphertext matrix from the computing device of the second party;
performing a third operation on the second ciphertext matrix and the private key to obtain a matrix to be approximated;
determining a matrix in a target matrix space that is closest to the matrix to be approximated as a first output mapping matrix; wherein elements of any matrix in the target matrix space belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p²;
reducing the element values of the first output mapping matrix by a factor of p² to obtain a first output matrix, and taking the first output matrix as a first fragment of a target product, wherein the target product is the product of the first private matrix and the second private matrix.
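The final two steps of claim 1 work because, as later claims make explicit, the decrypted matrix equals p² times the desired value plus a noise term whose magnitude stays below p²/2. A minimal numeric sketch of that recovery step follows; the values of p and q are illustrative assumptions, not parameters taken from this disclosure:

```python
# Toy sketch of the "closest matrix in the target matrix space" step of
# claim 1: the target space consists of multiples of p^2 in Z/(p^2 q)Z,
# so approximation is rounding to the nearest multiple of p^2, and the
# final step divides by p^2. Parameters are illustrative assumptions.
q, p = 16, 256          # assumed toy moduli; p > q as in claim 3
P = p * p * q           # target quotient group Z/(p^2 q)Z

def approximate(v):
    """Round v in Z/P to the nearest multiple of p^2, then reduce by p^2."""
    return ((v % P) + (p * p) // 2) // (p * p) % q

# Any hidden value m in Z/qZ survives noise of magnitude below p^2 / 2.
m = 11
for noise in (-15000, -1, 0, 1, 15000):
    assert approximate(p * p * m + noise) == m
```

Wrap-around at the group order is handled by the final reduction modulo q, so negative noise on p²·0 still rounds back to 0.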
2. The method of claim 1, wherein the public key is derived based on the private key and an error term, a norm of the error term being less than a third threshold.
3. The method of claim 2, wherein the initial quotient group is the integer quotient group Z/qZ and the target quotient group is the integer quotient group Z/p²qZ, wherein p is greater than q; and the norms of the first small-norm matrix, the second small-norm matrix, and the error term are all less than p.
4. The method of claim 1, wherein performing the first operation on the first input mapping matrix, the first small-norm matrix, and the public key matched with the private key comprises:
calculating X = RA;
calculating Y = Rb + M1;
wherein A represents a first portion of the public key, b represents a second portion of the public key, b = AS + e, S represents the private key, e represents an error term, R represents the first small-norm matrix, M1 represents the first input mapping matrix, and (X, Y) represents the first ciphertext matrix.
5. The method of claim 4, wherein the second ciphertext matrix U is computed according to a formula [rendered as an image in the original publication], wherein U represents the second ciphertext matrix, M2 represents the second input mapping matrix, H0 represents the second small-norm matrix, O represents a zero matrix, and H1 represents the second output mapping matrix;
performing the third operation on the second ciphertext matrix and the private key comprises:
calculating V = (-S^T, I)U, wherein V represents the matrix to be approximated, and I represents an identity matrix.
6. A collaborative computing system for protecting data privacy of two parties is disclosed, wherein a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, elements in the first private matrix and elements in the second private matrix belong to an initial quotient group, elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group; the system executing on a computing device of the first party, comprising:
a first input mapping matrix obtaining module, configured to lift the elements of the first private matrix to the target quotient group and amplify their values by a factor of p to obtain a first input mapping matrix;
a first small norm matrix obtaining module, configured to obtain a first small norm matrix, where a norm of the first small norm matrix is smaller than a first threshold and an element of the first small norm matrix belongs to a target quotient group;
the first ciphertext matrix calculation module is used for performing first operation on the first input mapping matrix, the first small norm matrix and a public key matched with the private key to obtain a first ciphertext matrix;
a first sending module, configured to send the first ciphertext matrix to the computing device of the second party, so that the computing device of the second party performs a second operation on the first ciphertext matrix, a second input mapping matrix, a second small-norm matrix, a second output mapping matrix, and the public key to obtain a second ciphertext matrix; wherein the elements of the second input mapping matrix belong to the target quotient group and their values are amplified by a factor of p relative to the values of the elements of the second private matrix, the norm of the second small-norm matrix is smaller than a second threshold value and its elements belong to the target quotient group, the second output mapping matrix is stored in the computing device of the second party, and the elements of the second output mapping matrix belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p²;
a first receiving module to receive the second ciphertext matrix from the computing device of the second party;
the matrix to be approximated calculation module is used for carrying out third operation on the second ciphertext matrix and the private key to obtain a matrix to be approximated;
an approximation module, configured to determine a matrix in a target matrix space that is closest to the matrix to be approximated as a first output mapping matrix; wherein elements of any matrix in the target matrix space belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p²;
a first output matrix obtaining module, configured to reduce the element values of the first output mapping matrix by a factor of p² to obtain a first output matrix, which serves as a first fragment of a target product, wherein the target product is the product of the first private matrix and the second private matrix.
7. A collaborative computing apparatus that protects privacy of data on both sides, comprising a processor and a storage device to store instructions that, when executed by the processor, implement the method of any of claims 1-6.
8. A collaborative computing method for protecting data privacy of two parties, wherein a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, elements of the first private matrix and elements of the second private matrix belong to an initial quotient group, elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group; the method is performed by the computing device of the second party, and comprises:
lifting the elements of the second private matrix to the target quotient group and amplifying their values by a factor of p to obtain a second input mapping matrix;
obtaining a second small norm matrix, wherein the norm of the second small norm matrix is smaller than a second threshold value, and elements of the second small norm matrix belong to a target quotient group;
receiving a first ciphertext matrix from a computing device of the first party;
obtaining a second output mapping matrix and a second output matrix serving as a second slice of a target product; wherein the values of the elements of the second output mapping matrix are amplified by a factor of p² relative to the values of the elements of the second output matrix, and the elements of the second output mapping matrix and the elements of the second output matrix belong to the target quotient group;
performing a second operation on the first ciphertext matrix, a second input mapping matrix, a second small norm matrix, a second output mapping matrix and a public key matched with the private key to obtain a second ciphertext matrix;
sending the second ciphertext matrix to the computing device of the first party to enable the computing device of the first party to obtain a first output matrix of a first segment as the target product.
9. The method of claim 8, wherein the public key is derived based on the private key and an error term, a norm of the error term being less than a third threshold.
10. The method of claim 8, wherein the norms of the second small-norm matrix and the error term are less than p.
11. The method of claim 8, wherein the first ciphertext matrix (X, Y) = (RA, Rb + M1), wherein (X, Y) represents the first ciphertext matrix, A represents a first portion of the public key, b represents a second portion of the public key, b = AS + e, S represents the private key, e represents an error term, R represents a first small-norm matrix whose norm is less than a first threshold, and M1 represents a first input mapping matrix whose element values are amplified by a factor of p relative to the element values of the first private matrix;
performing a second operation on the first ciphertext matrix, the second input mapping matrix, the second small-norm matrix, the second output mapping matrix, and the public key matched with the private key, including:
performing the calculation according to a formula [rendered as an image in the original publication], wherein U represents the second ciphertext matrix, M2 represents the second input mapping matrix, H0 represents the second small-norm matrix, O represents a zero matrix, and H1 represents the second output matrix serving as the second slice of the target product.
12. The method of claim 8, wherein the obtaining a second output mapping matrix and a second output matrix serving as a second slice of a target product comprises: obtaining a matrix in the target matrix space as the second output mapping matrix, wherein elements of any matrix in the target matrix space belong to the target quotient group and still belong to the target quotient group after their values are reduced by a factor of p²;
reducing the element values of the second output mapping matrix by a factor of p² to obtain the second output matrix.
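The relation in claim 12 between the second output mapping matrix and the second output matrix amounts to lifting a matrix over Z/qZ into the target space by scaling with p²; a small sketch with assumed toy parameters and an assumed 2×2 size:

```python
import secrets

# Assumed toy parameters: initial quotient group Z/qZ, target Z/(p^2 q)Z.
q, p = 16, 256
P = p * p * q

# Sample a 2x2 second output matrix over Z/qZ (the second party's share)
# and lift it into the target matrix space by scaling every element by p^2.
H1 = [[secrets.randbelow(q) for _ in range(2)] for _ in range(2)]
H1_map = [[(p * p * h) % P for h in row] for row in H1]  # second output mapping matrix

# Reducing the mapping matrix's elements by a factor of p^2 recovers the
# second output matrix exactly, as claim 12 requires.
H1_back = [[h // (p * p) for h in row] for row in H1_map]
assert H1_back == H1
```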
13. A collaborative computing system for protecting data privacy of two parties, wherein a first private matrix and a private key are stored in a computing device of a first party, a second private matrix is stored in a computing device of a second party, elements of the first private matrix and elements of the second private matrix belong to an initial quotient group, elements of the private key belong to a target quotient group, and the target quotient group is larger than the initial quotient group; the system is implemented on the computing device of the second party and comprises:
a second input mapping matrix obtaining module, configured to lift the elements of the second private matrix to the target quotient group and amplify their values by a factor of p to obtain a second input mapping matrix;
a second small-norm matrix obtaining module, configured to obtain a second small-norm matrix, where a norm of the second small-norm matrix is smaller than a second threshold and an element of the second small-norm matrix belongs to a target quotient group;
a second receiving module to receive a first ciphertext matrix from a computing device of the first party;
a second obtaining module, configured to obtain a second output mapping matrix and a second output matrix serving as a second slice of a target product; wherein the values of the elements of the second output mapping matrix are amplified by a factor of p² relative to the values of the elements of the second output matrix, and the elements of the second output mapping matrix and the elements of the second output matrix belong to the target quotient group;
the second ciphertext matrix calculation module is used for performing second operation on the first ciphertext matrix, the second input mapping matrix, the second small norm matrix, the second output mapping matrix and the public key matched with the private key to obtain a second ciphertext matrix;
a second sending module, configured to send the second ciphertext matrix to the computing device of the first party, so that the computing device of the first party obtains a first output matrix of the first segment as the target product.
14. A collaborative computing apparatus that protects privacy of data on both sides, comprising a processor and a storage device for storing instructions that, when executed by the processor, implement the method of any of claims 8-12.
CN202010587170.XA 2020-06-24 2020-06-24 Collaborative computing method and system for protecting data privacy of two parties Active CN111475854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587170.XA CN111475854B (en) 2020-06-24 2020-06-24 Collaborative computing method and system for protecting data privacy of two parties


Publications (2)

Publication Number Publication Date
CN111475854A true CN111475854A (en) 2020-07-31
CN111475854B CN111475854B (en) 2020-10-20

Family

ID=71765294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587170.XA Active CN111475854B (en) 2020-06-24 2020-06-24 Collaborative computing method and system for protecting data privacy of two parties

Country Status (1)

Country Link
CN (1) CN111475854B (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103095671A (en) * 2011-07-07 2013-05-08 米特尔网络公司 Collaboration privacy
CN105007270A (en) * 2015-07-13 2015-10-28 西安理工大学 Attribute-based encryption method for lattice multi-authority key strategy
CN107317666A (en) * 2017-05-25 2017-11-03 南京邮电大学 A kind of parallel full homomorphism encipher-decipher method for supporting floating-point operation
CN108055118A (en) * 2017-12-11 2018-05-18 东北大学 A kind of diagram data intersection computational methods of secret protection
CN108280366A (en) * 2018-01-17 2018-07-13 上海理工大学 A kind of batch linear query method based on difference privacy
CN108985929A (en) * 2018-06-11 2018-12-11 阿里巴巴集团控股有限公司 Training method, business datum classification processing method and device, electronic equipment
CN109426861A (en) * 2017-08-16 2019-03-05 阿里巴巴集团控股有限公司 Data encryption, machine learning model training method, device and electronic equipment
CN110210248A (en) * 2019-06-13 2019-09-06 重庆邮电大学 A kind of network structure towards secret protection goes anonymization systems and method
CN111162896A (en) * 2020-04-01 2020-05-15 支付宝(杭州)信息技术有限公司 Method and device for data processing by combining two parties
CN111177790A (en) * 2020-04-10 2020-05-19 支付宝(杭州)信息技术有限公司 Collaborative computing method, system and device for protecting data privacy of two parties
CN111178549A (en) * 2020-04-10 2020-05-19 支付宝(杭州)信息技术有限公司 Method and device for protecting business prediction model of data privacy joint training by two parties
CN111241570A (en) * 2020-04-24 2020-06-05 支付宝(杭州)信息技术有限公司 Method and device for protecting business prediction model of data privacy joint training by two parties


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859035B (en) * 2020-08-12 2022-02-18 华控清交信息科技(北京)有限公司 Data processing method and device
CN111859035A (en) * 2020-08-12 2020-10-30 华控清交信息科技(北京)有限公司 Data processing method and device
CN111723404A (en) * 2020-08-21 2020-09-29 支付宝(杭州)信息技术有限公司 Method and device for jointly training business model
CN112800466A (en) * 2021-02-10 2021-05-14 支付宝(杭州)信息技术有限公司 Data processing method and device based on privacy protection and server
CN112800466B (en) * 2021-02-10 2022-04-22 支付宝(杭州)信息技术有限公司 Data processing method and device based on privacy protection and server
CN112561085A (en) * 2021-02-20 2021-03-26 支付宝(杭州)信息技术有限公司 Multi-classification model training method and system based on multi-party safety calculation
CN113094739A (en) * 2021-03-05 2021-07-09 支付宝(杭州)信息技术有限公司 Data processing method and device based on privacy protection and server
CN113094739B (en) * 2021-03-05 2022-04-22 支付宝(杭州)信息技术有限公司 Data processing method and device based on privacy protection and server
CN112685788A (en) * 2021-03-08 2021-04-20 支付宝(杭州)信息技术有限公司 Data processing method and device
CN113094763A (en) * 2021-04-12 2021-07-09 支付宝(杭州)信息技术有限公司 Selection problem processing method and system for protecting data privacy
CN113094763B (en) * 2021-04-12 2022-03-29 支付宝(杭州)信息技术有限公司 Selection problem processing method and system for protecting data privacy
CN113158254A (en) * 2021-05-18 2021-07-23 支付宝(杭州)信息技术有限公司 Selection problem processing method and system for protecting data privacy
CN113312641A (en) * 2021-06-02 2021-08-27 杭州趣链科技有限公司 Multipoint and multiparty data interaction method, system, electronic device and storage medium
CN114021198A (en) * 2021-12-29 2022-02-08 支付宝(杭州)信息技术有限公司 Method and device for determining common data for protecting data privacy

Also Published As

Publication number Publication date
CN111475854B (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN111475854B (en) Collaborative computing method and system for protecting data privacy of two parties
CN111177790B (en) Collaborative computing method, system and device for protecting data privacy of two parties
CN111512589B (en) Method for fast secure multiparty inner product with SPDZ
CN112988237B (en) Paillier decryption system, chip and method
RU2534944C2 (en) Method for secure communication in network, communication device, network and computer programme therefor
CN111539041B (en) Safety selection method and system
CN113158239B (en) Selection problem processing method for protecting data privacy
JP6973868B2 (en) Secret calculation methods, devices, and programs
Seo et al. Efficient arithmetic on ARM‐NEON and its application for high‐speed RSA implementation
CN114021734B (en) Parameter calculation device, system and method for federal learning and privacy calculation
JP7259876B2 (en) Information processing device, secure calculation method and program
CN113591113B (en) Privacy calculation method, device and system and electronic equipment
US11902432B2 (en) System and method to optimize generation of coprime numbers in cryptographic applications
CN114491629A (en) Privacy-protecting graph neural network training method and system
CN113761469B (en) Highest bit carry calculation method for protecting data privacy
US11101981B2 (en) Generating a pseudorandom number based on a portion of shares used in a cryptographic operation
CN111712816B (en) Using cryptographic masking for efficient use of Montgomery multiplications
EP3379408A1 (en) Updatable random functions
Wu et al. On the improvement of wiener attack on rsa with small private exponent
CN116032639A (en) Message pushing method and device based on privacy calculation
CN113158254B (en) Selection problem processing method and system for protecting data privacy
CN112989421A (en) Method and system for processing safety selection problem
Chung et al. Encoding of rational numbers and their homomorphic computations for FHE-based applications
Saha et al. Outsourcing private equality tests to the cloud
CN112990260A (en) Model evaluation method and system based on multi-party security calculation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40034544

Country of ref document: HK